diff --git a/.markdownlint.yml b/.markdownlint.yml index 3aed93b6..251c2455 100644 --- a/.markdownlint.yml +++ b/.markdownlint.yml @@ -16,3 +16,6 @@ MD033: false # MD034/no-bare-urls - Bare URL MD034: false + +# MD036/no-emphasis-as-heading/no-emphasis-as-header +MD036: false diff --git a/delegates.txt b/delegates.txt index 3595b3fe..75a6d9f1 100644 --- a/delegates.txt +++ b/delegates.txt @@ -14,6 +14,7 @@ Alessia Bellisario (AMB) Alex Rattray (ARY) Alex Russell (AR) Alex Vincent (AVT) +Alexander Akait (AAT) Alexey Shvayka (ASH) Aliaksander Palpko (APO) Alissa Renz (ARZ) @@ -51,6 +52,7 @@ Aysegul Yonet (AYS) Ben Allen (BAN) Ben Coe (BCE) Ben Lichtman (BLN) +Ben Lickly (BLY) Ben Newman (BN) Ben Smith (BS) Ben Titzer (BTR) @@ -61,6 +63,7 @@ Bert Belder (BBR) Bill Ticehurst (BTT) Bob Myers (RTM) Boris Zbarsky (BZ) +Boshen Chen (BCN) Brad Decker (BDR) Brad Green (BG) Brad Nelson (BNN) @@ -89,6 +92,7 @@ Christian Ulbrich (CHU) Christian Wirth (CWH) Christoph Nakazawa (CNA) Christopher Blappert (CBT) +Christopher Hiller (CHR) Chyi Pin Lim (LCP) Clark Sampson (CSN) Claude Pache (CPE) @@ -110,6 +114,7 @@ Dan Minor (DLM) Dan Stefan (DAS) Daniel Clifford (DCD) Daniel Ehrenberg (DE) +Daniel Griesser (DGR) Daniel Rosenwasser (DRR) Daniel Veditz (DVE) Daniele Bonetta (DBA) @@ -139,6 +144,7 @@ Edgar Barragan (EDB) Edward Yang (EY) Eemeli Aro (EAO) Eli Grey (EGY) +Elliot Goodrich (EGH) Emily Huynh (EHH) Eric Faust (EFT) Eric Ferraiuolo (EF) @@ -149,6 +155,7 @@ Erica Pramer (EPR) Erik Arvidsson (EA) Erik Marks (REK) Ethan Arrowood (EAD) +Even Stensberg (ESG) Fabio Rocha (FRA) Federico Bucchi (FED) Felipe Balbontín (FBN) @@ -175,6 +182,7 @@ Gus Caplan (GCL) Guy Bedford (GB) He Zhou (HZO) Hemanth HM (HHM) +Henri Sivonen (HJS) Henry Zhu (HZU) Holger Benl (HBL) Hubert Manilla (HBM) @@ -234,6 +242,7 @@ John McCutchan (JMC) John Neumann (JN) John Pampuch (JP) John-David Dalton (JDD) +Jonas Haukenes (JHS) Jonas Kruckenberg (JKR) Jonathan Dallas (JDS) Jonathan Keslin (JKN) @@ -246,6 +255,7 @@ Jorge Lopez (JEL) Jory Burson (JBN) Jose David Rodrigues Veloso (JVO) Josh Blaney (JPB) +Josh Goldberg (JKG) Joshua Peek (JPK) Joshua S. 
Choi (JSC) Jovonni Smith-Martinez (JSM) @@ -298,10 +308,12 @@ Luigi Liquori (LLI) Luis Fernando Pardo Sixtos (LFP) Lukas Stadler (LSR) Luke Hoban (LH) +Luoming Zhang (LZG) Lyza Gardner (LGR) Maël Nison (MNN) Maggie Pint (MPT) Manuel Jasso (MJN) +Manuel Serrano (MSR) Marco Ippolito (MIO) Mariko Kosaka (MKA) Marja Hölttä (MHA) @@ -328,6 +340,7 @@ Michael Hablich (MHH) Michael Saboff (MLS) Michael Z Goddard (MZG) Michal Hollman (MHN) +Michal Mocny (MNY) Michele Riva (MRA) Mike Murry (MMY) Mike Pennisi (MP) @@ -362,6 +375,7 @@ Noah Tye (NTE) Norbert Lindenberg (NL) Oliver Hunt (OH) Oliver Medhurst (OMT) +Olivier Flückiger (OFR) Pablo Gorostiaga Belio (PGO) Paolo Severini (PSI) Patrick Soquet (PST) @@ -399,6 +413,7 @@ Rick Hudson (RH) Rick Markins (RMS) Rick Waldron (RW) Riki Khorana (RKA) +Rishipal Singh Bhatia (RBA) Rob Palmer (RPR) Robert Pamely (RPY) Robin Morisset (RMT) @@ -408,6 +423,7 @@ Romulo Cintra (RCA) Ron Buckton (RBN) Rongjian Zhang (ZRJ) Ross Kirsling (RKG) +Ruben Bridgewater (RBR) Ryan Hunt (RLH) Ryuichi Hayashida (RHA) Saam Barati (SBI) @@ -450,9 +466,11 @@ Sri Pillalamarri (SPI) Staffan Eketorp (SEP) Staś Małolepszy (STM) Stefan Penner (SP) +Stephen Hicks (SHS) Stephen Murphy (SMY) Steve Faulkner (SFR) Steven Loomis (SRL) +Steven Salat (STY) Subo Zheng (SZH) Sukyoung Ryu (SRU) Sulekha Kulkarni (SKI) @@ -524,6 +542,7 @@ YuBei Li (YLI) Yulia Startsev (YSV) Yusuke Suzuki (YSZ) Zalim Bashorov (ZBV) +Zbigniew Tenerowicz (ZTZ) Zeimin Lei (LZM) Zev Solomon (ZSN) Zeyu Yang (ZYY) diff --git a/meetings/2023-09/september-28.md b/meetings/2023-09/september-28.md index 3a398bf5..b1ff7475 100644 --- a/meetings/2023-09/september-28.md +++ b/meetings/2023-09/september-28.md @@ -1284,7 +1284,7 @@ SFC: I think that the what we are kind of after here is, we want a new concept b RPR: DLM agrees with Shane. Ross has a new suggestion. -RKN: There’s traction on the chat, *e* had the advantage of being a constant that’s between 2 and 3, and I was thinking, well, if this stage is primarily for Test262 afterall, consider 2.62. Anyway, I am saying that out loud. +RKG: There’s traction on the chat, *e* had the advantage of being a constant that’s between 2 and 3, and I was thinking, well, if this stage is primarily for Test262 afterall, consider 2.62. Anyway, I am saying that out loud. RPR: All right. We have spent at least half an hour bikeshedding. On to Eemeli. diff --git a/meetings/2024-10/october-08.md b/meetings/2024-10/october-08.md index ecd29ccd..cb219873 100644 --- a/meetings/2024-10/october-08.md +++ b/meetings/2024-10/october-08.md @@ -1279,6 +1279,8 @@ CDA: There is nothing in the queue. BAN: Okay. I guess I would like to ask for that conditional acceptance for this one. I would like to ask for consensus and receive conditional acceptances +### Conclusion + CDA: Consensus for the three normative changes? CDA: Other than the conditional support signalled by my Mozilla friends, any other voices of support for normative changes? @@ -1289,18 +1291,6 @@ CDA: All right. So it’s all riding on Mozilla now. BAN: Thank you very much. 
I would like to say again, even though it’s kind of put me in an awkward position, I am really glad that you all are sticking to the process on this one

-### Speaker's Summary of Key Points
-
-- List
-- of
-- things
-
-### Conclusion
-
-- List
-- of
-- things
-

## Exploring an Idea of a Proposal Management and Technical Arbitration Tool

Presenter: Mikhail Barash (MBH)

diff --git a/meetings/2024-10/october-09.md b/meetings/2024-10/october-09.md
index 6df74768..28fab119 100644
--- a/meetings/2024-10/october-09.md
+++ b/meetings/2024-10/october-09.md
@@ -1341,19 +1341,11 @@ JHD: Thank you, MM.

RPR: + 1 from Chris.

-RPR: Any .… + 1 from Devon. + 1 from Chip. Do you want to speak? Chip has wanted this for years. All right. Just as to check, do we have any objections to Stage 2.7? Congratulations, JHD. You have Stage 2.7.
-
-### Speaker's Summary of Key Points
-
-- List
-- of
-- things
+RPR: Any .… + 1 from Devon. + 1 from Chip. Do you want to speak? Chip has wanted this for years. All right.

### Conclusion

-- List
-- of
-- things
+RPR: Just as to check, do we have any objections to Stage 2.7? Congratulations, JHD. You have Stage 2.7.

## Restricting subclassing of built-ins

diff --git a/meetings/2024-12/december-02.md b/meetings/2024-12/december-02.md
new file mode 100644
index 00000000..f224488c
--- /dev/null
+++ b/meetings/2024-12/december-02.md
@@ -0,0 +1,1004 @@

# 105th TC39 Meeting | 2nd December 2024

-----

**Attendees:**

| Name             | Abbreviation | Organization       |
|------------------|--------------|--------------------|
| Waldemar Horwat  | WH           | Invited Expert     |
| Daniel Ehrenberg | DE           | Bloomberg          |
| Istvan Sebestyen | IS           | Ecma               |
| Jordan Harband   | JHD          | HeroDevs           |
| Dmitry Makhnev   | DJM          | JetBrains          |
| Chris de Almeida | CDA          | IBM                |
| Sergey Rubanov   | SRV          | Invited Expert     |
| Michael Saboff   | MLS          | Apple              |
| Jesse Alama      | JMN          | Igalia             |
| Andreu Botella   | ABO          | Igalia             |
| Jirka Marsik     | JMK          | Oracle             |
| Rob Palmer       | RPR          | Bloomberg          |
| Eemeli Aro       | EAO          | Mozilla            |
| Josh Goldberg    | JKG          | Invited Expert     |
| Aki Rose Braun   | AKI          | Ecma International |
| Ron Buckton      | RBN          | Microsoft          |
| Luca Forstner    | LFR          | Sentry             |
| Mikhail Barash   | MBH          | Univ. Bergen       |
| Ujjwal Sharma    | USA          | Igalia             |
| J. S. Choi       | JSC          | Invited Expert     |
| Linus Groh       | LGH          | Bloomberg          |
| Keith Miller     | KM           | Apple              |
| Richard Gibson   | RGN          | Agoric             |
| James M Snell    | JSL          | Cloudflare         |
| Samina Husain    | SHN          | Ecma International |
| Devin Rousso     | DRO          | Invited Expert     |
| Nicolo Ribaudo   | NRO          | Igalia             |
| Jan Olaf Martin  | JOM          | Google             |
| Daniel Minor     | DLM          | Mozilla            |
| Philip Chimento  | PFC          | Igalia             |

## Opening & Welcome

Presenter: Rob Palmer (RPR)

RPR: Welcome everyone to the 105th TC39 meeting. It’s labelled 106th in the meeting notes. That’s my fault changing the name. I can see the transcription is beginning, which is excellent. Before we start, could I get a couple of volunteers to assist with the note taking, to polish up the notes as we go? I’ll get started with the slides, then. Here we go. So welcome everyone. We are here with our remote meeting today. And so let’s begin. Are these slides working? There we go. So you know who we all are, I’m Rob, one of the three chairs that we have here today. We also have Ujjwal and Chris in the meeting, and we are assisted by the three facilitators. I’m not sure if any are here at the moment. But we have Brian, Justin, and Yulia who help us out with running the meetings.
So if you have any requests or any curiosity, please do reach out to us at any time. We try to keep the meeting on time and give everyone a chance to speak using our TCQ tool, which I’ll get to. Before we begin, hopefully the way that you all got here today was through the meeting entry form. So the Reflector links to this. If you found your way here through any other means, for example, someone sharing the URL directly, please do return to the Reflector and make sure you sign in via the form. This is an Ecma requirement that we take attendance.

We have a code of conduct. This can be found on the main TC39.es site. Please do give it a read and do your best to stick to the spirit of the document with the best faith interpretation, and if you have any concerns or any issues that come up, you can always reach out to us chairs directly, we’re available on Matrix, or if you need to, you can reach out to the code of conduct committee, and these can be kept confidential. We are having a remote meeting this week, which means we have four days, and that’s broken up into a morning or a.m. session and a p.m. session. Of course, that depends on your time zone.

We’re on Mountain Time this week, so that is UTC-7. For communicating during the meeting we are using our regular tools. So primarily that is TCQ. TCQ, I think we were just getting that linked from the Reflector. Do you know, Chris, is this now available on the Reflector?

CDA: Yes, it’s available on the Reflector and I also posted it in the meeting chat. Still being populated, but it’s up.

RPR: Awesome. And so we use this tool to manage both our agenda and discussions. You can see what’s coming up. Let’s go through some of the controls. So you’ll see, if you switch to the view where you see the current item, we have the name of the current item, then within that, there will be a topic when someone has proposed a topic to discuss. And within that will be the current person speaking. When you’re using this tool and if you’re actually speaking, you will see an extra button called “I’m done speaking”. So when you have finished saying your piece and wish to move on with the conversation, please do click this button, or otherwise the chairs will click it when they see it as appropriate. And then on the actual buttons you see there, please prefer to use the buttons on the left, so the blue ones, the “new topic” and “discuss current topic”. Those are preferred. The ones on the right will generally interrupt the conversation or will be increasingly urgent. So you’re allowed to ask clarifying questions at any point. If you really need to stop the discussion urgently, choose “point of order”, such as “I can’t hear anything”, “you’re muted”, that kind of thing. For synchronous realtime chat, we have Matrix, our better version of IRC. It’s a little bit like Slack and Discord. So hopefully you’re all signed up there. Primarily we use the TC39 Delegates room for talking about work and everything that is on topic. If you have things that are off topic, then please keep them in the Temporal Dead Zone. That is the place for any conversations about Pokémon or joking or puns or that kind of thing. We have an IPR policy. So to make sure that everything is clean and so on, everyone here is expected to fit into a particular category. For most people, the standard classification is being a delegate of an Ecma member.
That way you have delegates from the Ecma member organizations, and everyone here who is in that status has, you know, their company has already signed the agreements when they joined. Otherwise, we have the concept of invited experts, which is a formal process by which people can be invited to join. And as part of that you will equally have signed the forms. If you are not in either of those categories, then we expect that you are perhaps an observer, normally notified on the TC39 Reflector in advance; you are welcome to observe. Please do not talk, because that’s the principle if you haven’t yet signed the agreements. We also have transcription running. So I will just read this out so that everyone is fully aware: a detailed transcript of the meeting is being prepared and will eventually be posted on GitHub. You may edit this at any time during the meeting in Google Docs for accuracy, including deleting comments which you do not wish to appear. And you may also request corrections or deletions after the fact by editing the Google Doc in the first two weeks after the TC39 meeting, or subsequently by making a GitHub PR or contacting the chairs. The next meeting after this, the 106th, will be in February next year. Some of us will be going to Seattle, as kindly hosted by F5. We were there roughly two years ago or so. So some of you may remember. I don’t know. Michael, is it in the same place as last time?

MF: Yes, it is.

RPR: Okay. So having attended it previously, it was an awesome place to visit. So please do join us for that. The survey for that, the interest survey, is currently open. We have already seen lots of interest. So you can see who else is planning to go. Let’s return to the opportunity to volunteer as a note taker. We will make this request at the start of each session hoping for volunteers.

RPR: First of all, hopefully everyone has reviewed the previous minutes. Are there any objections to approving the previous minutes? Silence means no objections. They are approved. Next we have our current agenda. Are there any objections against proceeding with the current agenda? None? Okay. We have adopted the agenda. So first up, we have SHN with the secretary’s report.

## Secretary’s Report

Presenter: Samina Husain (SHN)

- (no proposal)
- (no link to slides)

SHN: Also want to thank everybody for all the efforts. It’s been a very busy year. You had, in June, your new edition. There’s been lots of work going on. So all those efforts are very much appreciated. I also want to recognize and thank AKI, who supports me and you in the secretariat, for all the work she has done in the months past. Just want to make those small recognitions.

SHN: Just some of the topics I would like to cover today: I would like to go through some of the new projects we are working on, some conversations I’m having at W3C, and the source map work that I think closes very soon. A lot of work done there. Confirmation of the chairs and editors. And a comment on IETF and a short comment on the invited experts. And then as per usual, there’s always the general overview of the invited experts. I always like to repeat the code of conduct, which I think was mentioned by RPR, and then some documents and some dates.

SHN: So first for recognitions, I want to first bring this up. So CDA has been recognized as a pathfinder for security. I want to congratulate you on this nomination and winning this prestigious recognition.
That’s wonderful, and it’s great because you’re also so involved with Ecma. Very much pleased to announce that to everybody who didn’t know. And secondly, I want to thank all of you for giving me the opportunity to be recognized, and thank you for all of that, because I understand that much of my recognition is a result of a lot of work that you all do in TC39. So you play a big role in this nomination. So the energy and professionalism actually comes from all of you. So thank you for giving me this honour.

SHN: So moving on to a little bit of the new activities. So TC55 has been a conversation that has been going on for some months, as many of you are aware. We had lots of to and fro regarding the scope and whether the work will continue in W3C or move into Ecma. It is moving forward slowly but surely to move the entire WinterCG work into Ecma. The committee that will be formed will be TC55. The scope generated a lot of conversation. Over the last weeks we had a number of meetings to really fine-tune the scope and address a lot of the comments that came from the ExeCom and other members of TCs, and thank you for all the work, LCA and OMT and AKI and others on the call. So forgive me if I’ve forgotten your name or didn’t mention your name. We did a lot of work. The scope looks quite fine. It will be proposed and discussed at the GA coming up in ten days.

SHN: TC56 is another new proposal. It is the first one covering artificial intelligence. It has been proposed by IBM, with other members involved including Purdue University and Microsoft, to just name the first three, and others that will be interested. I wanted to bring it to your attention because perhaps organizations that are involved in AI may find interest and seek to participate. This will be discussed in the GA coming up. We had the initial proposal already at the last ExeCom. It’s good to see new work coming into Ecma.

SHN: I have also mentioned this particular one—I don’t have a TC number for it. It hasn’t yet been officially formalized: the high-level shading language HLSL is proposed by Microsoft, and there is interest from other members. Microsoft just needs a little bit more time to work this through with management. So they will be proposing this probably in the new year in the ExeCom, and if we haven’t had any others it will be TC57, in case those within TC39 find that of interest within your organization. Just to keep you aware.

SHN: I spoke about TC55, and the work we’re doing to move WinterCG into Ecma and to bring it here has also generated a lot of conversation with W3C at the broader scope. At the last TPAC meeting AKI had an opportunity to attend and meet a lot of people. I believe she had given an update in the last plenary in Tokyo. I wasn’t on the call. I wanted to bring this topic up again. It came up in conversation recently with the W3C folks, and I would like to know if Ecma TC39 would like to participate in the horizontal review that takes place in W3C. I’m going to pause there and ask AKI to add a bit more detail to the conversation.

AKI: I mentioned the horizontal review kind of briefly last plenary. We were on a tight schedule so I tried to breeze through as quickly as possible. The way it works is W3C has impressive tooling around GitHub where they track cross-cutting concerns within W3C. The tooling will open issues on both repos for follow-up.
So say the i18n working group has something come up that will be relevant to the privacy interest group, the tooling will open an issue within the appropriate repo for the privacy interest group as well as that for the i18n working group, requesting a review that can then be followed up on, tagged, discussed, and, upon satisfactory conclusion of conversation, closed.

AKI: There is nothing involving formal obligation in terms of horizontal reviews. They are not “we reviewed the thing and therefore you must change it”. It’s an informative move making sure groups know what each other are up to and making sure that nothing is in conflict. I think it sounds like a great idea on its face. It is certainly something I would like to ease our way into if we wanted to pursue it—I don’t think we need to immediately be hooked into a hundred percent of the automation and tooling. I do think being able to both request reviews and have reviews requested of us would be a good way to solidify a relationship with W3C, and make sure anything that we’re doing is beneficial for the web and that nobody is building something that conflicts with what we are up to.

SHN: Okay, thank you AKI. I can certainly field some questions with that. I just have a couple of slides and then we can go through that. Okay, a few other items. We have our GA coming up on December 11th and 12th. The opt-out period for the TC39 TG4 source map first edition will end the day before. First, congratulations to the team, to the subcommittee working on source maps. Great work. I have received the final standard, the first edition. It is uploaded to the GA folder so the GA members can read it. It is also uploaded to the TC39 folder. I believe you have seen the final PDF that was created. There are two minor editorial issues: two letters that need to be lowercase; very minor, before it is published for review. My expectation is that at the GA they will review and approve it. I hope the members at the GA had enough time. They had the first draft already uploaded some time ago. The final edition to be approved has been uploaded for them.

SHN: I had some questions regarding the TC39 and IETF liaison. My question to the committee is, are you aware of your status with IETF? And if so, is there a TC39 representative that is a liaison to IETF? Because it would be good for us to have a short exchange of information, maybe give them a short report of what is going on, just to keep this relationship between IETF and TC39 active. And I will pause for some comments on that after I have finished my couple of slides, if that’s okay.

SHN: For invited experts, at the end of every year around now, I review the invited experts list we have; this is just to confirm that everyone is still active and interested and relevant to the work going forward. I do like to touch base a little with each of the invited experts that are part of the organization, to see if they’re still interested or if there’s a potential membership opportunity. So some of you may see an email from me regarding that. Otherwise, with the TC39 chairs, I just would like to have a short confirmation that our current list of invited experts are still invited experts that are relevant and valid for the work going on in TC39.

SHN: I also want to thank everybody for their nomination. So there have been a number of nominations that have come. Many of you are in TC39, that’s excellent. It’s great to see the activity. The ExeCom nomination seats are only four. We had a lot of nominations, seven in total. We will have a vote.
I do understand that we may want to consider that to be different in the future. For the GA coming up, we will have a vote. So your activity and your interest are very, very much appreciated as we move on to building our ExeCom.

SHN: Something that we also do typically at the end of the year or the start of the year (I think I did at the start of this year): I just want to confirm that the chairs that I listed here, the editors that I listed here, are the individuals that will continue on in 2025. I will list them also on the Ecma documents and Ecma website. If I have made an error or I need a correction, please advise me. This is the list that I have based on what we did for 2024.

SHN: In the annex, I will run through it quickly and then stop for questions. It is the usual invited experts rules and conditions. Our code of conduct rules and regulations. I want to thank everybody for continuing to give the summaries and conclusions. That is a huge help for the minutes, and I appreciate that you take the time to do that. The document list that we have is there for your reference. You may access it through your chairs. And I listed there the titles of the TC39 documents that have been published since your last plenary meeting, and also GA documents that have been listed since the last meeting. So have a look through that. Anything specific you would like, through your chairs, you can access that. I see that the dates are set for the TC39 meetings for next year. I hope I got that right. So that’s great. Thank you so much to the hosts that are going to be hosting it the three times next year. F5, I look forward to being able to attend all of them.

CDA: Sorry to interrupt. If you go back to the slide. To the dates, I think, if I’m not mistaken, the Igalia dates are incorrect by a month. I think you had June on there. It’s in May.

SHN: Yes. I will correct that. Apologies. It should be May. I knew that. I just didn’t know how to count this morning.

SHN: Then of course the dates that are currently set for our General Assembly and ExeCom, and keeping in mind, with the election and potential new members on the management, these dates could adjust a little bit based on everybody’s availability. This is what is tentatively set for now. Those are the venues. I think that is my very last slide. Thank you very much. I’m going to stop sharing and open for any questions.

DE: Minor clarification. For the ExeCom, there are three parts: the officers, that is the vice president, president, and treasurer, as well as eight ordinary member slots, with only three candidates. So all three of those, from IBM and Apple and Google, will be there. I’m very happy about Apple and Google joining this. I think this will be really great for Ecma management. For non-ordinary members there are four slots. We recently expanded this from two, and there are seven candidates. Wanted to apologize because I—you know, I pitched this to a number of people. I’m really happy that people have signed up as candidates. Historically this wasn’t competitive for a long time, and now it is, I think. I would like to consider in the middle of next year, at the following GA, allowing for additional slots for non-ordinary members when all of the ordinary member slots are taken in the ExeCom, something we can discuss in the future. Apologies for this being unexpectedly a vote. And Bloomberg are hosting the Ecma GA in just one week. So if you’re planning on attending, please fill out the Doodle for that.
This is a hybrid meeting. It’s open to all Ecma members, not only ordinary members. I encourage you to attend remotely if you would like to, if you’re the designated representative from your member organization. So please get in touch with Samina or me if you’re interested in attending. Thank you.

SHN: Thanks Dan. All of you who have nominated, I sent you the link. All of you should have received the invitation. And thank you for the update. We will discuss how to better enable engagement from others in the event that the seats are not filled by the ordinary members. Are there any other questions?

CDA: There’s nothing in the queue at the moment.

SHN: Great. Thank you very much. I will update the slide with the correct dates and give it back to you, Rob. Thank you.

## ECMA262 Status Updates

Presenter: Michael Ficarra (MF)

- [slides](https://docs.google.com/presentation/d/1IS6hsFker8TM_mPtK1VQbFCH2TK3LljOxFu6-zMCjkM/edit)

MF: Pretty quick update on 262 editorial stuff. So normative changes: the first one here is a needs-consensus PR that we agreed to at the last meeting. We merged this change to `toSorted` to make it stable, as is already required by `Array.prototype.sort`. This was an oversight in the integration of `toSorted`, as things changed with the Array sort stability specification at the same time. And the rest are Stage 4 proposal integrations: `Promise.try`, iterator helpers, and duplicate named capture groups. There were plenty of editorial changes, but none that need to be called out to plenary. And the list of upcoming and planned editorial work is the same. We should probably review it sometime soon just to make sure this is what our plan is going forward. But for now nothing has changed there. And that’s it.

## Test262 Status Updates

Presenter: Philip Chimento (PFC)

- (no slides presented)

PFC: Test262 has landed support for a bunch of new proposals, thanks to many of the champions, for proposals such as deferred imports, `Promise.try`, and the iterator one whose name escapes me at the moment. So this is what we like to see, champions participating in the writing of tests and in reviewing tests that other people have written; this is really helpful. We are continuing to look for sustainable funding for the maintenance of Test262. We’ll let you know when we have any updates on that, but if you have tips, please let us know. We’re very interested in avenues for keeping the current level of involvement where it is.

RPR: Just checking, are you meant to be showing any slides?

PFC: No, no slides. I believe that’s it.

RPR: All right. Any questions for PFC? No, okay. Thank you PFC. We are making excellent progress through the agenda. Things are moving quicker than normal, which is a hint to the fellow chairs to bring things forward.

## TG3 (Security) Updates

Presenter: Jordan Harband (JHD)

- (no slides)

JHD: So we continue to discuss the security aspects of multiple proposals at various stages. We don’t have anything concrete to talk about this plenary. But we will continue to review and hopefully surface useful feedback.

## TG4 (Source Maps) Updates

Presenter: Nicolò Ribaudo (NRO)

- (no proposal)
- [slides](https://docs.google.com/presentation/d/1uzimn85ojU0TOdiFB1s5VZG_aT7xw8uf646hgnDNQ3w/edit#slide=id.g31b69470253_0_0)

NRO: The very good update is we now have a spec number. Source maps will be ECMA-424 if I get it right, and as mentioned –

AKI: 426.

NRO: It’s 426. Sorry about this.
As SHN mentioned before, you can find the latest PDF draft in the Ecma drive at this path here. We’ve already got a lot of feedback and suggested changes. So thanks to everybody who tried to make the spec better. Special thanks to AKI, who put in a lot of work to properly generate the PDF and include all of the feedback.

NRO: The next few steps are, as SHN mentioned before: next week during the Ecma GA there will be a vote, and hopefully we will finally get approval for our new standard. There are a few steps on our side that we still need to do. One is that we actually now need to rename the URL to the new spec number, which is also wrong in this slide, and we still need to finish a few changes from the PDF to the web snapshot. But the PDF is the reference yearly snapshot.

NRO: There have been a few spec changes. The only relevant one is that we have this warning that we discussed in the last plenary, about the different ways to find the source map comment potentially giving different results. This is included also in the final version that we will publish. We are working towards a solution. We don’t have the exact solution yet.

NRO: There has been some progress with proposals. Scopes, which is our most active proposal, is at Stage 3. We have multiple ongoing experimental implementations, and we’re now, thanks to the implementations, testing how to best encode this data to minimize the source map size. And there has been also some progress on another of our proposals, which was promoted to Stage 2. Debug IDs allows giving files an identifier, because in many cases a URL is not enough: redeploy the application and the file might change, so this proposal gives each file a stable identifier. It’s at Stage 2 right now. Before advancing we likely need to discuss it with somebody that can specify normative APIs. It could be WHATWG, it might be ourselves with the build on top of the respective proposal. But we will have this discussion when the time comes.

## TG5 (Experiments in Programming Language Standardization) Updates

Presenter: Mikhail Barash (MBH)

- (no proposal)
- [slides](https://docs.google.com/presentation/d/1DJUuR4Bnoe3VgV-rc2jWIXqvyMv3krwqi9J9yZbxwDw/edit?usp=sharing)

MBH: Short update on TG5. So we continue to have regular monthly meetings. Some of the recent topics that we had were a tool for previewing how syntactic or API proposals manifest in existing code bases. Essentially it was a project to implement a structural search and replace, and this enables previewing some of the syntactic proposals and most of the API proposals in existing code. And there is also a study being conducted at the University of California, San Diego on MessageFormat. Alongside in-person TC39 meetings we are arranging TG5 workshops, where we have sort of a small update workshop with the local university group that works on programming languages. So currently we are planning the TG5 workshop in Seattle on Friday the 21st of February. This is not yet confirmed. And this is in discussion with the research group on programming languages and software engineering at the University of Washington. And I will come back with more updates on the Reflector when we have a confirmation.

MBH: And I would also like to mention that we have a list of open issues for TG5, and you are welcome to say what you would like TG5 to conduct. That’s it. I’m ready for the queue.

RPR: I will say the most recent TG5 workshop in Tokyo was a lot of fun. And very high quality. So looking forward to the next one in Seattle.
## Updates from the CoC Committee

Presenter: Chris de Almeida (CDA)

- (no proposal)
- (no slides presented)

RPR: CDA, do we have things to say about—from the code of conduct committee?

CDA: A little bit. I mean, pretty quiet for the most part. I think the only thing we got was sort of a report, not really a report, more of an email. Not from anybody within the committee itself. It was just a couple of folks from outside the committee who got into a little bit of a tiff and discussion in one of the GitHub repos. Really wasn’t severe—we have seen much worse in the GitHub repos, but apparently it struck a chord with this individual. But it fizzled out. We reminded folks to be mindful of the code of conduct. Haven’t heard anything since. That’s really the only thing. Other than that, as always, standing invitation to the code of conduct committee: if interested, reach out to us.

## Call for reviewers - ESM Phase Imports

Presenter: Guy Bedford (GB)

- [proposal](https://github.com/tc39/proposal-esm-phase-imports)
- [slides](https://docs.google.com/presentation/d/1qfnmqPkpuAqTv-1pll1Y6EkEHElf_58BtNBQSw9dpq8/edit#slide=id.g305421a9f36_0_11)

GB: So this is just a quick update on the ESM phase import proposal. So one of the things when we got Stage 2 earlier this year was, we did not identify our Stage 2.7 reviewers at that time. So this is just a call out for the fact that we are seeking to confirm those reviewers. I’ve reached out to everyone who I think should have been interested in being a reviewer and have put down those who have confirmed interest. But this is a formal shout out in case we have missed anyone. So if we have missed anyone, we are seeking Stage 2.7 in two days’ time. So ideally you’re able to review by then, but of course if someone would like to review, they can. And, yeah, so now is the time to speak if anyone else would be interested.

RPR: Or if there’s any concerns that this is insufficient review, please do say.

KKL: I volunteer as tribute.

GB: And would you be able to complete your review in time for the Stage 2.7 request on Wednesday, or are you requesting that we delay our 2.7 request to the next meeting in February?

KKL: I will do what I can.

GB: Okay. Thank you. I will add you to the list of reviewers. Much appreciated.

RPR: I think we can conclude we have agreed the reviewers for ESM phase imports.

## Process document fixes and corrections

Presenter: Chris de Almeida (CDA)

- (no proposal)
- presenting [tc39/process-document#46](https://github.com/tc39/process-document/pull/46)

CDA: My intention wasn’t to actually painstakingly go through all of the changes, but just to talk briefly about them and to ask for consensus on making them, with the caveat of providing, I don’t know, an additional week or something for folks to review offline. Basically there are two PRs. One is a correction. So first of all, to be really clear, there is no process change here being proposed. These are just fixes and clarifications in two PRs. The first one is: there were things we forgot to update when we introduced the new Stage 2.7. So this doesn’t change reality. This just makes the process document actually reflect the reality that we already have. So that substantive change is isolated here in this PR, which I think already has a couple of approvals. So this is just clarifying the text about reviewing for Stage 2 and Stage 2.7.
- presenting [tc39/process-document#48](https://github.com/tc39/process-document/pull/48)

CDA: The second PR: as I was making this change here, I was going through the rest of the document and felt like it could use a little bit of clean up as well. So there’s a second PR with a little bit more content change here. Again, no substance has been changed. There are grammatical corrections, fixing of awkward phrasing in places, consistency with capitalization, things of that nature. So: removing of scare quotes, fixing of the Ecma spelling, and there was still a reference to the Ecma CC in here, which is no longer a thing, at least not by that name. So, again, no real significant substantive changes here. Certainly nothing that changes process. But just really cleaning things up. I think we have a couple of reviews on this as well, or some feedback that we received.

CDA: So this is, I suppose, a call for consensus to make these changes, as well as a call for anybody to get more eyes on it. Maybe we could say that if, by the end of this week, or perhaps the end of next week (that might be better, since this week is plenary), there are no objections and we have approvals by then, we merge these changes.

NRO: Your changes look good. Just as a follow-up, the current process document says that when we find the Stage 2 reviewers, we should already know roughly when we’re planning to go for 2.7. In practice what happens is that, well, we don’t know yet, and at some point the champions say to the reviewers, “I plan to go for 2.7 next meeting, please review”. So maybe we should just reword this to better reflect what we actually do.

CDA: To be clear, you’re referring to this line here at 185, “when reviewers are designated, a target meeting for 2.7 should be identified”?

NRO: Yeah.

CDA: Yeah, I think that would be a good idea for a follow-up PR. It does say “should be identified”, not “must have identified”. I agree. If this differs from what we typically do, then I agree that we should update it to match reality as well.

NRO: I can make a pull request and ask you to merge these changes.

CDA: Sure. That’s great feedback, thank you.

RPR: MF is agreeing with you that it’s best done as a follow-up.

CDA: Okay. Concretely requesting consensus to merge these two PRs at the end of next week at the earliest, provided we have approvals and no blocking concerns via the PR review.

DLM: We support that.

RPR: No objections. So I think we have consensus on this, for merge at the end of next week subject to no review comments.

## More Currency Display Choices

Presenter: Eemeli Aro (EAO)

- [proposal](https://github.com/eemeli/proposal-intl-currency-display-choices)

EAO: This is a very small proposal. We had a short discussion in TG2, in fact, about whether this should be a normative PR instead. But we thought, because there’s a little bit of discussion here, that it would be good to have a little bit of space for that, and the staging process is a very fine place for that. So the short entirety of this is that we do currency formatting under `Intl.NumberFormat` by using the `style: 'currency'` option, and furthermore, when formatting currency, we have a `currencyDisplay` option that is effectively an enum value determining how to format the currency symbol. If you use the default `symbol`, you get “$” or “US$” when formatting USD; `narrowSymbol` formats to “$”; `code` gives you an ISO currency code like USD; or then there is the spelled-out `name`. All of these are of course localized names, such as “U.S.
dollars”.

EAO: And specifically here one thing to note is that for the `'symbol'` choice, not the `'narrowSymbol'` but just the `'symbol'`, whether or not you end up with something like a “$” sign just by itself or “US$” depends on both the currency and the locale. In US English, you get “$” for USD and “CA$” for CAD. And similarly, in Canadian English, you get “$” for CAD and “US$” for USD.

EAO: And now, the proposal itself is about extending the scope of things. That’s to solve two different use cases. First of all, there are times, such as when you are formatting values in different currencies and you would like to use a relatively narrow symbol view of the currency, where it would be really useful to be able, even in an en-US context, to say, “I would like to have ‘US$’ for USD,” similarly to what you would get for effectively all the other currencies in the world. With the options right now, there’s no way of getting “US$” in the en-US locale. This is not just an en-US problem. It’s similar with many locales and currencies across the world, where there is a local way of expressing and implicitly understanding that it’s our dollars, and so we don’t need to specify a US or other unit like this.

EAO: Then a separate case is that when we are doing currency formatting, there are aspects of this that need to take into account the currency and, based on the currency, change some parts of the formatting, specifically, most importantly, the number of fractional digits that is displayed. And there, it becomes in some cases interesting to do currency formatting even if you are not actually displaying any currency symbol there at all. And to effect that, it’s really useful to be able to format currency, but not show anything, any currency indicator at all, while doing so. And this is currently not possible, effectively. So these are the two issues that we are looking to try and fix here.

EAO: The proposed solution here is to add the following two currency display option values: `'formalSymbol'`, which always chooses a sort of longer form like “US$”, for instance. In the discussions in TG2 on this one, the specific aspect of the whole proposal on which I think there’s a little bit of further discussion is whether this thing ought to be called `'formalSymbol'` or possibly `'wideSymbol'`. And to introduce a second additional possible `'never'` value to the option, which would not display any currency symbol or name. So the code here effectively shows how these would work, where the first one is showing the `'formalSymbol'` currency display option, and the second one is showing the use of the `'never'` currency display option. The word “never”, by the way, in this context: I picked it because, kind of near this in the same space, we have the option `signDisplay` for whether to display the positive or negative sign, and it has a “never” possible value for that.

EAO: Some of the relevant background here is that ICU already has support for something like “formal” and something like “never”, which is where the “formal” name, as opposed to the “wide” name, comes from.

EAO: That’s pretty much the entirety of the thing. I’ve also put together the very, very small spec change that would be required for all of this in 402, and that’s adding `'formalSymbol'` and `'never'` as appropriate to the few places where the currency display values are iterated, and the very brief description thereof that can be included in the spec.
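[The slide code is not reproduced in the notes; the following is a rough reconstruction of what the two proposed option values would look like in use. Both values are proposed additions, not available in any implementation yet, and the outputs in the comments are only indicative.]

```js
// Sketch of the proposed currencyDisplay values; outputs are indicative only.
const wide = new Intl.NumberFormat('en-US', {
  style: 'currency',
  currency: 'USD',
  currencyDisplay: 'formalSymbol', // proposed value; possibly to be named 'wideSymbol'
});
wide.format(123.45); // would produce "US$123.45" rather than "$123.45"

const bare = new Intl.NumberFormat('en-US', {
  style: 'currency',
  currency: 'USD',
  currencyDisplay: 'never', // proposed value: show no currency indicator at all
});
bare.format(123.45); // "123.45", still using USD's two fraction digits
```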
And based on this, I am asking for—well, if it were acceptable, Stage 2, but I would also be happy with a Stage 1 in order to discuss this, and to effectively bikeshed whether this ought to be called `'wideSymbol'` for the option values. And that’s effectively all I have got on this. If there’s any queue, I am happy to address any issues or questions.

RPR: At the moment, there is no one on the queue. It’s hard to tell, isn’t it? For something coming in Stage 1 or Stage 2.

EAO: If nothing is going to show up on the queue, I would like to ask for Stage 2 for this proposal.

DLM: We discussed internally and this is definitely a proposal that we support. I am not sure, as a fellow Mozillian, whether I should be the only person supporting it for Stage 2, but it definitely has team support for Stage 1.

RPR: Okay. So are you stating personal support for Stage 2?

DLM: I am stating—yeah. I guess I really… I am not sure what I meant by that. But, yes, definitely support for Stage 1, and I will second someone else who says they support it for Stage 2.

JHD: So I apologize if you said this and I missed it: does this mean you believe there is no further design space here? And that’s why it’s ready for Stage 2, because this is basically done?

EAO: Basically, yes. In that we already have this currency display option. It is already controlling how symbol formatting happens. And I am asking to extend—well, not extend it, because we are already doing formal symbol style formatting, we just don’t allow it explicitly in some cases, and `'never'` is kind of an option of no symbol at all. I don’t see any other possible solution for these use cases, other than adding two new currency display option values. Specifically, I think the discussion about whether `'formalSymbol'` or `'wideSymbol'` might be the best name, or whether there is something better than `'never'` as a name for the other one, is possibly something that could be discussed within Stage 2, if the options I am proposing here initially are not to everyone’s satisfaction.

JHD: All right. Yeah. To be clear, I wasn’t suggesting that there is further design work that could be done in Stage 2. More, my sense of the proposal is that there’s nothing further to be designed, and I wanted to confirm that we shared that sense.

EAO: Okay.

JHD: Yeah. I support Stage 2, though I have not fully reviewed the spec.

DE: I support this proposal as well. I also haven’t reviewed the spec. But a small feature like this, that adds on to an existing capability, is exactly the kind of thing that I would hope to come from TG2, that I look forward to, especially given the concrete motivation. I would be okay with proposals like this going by either the stage process or a PR. And I want to emphasize what JHD just said: Stage 2 still permits a lot of further design work. We often go to Stage 2 with significant open questions, though I guess in this case we don’t have any open questions either.

RPR: The queue is now empty. So I think we’ve heard qualified, caveated support for Stage 2 in the sense of JHD, but without reading the spec; and DLM from a personal point of view. So EAO, I think it’s your choice, what you want to ask.

EAO: I think I would like to ask for Stage 2, because I think there is sufficient support for that. If there are concerns that arise, I believe that those concerns would fit in well with the work that this proposal will undergo under Stage 2.

RPR: Okay. DLM has upgraded to unqualified support for Stage 2.
DLM, did you want to say anything more?

DLM: No. I think the open questions here are resolvable in Stage 2. I didn’t want to be the only voice in support for Stage 2, given that Eemeli and I work for the same organization.

RPR: We also now have DE with +1 for Stage 2. So there is definitive support from multiple orgs. All right. Any objections to Stage 2?

RPR: No objections. We have heard support. Congratulations, Eemeli, you have Stage 2!

EAO: Excellent. Thank you. Am I supposed to ask for reviewers for Stage 2.7 at this time?

RPR: Now is the time.

EAO: I would like to ask for reviewers for Stage 2.7 for this very, very small change.

JHD: I am happy to review.

RPR: Thank you, JHD. Any chance we could get one more reviewer for this proposal? Okay. We only got one at the moment.

NRO: I can review. I have only very new experience with Intl, but this seems small enough that I can do it. Nicolo, for the notes.

RPR: Thank you, NRO. Should we also be setting a target meeting for the 2.7? You brought it up. I am not trying to coerce. Coercion is bad. Okay. All right. EAO, would you like to, perhaps, read out a summary for the notes? Or would you like to write a summary?

EAO: I am happy to state that the proposal received support for advancement to Stage 2. I don’t think that there’s more. Was there? I mean, other than—the proposal was presented and it was accepted.

AKI: And there are two committed reviewers for Stage 2.7.

### Speaker's Summary of Key Points

The proposal was presented and it was accepted.

### Conclusion

“More Currency Display Choices” was accepted for Stage 2, with JHD and NRO as committed spec reviewers.

## Upsert (formerly Map.emplace) Update and request for Stage 2 reviewers

Presenter: Dan Minor (DLM)

- [proposal](https://github.com/tc39/proposal-upsert)
- [slides](https://docs.google.com/presentation/d/15sWTvdWIo9Jt12LFRNBPJo1N_8xsMSCB3jy73HBFX-M/)

RPR: So we have DLM with upsert, formerly Map.emplace. An update, and also a request for Stage 2 reviewers.

DLM: This was the original name five years ago. And we have gone back to that. MF pointed out we should not name proposals after solutions, but rather problems. We finally agreed on “upsert”. I should start with the motivation.

DLM: So this is the thing that we were trying to make easier for JavaScript developers. You have a map. And you want to do something different, depending on whether or not the key is present in that map. Proposed solution: this changed slightly when I presented this in October. Two methods. One is a `getOrInsert`. This one searches for the key in the map. If found, it returns the value associated with that key. Otherwise, it inserts a value in the map, the default value in this case, and returns that.

DLM: I also have a `getOrInsertComputed`. This is very similar to the above, except in this case, you are going to call a callback function that returns a default value, which then is inserted. When I presented this in October, it was `getOrInsert`, but there was feedback from the community at that time: if it takes a lot of work to calculate a default value, it would be nice to defer that to a callback function rather than do that up front. Work since the last time I presented this: yeah, as discussed, the name changed back to upsert. Two methods, one using the value directly and the other with a callback. Updated specification text; Michael has done a great job with fixes and suggestions. Students also have prototype versions of the design.
SpiderMonkey and V8. This work at the moment exists in their local repository.

DLM: Two open issues that I was hoping to get feedback on. The first one, this is an issue that dates back to when this proposal first came into committee. This was about locking the map with concurrent access. That’s no longer what we are discussing. But there remains a problem with the callback version, where a person modifies the map in the callback, rather than using the callback to return a default value. MF has helpfully put together two pull requests with two proposed solutions to the problem. One checks to see if the map has been changed by the callback function, so checking for the existence of the key that previously was not there and now does exist, and throws an error in this case. The other proposed solution would be to check for the existence of that key after the callback and return that value. So the problem that we are trying to prevent is people mistakenly using the API to insert values during the callback function, rather than returning the value to be inserted. As I state that, there is a third way to use the API: to use that callback to insert values into the map. But basically, the API design is that you should return a default value, so it’s a user mistake or developer mistake if they use that callback to insert the value, and we should probably—at least in my opinion, I lean slightly towards throwing, because this is a mistake using the API. The other option would be to accept the developer’s intention and insert during the callback.

KG: Just a clarification on the second option here. The non-throwing option here. There are two possible values that you could end up with in the map and returned. There is whatever happened during the callback. And then there’s whatever the callback returned.

DLM: Yes.

KG: I thought that the proposed solution, and certainly my preference for the behavior, is to use the value from the callback. Like, that the callback returns. Not what happened during the callback. Because the return value is sort of the second thing. There’s a mutation and then there’s the value that is returned. And I would not be excited about using whatever mutation happened during the callback. But I am fine with the approach of using the returned value from the callback. That said, I am also personally leaning towards throw. So maybe this isn’t even relevant.

DLM: Yeah. I possibly misread the PR that MF put together. But I am sure the second option was to return the value from the mutation during a callback and not the return value from the callback. And I can probably quickly bring it up. Did you have your hand up?

KM: Doesn’t this situation basically kind of—I thought the main reason for `getOrInsert`—maybe I am misremembering—was like it was more performant than looking up twice. Doesn’t it defeat the whole optimization? You have to look up where the key goes anyway?

DLM: Yes. Mm-hmm. So in this case, this is the computed version. So calling the callback. We assume people only use this when there’s a lot of work to be done in that callback function anyway. The usual optimization from not having to re-look up the value wouldn’t apply in this case. There’s the API where it takes the default. In that case, we wouldn’t have to re-look up anything.

KM: I see. I guess I am just worried that there will be confusion between the two in that sense.
People will expect not to have to look up and would be surprised there’s a huge difference, even if you inline the computed thing. But maybe there wouldn’t be. I don’t know. Okay. All right. That’s fine.

KG: Sorry. KM, when talking about the performance costs, do you mean the performance cost of having the spec check whether someone inserted the value again, or the performance cost of someone doing the insert in the callback?

KM: The—I mean, just semantically, like the fact that your hash table could change underneath during the callback would require you to do another hash table lookup for the element. Whether that’s in like—I think it’s more like no matter what you do, if you allow any mutation to the thing, you have to do a hash. You can’t assume your state is the same. On the other hand, if we throw, on the other side, now every map operation has to check “am I under a callback?”, which is also not great, because all of the other normal existing operations get slower, they have to do a check.

KG: Yes, especially if you try to polyfill it.

KM: Right. Yeah. Either way, it’s roughly the same, because of the code we are going to generate, I think. But there’s—it’s definitely going to hurt the perf.

KM: I guess in terms of perf, these are long, expensive things, just a final comment: the second one is probably better, because the cost is localized to `getOrInsertComputed` rather than every operation. If you have to throw, it means every operation needs to have some, like, check for being under a `getOrInsertComputed`, whereas if you just have to rehash when you return, then the cost is only borne by `getOrInsertComputed` and not every other operation.

DLM: I am not quite following. Because I thought we only throw inside `getOrInsertComputed`. We weren’t talking about throwing from the actual set –

KM: How do you know the map is mutated?

DLM: We would check for the existence of the key. That’s the only case we are going to throw, where the key—in both of these options, the idea was to check for the existence of the key after the callback completes. And taking that existence of the key as evidence that someone has mutated the map and we need to do something.

KM: I think in that case, I am indifferent to the choice. Yeah. Sorry. I thought you were throwing on the underlying sets inside the callback.

DLM: No. That is what was originally proposed, doing a double locking. But that’s not the solution that we have come up with since that issue was originally filed.

SYG: I think this has been clarified by what MF said in the queue item and what DLM said. So in the non-throwing case, the semantics of the non-throwing alternative: initially you check for the existence of the key; if it’s not existent, you run the callback. And after you run the callback and get the value, you then check again for the existence of the key, and even if it still exists, you then set the key with the new computed value returned by the callback. Is that correct? And then return that computed value.

DLM: Yes. That’s my understanding, and MF commented that that is correct.

SYG: Okay. Cool. Then I think it is also my preference, along similar lines as what Keith was saying: I want basically most features to be pay-as-you-go, and the non-throwing thing is clearly pay-as-you-go for use of this particular method.
And because the way you decide whether the map was mutated is by checking the existence of the key, you could certainly, you know, delete stuff and then re-add the key. That would result in a pretty different hash map at the end, even though from that method's point of view it was not “mutated”, and I find that misleading. Unless we build an actual mutation check, which would have the non-pay-as-you-go problem, as Keith pointed out, I would not build a particular notion of mutation that is different from a normal understanding of what it means for a map to be mutated. + +DLM: Okay. So that’s support for rechecking in that case. Right? + +SYG: Yes. + +MF: ACE is unable to be here, but asked me to relay Bloomberg’s opinion, that they prefer the non-throwing PR because the throwing version catches only one specific case of mutation. + +DLM: I am convinced; I think we should go with the non-throwing version. Can I move on to the other issue that I wanted feedback on? + +RBN: We don’t throw during iteration, so I don’t think it makes sense to throw here. I also am not certain we need such complicated locking behavior as was initially proposed, which has been kind of put aside; I don’t think that locking behavior is necessary for something like Map, because we don’t employ it elsewhere, in the Array types, for any mutation of that sort. I also wonder—so the second option was returning the existing key’s value, regardless of what happens in the callback. Is that the case? + +DLM: I think it’s actually—we return—yeah. I wish I had the PR open. Sorry about that. + +KG: The behavior in the PR is to use the value from the callback, the returned value from the callback. It clobbers any mutations that happened to have happened during execution of the callback. + +RBN: Then, yeah. That is the behavior I think I would personally prefer. + +DLM: Okay. + +RBN: I agree with that behavior. + +DLM: Thank you, Kevin. + +DLM: In that case, great. It sounds like we have agreement there. + +DLM: The other issue I wanted to open, and we talked about this last meeting as well, is the name. There are a few comments in issue #60. I am still open to other suggestions for names; I don’t think we have come to anything much better than this. Also, some of the suggestions from last time around got lost in the notes and weren’t captured, so those are very welcome again. + +RPR: SYG is asking a question. + +SYG: I am confused. You started this presentation by saying this has been renamed to upsert. Is that the proposal name? + +DLM: The proposal has been renamed from “map in place” to “upsert”. So “upsert” is the proposal name, not a method name we are planning to use. + +SYG: I see. Okay. + +DLM: Okay. I will not waste time on the bikeshedding thing; I will move to my next slide. Two open questions. Thanks for the comments; issues #40 and #60 we can resolve. The other thing was: any volunteers for Stage 2 reviewers? + +JMN: I am happy to do this. It’s JMN, from Igalia. + +DLM: Thank you, JMN. + +RPR: I feel like your photo is calling out for MF to be a reviewer. + +DLM: That was completely unintentional on my part. + +DLM: I am going to use that photo in every presentation. + +RPR: MF has volunteered. + +DLM: Okay. Great. Thank you. + +DLM: I think I need two people, so that’s perfect. Thank you. If anyone else is interested… please let me know.
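+
+A minimal sketch of the proposed API and the agreed non-throwing semantics (illustrative only; `expensiveDefault` is a hypothetical placeholder for real work):
+
+```js
+const cache = new Map();
+
+// getOrInsert: the default value is supplied eagerly, with a single lookup.
+cache.getOrInsert("a", 1); // "a" was absent: inserts 1 and returns it
+cache.getOrInsert("a", 2); // "a" is present: returns the existing 1
+
+// getOrInsertComputed: the callback only runs on a miss.
+const expensiveDefault = (key) => key.toUpperCase(); // stand-in for real work
+cache.getOrInsertComputed("b", expensiveDefault); // "B"
+
+// The case discussed above: the callback itself mutates the map. Under the
+// agreed semantics, the callback's return value wins and clobbers the mutation.
+cache.getOrInsertComputed("c", () => {
+  cache.set("c", "set during callback");
+  return "returned from callback";
+});
+cache.get("c"); // "returned from callback"
+```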
+ +### Speaker's Summary of Key Points + +- Presented an update on work that has occurred since the October 2024 plenary, including the rename to proposal-upsert and support for both `getOrInsert` and `getOrInsertComputed`. +- Asked for feedback on handling of modification of the map during the `getOrInsertComputed` callback, and on method names. +- Asked for Stage 2 reviewers. + +### Conclusion + +- Committee was in favour of the non-throwing solution to issue #40 (https://github.com/tc39/proposal-upsert/pull/71) +- No further feedback on naming of methods; we’ll resolve this in the issue itself. (https://github.com/tc39/proposal-upsert/issues/60) +- JMN and MF volunteered as Stage 2 reviewers + +## `Intl.DurationFormat` for Stage 4 + +Presenter: Ujjwal Sharma (USA) + +- [proposal](https://github.com/tc39/ecma402/pull/943) +- [slides](https://docs.google.com/presentation/d/1bAuZ0ZSSYUdJxiDYXz2tUWHZwaOmYkNoLpQBBy_qz1w/edit?usp=sharing) + +USA: Hi, everyone. Before I start with the actual presentation, thanks to BAN for doing basically everything. He couldn’t be around, so I am going to be presenting this instead. But as one of the champions of the proposal, I can say that the recent amount of work that has gone in has been amazing, and it looks like we are finally at the finish line. So let’s see. + +USA: A quick overview of DurationFormat for the uninitiated. It is a formatter, in the same class of low-level built-in formatters as the other existing Intl formatters: they are specialized, they take one certain kind of input, and they format it according to the locale provided to them and other cultural hints like calendars and so on. A duration in this case is defined as any time duration. It could be expressed in multiple units, a composite duration in that sense, or it could be expressed in a single unit. As you can see, different locales format them differently. This might not be the best example, since the results look very similar, but from prior experience you might know that different locales handle certain details of durations differently. So this is one of the driving use cases of this proposal. + +USA: One thing to note is that one of the most important ways to customize the result of this formatting, or generally to change how it looks, is through width. Width essentially implies the amount of space, in this case screen space, that you want to dedicate to a duration. As you can see, in en-US, in the long style, you would have something very fleshed out, like “one year, three days, and 30 minutes”. In narrow, that becomes much shorter, something like “1y, 3d”; the unit names are replaced with letters that signify which unit it is. And then there’s the digital style. The digital style is interesting: it’s not well-defined for every single unit, but it has a very special case for hours, minutes and seconds, and imitates a digital clock. One important thing here is that, while it’s possible to use a single consistent width for the entire duration, there are viable use cases that require you to mix and match different widths for different units in order to get the point across. + +USA: So to summarize, this proposal allows for duration formatting based on locales, with the flexibility of using different formatting for different units: you can basically have one width per unit. One use case for this is Skyscanner.
So as you can see, and can probably relate, all websites that deal with air travel are full of durations; there are a handful of them all over the place. And anything from a simple timer in an application, maybe a to-do-list application, to something like the duration of a trip can be a duration. So, yeah. Here is how it looks on Skyscanner. + +USA: One thing to note: this is already using a different width, or style, however you like to call it, for different units. In this case, seconds, for instance—well, I don’t know if there is seconds data for this stuff, but it is never displayed, because you don’t want to display that. Minutes, on the other hand, are displayed numerically, meaning without any unit. This is mostly because it’s implied that the lowest unit in this duration would be minutes. And for hours, it's narrow, so it’s using “2h”, because that’s the shortest way to signify an hour. + +USA: These are a few usage examples. I won’t go into detail. As you can see, there are many different ways to use the API; we have been over this many times, but feel free to ask any questions about it. And here we go: different styles, mixing different locales, and so on. As you can see, you can provide an alternative numbering system and that would just work. + +USA: Going over the history of stage advancement: the proposal advanced to Stage 1 in February 2020 and to Stage 2 in June 2020, relatively quickly. In October ‘21, we got to Stage 3; that was before there was a Stage 2.7. Stage 3 has, as you might have noticed, lasted significantly longer, because of a lot of implementer feedback, and because we were doing things in a slightly different order this time around: developing our API and then going back to, you know, making sure that it works in different tools. + +USA: Plans for V2. There are a few, but to name the popular ones: maybe a format range, so you could format a range of durations. This could be useful for cases where you don’t need an exact duration; for example, in cooking—maybe not baking, but cooking for sure—a recipe can have a range as a duration. Also fractional components of hours and minutes, so that you could do things such as 1.5 hours or 0.1 minutes. But these should be done in a way that we can control well and ergonomically. + +USA: The most significant part is the Stage 4 requirements. As you just saw, the proposal has been at Stage 3 for a while. In this time, we have not only polished the proposal significantly, but we have shipped Test262 tests, and we have two compatible implementations that pass these tests. We have a lot of experience from the implementers, and all of their feedback has been addressed. I would like to really thank all the implementers, everyone involved in the implementation of DurationFormat, namely YSZ, ABL and FYT; all the feedback has been really important for the development of this proposal. We also have MDN documentation and a pull request made against ECMA-402, which was approved by TG2. So as you can see, the last step that we have to go through is committee approval. I would like to formally request stage advancement for DurationFormat. + +RPR: You have support from DLM. And incoming support from PFC. + +PFC: Yeah. I mean, I am also from the same organization, but with my Temporal hat on, I am very excited about this becoming a way to format Temporal objects, which we will incorporate after the proposal reaches Stage 4.
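+
+For reference, a small sketch of the API as presented (output strings are approximate and depend on the implementation's locale data):
+
+```js
+const duration = { years: 1, days: 3, minutes: 30 };
+
+new Intl.DurationFormat("en-US", { style: "long" }).format(duration);
+// e.g. "1 year, 3 days, 30 minutes"
+
+new Intl.DurationFormat("en-US", { style: "narrow" }).format(duration);
+// e.g. "1y 3d 30m"
+
+// Mixing widths per unit, as in the flight-search example above:
+new Intl.DurationFormat("en-US", { hours: "narrow", minutes: "numeric" })
+  .format({ hours: 2, minutes: 45 });
+// e.g. "2h 45"
+```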
+ +USA: That was indeed an important use case when this started. And, yeah, I am glad that both proposals have matured well. Looking forward to that. Thank you, PFC. + +USA: Thanks, everyone, for Stage 4. + +### Speaker's Summary of Key Points + +- USA went over some details about the purpose and history of the proposal. +- Stage 4 was requested and there were no objections to stage advancement. + +### Conclusion + +- DurationFormat reached Stage 4 with supporting comments from DLM and PFC. + +## `Error.isError` to stage 3 + +Presenter: Jordan Harband (JHD) + +- [proposal](https://github.com/tc39/proposal-is-error/) +- [slides](https://github.com/tc39/proposal-is-error/issues/7) + +JHD: Error.isError. It was not too long ago that we advanced it to Stage 2.7. We have Test262 tests written and merged, and it would be wonderful to see this proposal advance to Stage 3, at which point the HTML integration PR, which has already been directionally approved, would be able to merge as well, unblocking the further advancement of this proposal. So I would like to request Stage 3. + +DLM: We support this, and we actually have an implementation ready to go once it reaches Stage 3. + +JHD: Love it. Thank you. + +NRO: This is just because I didn’t see any update; I wonder if Mozilla has anything. I opened an issue about `InternalError`, which is an error that Firefox throws in some cases. Given DLM’s comment about an implementation, I wonder if `InternalError` has been properly handled. + +JHD: It would be good to get Mozilla confirmation on that. + +DLM: I am not sure; the implementation was done by an open source contributor. + +JHD: For what it's worth, `InternalError` is already currently indistinguishable from a true subclass of Error. So depending on how that was implemented, it might work by default, but I assume the change made for DOMExceptions could also be made for `InternalError`. NRO, I will keep an eye on that as it gets published in any channel of Firefox, until it’s implemented. + +RPR: So, yeah. We had one note of support. No objections. Last call: any objections to Stage 3? No objections. So congratulations! You have Stage 3. + +JHD: Thank you.
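+
+A quick sketch of the predicate's behavior (illustrative):
+
+```js
+Error.isError(new TypeError("boom"));          // true
+Error.isError(new (class extends Error {})()); // true: real subclasses pass
+Error.isError({ message: "duck-typed" });      // false: it is a brand check
+Error.isError(Object.create(Error.prototype)); // false: prototype alone is not enough
+// Errors from another realm (e.g. an iframe) also return true, the case that
+// `instanceof Error` gets wrong; per the discussion above, Firefox's
+// InternalError should pass as well.
+```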
+ +### Speaker's Summary of Key Points + +- test262 tests merged +- Firefox’s `InternalError` should pass this predicate, and the champion will monitor implementation status + +### Conclusion + +- Consensus for stage 3 + +## Iterator helpers close receiver on argument validation failure + +Presenter: Kevin Gibbons (KG) + +- [proposal](https://github.com/tc39/ecma262/pull/3467) + +KG: Hello, all. This is a follow-up to iterator helpers, which just landed in the spec. It is a normative change to something that is already shipping, but I strongly suspect it’s web-compatible, especially given how new iterator helpers are, and I would like to make the change if we can. It was an oversight in specifying it. + +KG: So the background here is that iterators are closeable. They have a `.return` method. All generators have this, and user-defined iterators may or may not have it. For generators, it triggers the `finally` block if you are yielding within a try-finally. + +KG: And because this can do important cleanup work, the general rule is that once you get an iterator, it’s your responsibility to close it, unless it throws an error or violates the iterator protocol, or any of these other things. If it yields a value you weren’t expecting, or you got some other value that you didn’t know how to handle from somewhere else, then you need to close that iterator. Generally, we are disciplined about that, but we failed to do that specifically in the case of argument validation for the iterator helper methods. They do not close their receiver, the `this` value, although they do in other cases. I have here the specification for `Iterator.prototype.filter`, and you can see down here that if calling the predicate throws, we close the underlying iterator. But we don’t close the underlying iterator if the predicate is not callable, and I am pretty sure this is just a mistake. There are a few different places where we do this kind of argument validation: `filter` requires a callable predicate; `map` requires that also; `take` and `drop` require a number argument that is not NaN. + +KG: So what this pull request is doing is going through each of the places where one of the iterator helpers takes an argument which gets validated, and if the argument fails validation, closing the underlying iterator. So we maintain the contract that once you have been handed an iterator, you are responsible for closing it, where "you" is the prototype methods on the iterator helpers. + +KG: So this is just a "needs consensus" PR, because it’s a small tweak to the existing spec. I haven’t written tests because this was a last-minute thing, but I will do so as soon as this is approved, if we approve it conditional on tests. Yeah. + +MF: I strongly support this. This was totally just an oversight. We didn’t think of the `this` value as a parameter here, but like a regular parameter, we should handle closing it, because it is passed in. + +RPR: DLM supports this. + +KG: Okay. Well, hearing no objection, and having two notes of explicit support, I will take that as consensus. I won’t merge this until I get tests up, but take it as having consensus.
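+
+A sketch of the observable difference (illustrative):
+
+```js
+let closed = false;
+const iter = Iterator.from({
+  next() { return { done: false, value: 1 }; },
+  return() { closed = true; return { done: true, value: undefined }; },
+});
+
+try {
+  iter.filter(42); // 42 is not callable, so this throws a TypeError
+} catch (e) {}
+
+closed; // false per the previous spec text; true once this change lands
+```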
+ +### Speaker's Summary of Key Points + +- An oversight in iterator helpers meant that we did not close the receiver when an argument failed validation. This PR will correct that. It's almost certainly web-compatible given how new iterator helpers are. + +### Conclusion + +Approved. + +## AsyncContext request for Stage 2.7 reviewers + +Presenter: Andreu Botella (ABO) + +- [proposal](https://github.com/tc39/proposal-async-context) +- [slides](https://docs.google.com/presentation/d/14DxgoHhTL7tzJpcu94y70USeXT9jlkF2k6lJDI720Kc/) + +ABO: Yeah. So we have just two points of update on the web integration that we shared in Tokyo. After hearing feedback from multiple parties about how the proposal that we had didn’t really fit many use cases, we changed the context of events, that is, in which context event listeners run. The callbacks now run in the context that triggered the event, the dispatch context. If there is no dispatch context, such as a user click, or in Node.js something like process signals, then it falls back to the root context. This is usually the empty context, where every `AsyncContext.Variable` is mapped to its default. We want to make this fallback configurable, and that also covers the other use case that in the initial proposal was covered by the registration context. + +ABO: So we propose having this web API, `EventTarget.captureFallbackContext`. The name, and it being part of `EventTarget`, are still up for bikeshedding. This creates a scope, and for anything inside that scope, if there is an event with no dispatch context, such as a user click, it will use the context that was active when `captureFallbackContext` was called. This is useful for things like code regions that you want to keep isolated, where an event that has no dispatch context would otherwise lose the context for anything that spawns from that callback. + +ABO: We have a PR, and the next steps are: we will continue and finish the discussion with the HTML editors about implicit context propagation, and we will finish the PRs for the web specs. The next time we present, we’re expecting to ask for Stage 2.7, so at this time, we’re asking for reviewers. + +RPR: Any volunteers to review `AsyncContext`? + +JSL: I can. + +RPR: Thank you, JSL. + +RPR: Can we get one more reviewer, please? + +???: MM is not here right now, but he requests to volunteer as a reviewer. + +RPR: Request granted. Yes. Thank you. + +### Conclusion + +- JSL & MM will review `AsyncContext` + +## The importance of supporting materials + +Presenter: Dan Minor (DLM) + +- [slides](https://docs.google.com/presentation/d/1teo8pAE4lbFTIlPZxum2MBcNZfGdUM2Y8huEiVdvQiQ/) + +DLM: I just wanted to talk briefly about the importance of supporting materials. A gentle reminder: supporting materials are already part of our process. We expect proposals seeking advancement to Stage 2, 2.7, 3 or 4 to be on the agenda ahead of the deadline, along with their supporting materials, and delegates can withhold consensus solely on the basis of missing that deadline. I want to talk about why this is important; I'm not trying to be pedantic or bureaucratic. The SpiderMonkey team does its best to review every proposal as fairly as we can and provide actionable feedback. As implementers, we can’t look only at what is interesting to us; we have to look at everything, and that requires a lot of work on our part. And it is not just us; I’m aware that other groups do this too. Why supporting materials? Without them, we’re ultimately left guessing what is actually going to be presented, which means we can’t get the right feedback in advance of plenary. I’m not an expert in every area of JavaScript; I have to reach out to other people on the SpiderMonkey team depending on the proposal, and often we have to reach out to the DOM team or others as well. And people are busy; we need time to gather the feedback that we’re looking for. When I have mentioned this in the past, I have gotten feedback like: why don’t you just attend the individual meetings for proposals to keep on top of them, or reach out to the champions ahead of plenary to ask clarifying questions? These are things we do, but there is not enough time to do this for every proposal out there. What is helpful to us when reviewing proposals: definitely a clearly written motivation; a clear and concise solution, for the stages where that is applicable; supporting use cases, example code, and prior art. These are often helpful to us as implementers; sometimes things that are obviously improvements to people writing JavaScript every day aren’t as obvious to us, and any links to issues, PRs, and discussions help inform our opinion. The other thing is posting the slides as early as possible. That leads back to us having the time to reach out and get the appropriate people involved. + +DLM: So, briefly, about the future. I’m not asking for any process changes now; I just wanted to raise some possibilities.
But one thing that I think would be quite helpful for us, and for other groups doing these types of proposal reviews, would be to require supporting materials to be available prior to adding the item to the agenda. I don’t think they need to be finalized or anything like that; anything available to give us an idea what the topic will be would be much appreciated. I would like to point out this isn’t actually asking people to do any extra work; it’s the same amount of work, we’re just moving around when it has to be done. I don’t want to say people procrastinate (I do have that tendency myself), but I think this might help people cut down on that. It would help us by giving more time for review, and I think that would lead to better discussions and feedback during plenary. The other item, and I think this is something that Yulia may have brought up at the last plenary, is that if we moved the deadline a little bit further back, that would also be helpful. The existing ten-day deadline basically means that there are only five working days between the deadline and the beginning of plenary. Again, this would just mean doing work a little bit earlier; I'm not asking anyone to do any extra work. And that is it for my presentation. Not sure if anyone has any comments they would like to make. JHD is on the queue. Go ahead. + +JHD: The spirit of what you’re hoping we eventually get is great. But when we originally talked about this many years ago, one of the things that was brought up (or maybe I’m imagining it, but it still applies) is that if we require materials in advance, then any additional supporting materials that come up within that 10 or 14 days are things you can’t add to your presentation, because then that part of the supporting materials wasn’t there by the deadline. That happens often with late-breaking realizations before plenary. Additionally, many proposals, like one earlier today, don’t require supporting materials and have nothing to provide. If I feel inspired, I should be able to make slides a day or two in advance; I'm not required to have them. And such a requirement would ensure that if I do procrastinate, I just wing it with no materials. So I’m not sure it would actually—I think there would be some potential perverse incentives, and it wouldn’t necessarily achieve what you’re hoping for. I agree with the spirit: more materials earlier, so everyone has time to provide feedback. + +DLM: I appreciate your comments. In terms of not allowing changes to supporting materials, that wasn’t the intention of what I was saying; I was just hoping to see even some initial slides early on, which I think would be quite helpful. And the intention is not to be bureaucratic or pedantic about this. We have some topics that are urgent, and I wouldn't expect this for those; but for topics that aren’t urgent, requiring people to have some form of supporting materials in advance would definitely be helpful. + +CDA: I took myself off the queue; I think you answered it. I was just saying that I thought you had mentioned it’s okay if the thing isn’t complete, and I think you just clarified that as well. And also, like JHD, I don’t really view this shift as being any different from the status quo today. Today, especially for the advanced stages, supporting materials are required, but there’s also nothing that says that if you changed a slide at the last minute, or added a slide at the last minute, that is somehow unacceptable for any reason.
That’s not my understanding of the current process. + +JHD: I mean, I think if it’s not a meaningful change, then it wouldn’t achieve what DLM is hoping for. I think if it’s achieving anything, it is definitely making a shift in some way. And so I’m just suggesting that we need to be careful about the unintentional consequences of various sets of requirements. + +CDA: I think there are only a few minutes left in the topic. I would like to hear from NRO and SYG if possible. + +NRO: What if I publish slides and then have new things to add to them? Something I started doing a few meetings ago is to mark additions in my slides, with something saying “this was added a couple of days ago”. I would appreciate everybody doing something like that. It helps with understanding: did I forget about this slide when I reviewed the deck, or is this actually something new? It’s fine to have late changes; it would just be great to mark them somehow. + +DLM: I agree with that. + +NRO: The last meeting, in Tokyo, was particularly bad for this. We tried establishing an internal deadline, I think one extra week before the official deadline, by which we must at least share the slides internally with other Igalians. We still didn’t do it perfectly, but I would recommend other companies do something similar. + +DLM: Thank you. + +SYG: I agree with the general spirit of this, for sure. I support adding as many supporting materials as you can, as early as possible, for the same reasons that DLM said: we need to review everything. At the very least, I don’t want people to over-index on “I have to make a full slide deck, and I don’t want to do that”; at least give us an idea of what changed since last time and why you’re bringing this back. If I don’t know why you’re bringing something back, and I don’t know why something is proposed and put on the agenda, I am not predisposed towards it. Just as a matter-of-fact thing, the more material there is as early as possible, the better your chances are. Even if it turns out that there really isn’t much, you can add a quick note saying there isn’t much, or a quick list of bullet points, or “there is only one material question I really want to get to”, or whatever. Some sort of hint, please. + +DLM: I agree; that would be quite helpful for us as well. And I guess to quickly summarize: I have withheld consensus before because I hadn’t had time to reach out to the appropriate experts at Mozilla to say whether something was okay or not. So there is a downside of potentially wasting committee time, because something has to be brought back again when there wasn’t enough advance notice to get the right feedback on it. + +CDA: Thanks, Daniel. We are at time. I know you only scheduled ten minutes for this; I don’t know if it’s worthwhile to do a continuation later on, if we thought it would be useful to talk about the 14 days specifically or anything like that. + +DLM: I think I would leave that for another plenary. I wanted to give a brief presentation, and I can put a brief summary in the notes. Thank you for your time. + +### Conclusion + +- Not asking for any process changes at this time; just trying to highlight the importance of supporting materials for the people who are evaluating proposals, in particular implementers, who spend a lot of time on this. + +## re-using IteratorResult objects in iterator helpers + +Presenter: Michael Ficarra (MF) + +- [PR](https://github.com/tc39/ecma262/pull/3489) +- [slides](https://docs.google.com/presentation/d/1HQzC15dFnQClnUWYHSFx95aMuiJjHAjE186flPW7iZE) + +MF: This is needs-consensus PR #3489 on the ecma262 repo.
The goal of which is to reduce the number of temporary objects we create. I want to give some examples of where this would apply in the iterator helpers we have today. Look at the present behavior of `Iterator.prototype.take`: we have an iterator called nums here that yields 0, 1, 2, 3, 4, and each of the squares is an IteratorResult object, the object with the done and the value properties. If we do `nums.take(3)`, you can see we yield new IteratorResult objects, where we copy the value over and create new objects to do that. If this pull request were merged, instead of creating new IteratorResult objects in the iterator helper and copying the value over, we would reuse the whole IteratorResult object itself. So nums and `nums.take(3)` would each yield the same IteratorResult objects. You can see another example here in `.drop(...)`: today, if we call `nums.drop(2)`, it yields these four IteratorResult objects, three of which copy the value over. Instead, we could yield these four IteratorResult objects, and three of them can be completely reused, so we don’t create extra objects that provide no value. And lastly, `Iterator.prototype.filter`, which is doing something similar. You can see that today it copies values over into new IteratorResult objects; even though the reused results would not be sequential, we could still reuse the IteratorResult objects as we iterate the result of filtering. + +MF: I do want to talk a little bit more about filter. Filter is a bit different than take and drop: filter does have to observe the value. The value here is observed by the predicate passed to filter, which means that if you have getters on your IteratorResult objects (which is a weird thing to do, but if you have those) you may get some kind of weird behavior. You could yield values for which the predicate returned false, by having the value getter return different values for the predicate versus when you are actually consuming the resulting iterator. And similarly, you could have it yield values that were not passed through the predicate. So because of getters, both on value and done, filter can be a bit strange here, because it observes the value. Take and drop do not share that problem: they don’t observe the value, and if you have a getter on done, it just kind of changes the behavior, but it doesn’t lead to something unreasonable like with filter, where you wouldn’t expect any of the values coming out of the filtered iterator to have not passed the predicate. So I can understand not wanting to do this optimization for filter for that reason, if we care about that use case. + +MF: As a little bit of context (I maybe should have led with the context): `yield*` already reuses IteratorResult objects in this way. It is actually inconsistently implemented in engines. The spec says that `yield*` should reuse IteratorResult objects rather than reconstructing IteratorResults with the value given, but JavaScriptCore and LibJS don’t comply with the spec; they create new objects. So this change would be matching `yield*` in that way. + +MF: Other context: how I originally discovered this issue is that ABL opened a pull request for test262 for `Iterator.concat` that asserts this behavior. And `Iterator.concat` is another place where we could possibly reuse IteratorResult objects, if we chose this optimization across the iterator helpers.
Whatever we choose to do with these, whether we choose to reuse IteratorResult objects or not, we should follow that precedent with `Iterator.concat`. This is a decision that needs to be made, whichever way it goes, before we can move `Iterator.concat` forward. This is an area not tested for the previous iterator helpers, but the `Iterator.concat` tests are thorough, asserting on the identity (or rather lack of identity) of the IteratorResult objects. So I will be asking for Stage 3 for that proposal later in this meeting, and I will need to resolve this open issue before then. I have an open PR on that proposal to align it in either direction, as needed. That’s all I have for slides. Happy to answer questions and have a discussion on this. + +MF: As far as my own personal preference on which way we go, I don’t really have a very strong one. I would generally prefer to reuse objects if implementations find that it would be helpful, and I think generally we can assume that it would be. But if we get negative signals from implementations, I’m fine going the other way. I just want the question to have been addressed within plenary, so that we have set a precedent going forward. That’s it. + +RGN: This is not the strongest position, but Agoric are opposed to reusing the IteratorResult objects, because of the weird behavior that you alluded to. If I could just briefly run down the list of things that we considered: number 1 would be that even the `take` and `drop` helpers have to look at `done`, so despite not inspecting `value`, all the weirdness is still possible. Number 2, it results in the inability to accurately shim this behavior using generators, because generators aren’t going to reuse the IteratorResult objects. And number 3, back on the weirdness, any extra properties of the result object beyond just `done` and `value` would sometimes be visible and sometimes not, depending on which helper was used. So, all things considered, it would be most convenient for our use cases if the reuse were limited to `yield*`. But because it already exists in `yield*`, this is not a blocking objection. That’s it. + +MF: Thank you. + +KM: I don't feel strongly, but I’m happy to allocate fewer objects when possible, especially in things that run inside of loops, since those tend to run a lot. + +CDA: That’s it for the queue. + +MF: So I guess I can share; I have some feedback from ABL on the pull request itself. You can also open this yourself; I’m going to open it in one second. From what I understand, it is inconclusive at this point whether or not SpiderMonkey would benefit from this change. It’s not written exactly identically to `yield*`; there would be one final access of the value at the end. So if we were looking for this to be a way for implementations to implement these helpers in JavaScript using `yield*`, we would have to slightly change that. But I would still be open to it, because one extra value access is probably minor compared to, as KM said, something happening in a loop: all of the IteratorResults yielded by the iterator are able to be reused, so it’s 1 to N. But so far, I think without more prototyping work, fully changing up the implementation, we won't have actual numbers on this. Either way, it’s probably not a huge performance difference; it’s just that we need to make a call in one direction or the other. I was hoping to hear more from implementers, if they have opinions here.
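+
+To make the identity question concrete, a sketch of what the PR would make observable (illustrative):
+
+```js
+const underlying = Iterator.from({
+  next() { return { done: false, value: 42, extra: "tag" }; },
+});
+const result = underlying.take(1).next();
+
+// Today, `result` is a freshly created { value: 42, done: false }, so
+// result.extra is undefined. Under the PR, take would forward the very
+// object produced by the underlying next(), so result.extra === "tag".
+```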
+ +DLM: First off, I have to admit I haven’t read ABL’s comment, so I'm not fully up to date on this. When we discussed this last week, we more or less landed on the idea that we are currently using generators for this, and that’s never going to be really optimal for us, so chances are we will eventually rewrite this code; it doesn’t make sense to object to the optimization based on an implementation that we are considering changing in the future anyway. I also don’t think we should be specifying this closely to the particularities of an implementation. But I would be interested in hearing from others; I expect this doesn’t affect V8, and it sounds like KM is in favor of this. So I think we’re more or less neutral. + +MAH: So, thanks for that information on the implementation status of `yield*`. If not all engines agree on what `yield*` does, maybe that also provides an opportunity: we could change the `yield*` implementations and align them to whatever we decide here, or not change them, if we decide to reuse. + +MF: We could. That seems like a regression to me. If anything, I think we should be leaning towards reusing the objects unless we have good reason not to. We heard from Agoric that they think it’s a bit weird how getters can make things behave, which is, you know, a reason not to; but it’s a balancing act. + +SYG: All things considered, I like allocating less, and this is straightforward when it means allocating less. But are we going to have inconsistency? Like, are some things going to reuse and some things going to recreate, and we’re going to have to know which? + +MF: Yeah, the only opportunities for doing this are where the iterator helper works on some underlying iterator and passes the exact same value that was yielded through to the result. And the only ones I’m aware of right now are take, drop, filter, and `Iterator.concat` in the iterator sequencing proposal. Other ones like `map` are not going to be able to reuse an IteratorResult object, because they don’t yield the same value; they yield a potentially different value. So it’s inconsistent in that way, but it would be consistent in that all helpers that pass a value directly through will pass – + +SYG: Is that true of map? You could mutate the IteratorResult object to update the value. + +MF: It could mutate. That’s right. + +SYG: It’s not clear to me, if the goal is to have fewer allocations, why not also do that? + +MF: I would be happy to explore that possibility. I wouldn’t have thought that that was feasible within this group. But if it is, I can come back with another proposal that tries to reduce allocations in that way. + +SYG: I’m very happy with the goal of reducing allocation. I think my only worry is that if we have a somewhat open, ad hoc thing, for good reasons or bad, on an individual helper-by-helper basis, that seems like it could be an interop issue in the future. This might be easy to miss or something. + +MAH: You cannot reliably mutate an object, because you don’t know where the object is coming from. The property might not be configurable or writable; you don’t know what is going on there. + +SYG: That’s fair. + +NRO: Just going to say, it’s a random user-provided object. It’s a bit weird to mutate it. + +CDA: That’s it for the queue. + +MF: Okay. Well, it looks like that more expansive option is not really viable. Thank you for that feedback. So it looks like we’re still considering just the scope that I had originally presented: take, drop, filter, and `Iterator.concat`.
I think I hear fairly weak arguments on either side, and given that, I think my preference is to ask for this change. If there’s opposition to that, I’d like to hear it. Otherwise, I would like to ask for this change, and for it to set precedent for `Iterator.concat`. + +NRO: I am very slightly against doing this, for the consistency reason that SYG mentioned: some methods would do it and some others would not. Maybe it’s obvious to us that the rule is “does this method need to observe the object or not?”, but it might be less obvious to other people why some methods reuse the object. I think it’s fine if `Iterator.concat` diverges from this, mostly because it’s a static method and not one of the methods on the prototype. + +MF: I can see that it’s not absolutely necessary for us to be consistent here. I just thought it was nice. But I would be fine with inconsistency, if that’s what is requested. + +KG: I’m also slightly against doing this, mostly because it makes the general shape less consistent, which I’m worried about not just for users but for engines as well. I have slight hopes that there’s some room for optimization here in engines, to skip allocations in a lot of cases, and I think that gets harder the more complex we make this. So my inclination is to keep the machinery as simple as we reasonably can, which means making all the methods implementable with a generator. In case we go the other direction, I want to mention that where you have been saying filter/take/drop, you can do it for flatMap too: if you get the iterator out of the mapper function for flatMap, you can forward those IteratorResults. + +RGN: We had only limited on-the-record participation. I’m wondering if a temperature check is appropriate. + +USA: That was it for the queue. + +MF: Do the chairs think we should do a temperature check? I’m also okay with asking for the inconsistent proposal from NRO: that we do not do this, but we do the optimization for `Iterator.concat`. That’s also fine for me; I can see that argument. + +USA: As it is, temperature checks are not binding. I don’t see why not. + +MF: As long as we have time remaining in the time box, I would be okay spending five minutes to do a temperature check. + +USA: Okay. Then let’s do a five-minute temperature check. Would you like to define precisely what the question is, and then I can start the temperature check? + +CDA: A quick point of order on this, besides the importance of defining the options: everybody needs to have TCQ open at the time when we start the temperature check, because if you come in afterwards, the interface will not pop up. One of the many quirks of TCQ. So if you have any opinion on this, please make sure that you have TCQ pulled up, where you can see the queue, are logged in, et cetera. That will enable you to see the interface and make a choice. Maybe we’ll give, I don’t know, 30 seconds just in case for that and then – + +MF: I can explain the options while we do that. I see three options: reusing IteratorResult objects for both the existing iterator helpers and `Iterator.concat`, reusing IteratorResult objects for neither, or reusing IteratorResult objects only for `Iterator.concat`. + +CDA: I pulled up the interface. If you can take a look, Michael, to see what people will be presented with, then you can define what each of those things means. Or, if you like, you just said – + +MF: The scale is not the greatest thing. + +NRO: A suggestion.
Can you do two separate checks? One comparing this to the status quo for the prototype methods, and then a separate one for `Iterator.concat`. People can vote the same way in both if they prefer the two to be consistent with each other, vote differently if they prefer my approach, or vote against reuse in both if they never want to reuse the objects. I think it’s okay to have two polls rather than one in this case. + +KM: I will copy whatever the question is into the topic, so people can see it too. + +MF: I’m fine with NRO's suggestion that we first ask about this for the existing iterator prototype methods. I don’t mean to call it an optimization: the reuse of IteratorResult objects for existing iterator prototype methods. Express your positivity or negativity on the topic using the emoji scale that we have. + +CDA: Do you want to use the meanings that are currently ascribed there, or did you want to provide your own – + +MF: That’s the best that we have. + +CDA: Okay. We’ll give this maybe, I don’t know, another minute. I don’t see any more responses trickling in. + +MF: Then it looks like we are very slightly leaning negative on that, the reuse of IteratorResult objects. So we can run the second poll, about `Iterator.concat`. We can do that now, or later during the `Iterator.concat` section; either way. Are we directly following this with `Iterator.concat`? + +CDA: Yes. Iterator sequencing for Stage 3 is next. + +MF: Then we may as well do it now, since we can kind of combine the topics. + +CDA: Okay. + +MF: So the question here is: do we want to reuse the IteratorResult objects for `Iterator.concat`, given the prior knowledge that we don’t want to reuse those IteratorResult objects in the `Iterator.prototype` helpers? + +CDA: I’m just going to pull up the last result to see; I don’t recall how many responses we had in total. It looks like we had at least 14, and we have about the same number here. Of course, the numbers can be skewed a little bit because you can vote for multiple things. “Vote” is the wrong word; it’s a multiple-choice selection, shall we say. All right. I think things have stopped trickling in. + +MF: This one looks fairly convincingly positive. So, when we come to that discussion, I will assume that we are making that change for iterator sequencing. I think that’s all; unless anyone is in the queue, I think we’re decided on this. + +CDA: I did not note who was unconvinced on this one. Folks who were unconvinced, did you want to make any remarks that you haven’t already made? + +JHD: I put “indifferent” on the last one. Do we care that there then won’t be consistency with the other iterator helpers? + +MF: That’s what the vote was for, right – + +JHD: Right. + +MF: Knowing that the existing iterator helpers are not going to reuse IteratorResult objects, do we then want to reuse them for `Iterator.concat`? And the result there was fairly positive. + +JHD: Okay. So nobody, including myself, is hung up on the inconsistency, or the inability to use `yield*` to polyfill it or shim it, or any of that stuff? We’re just kind of like, sure, let’s take the opportunity while we have it? + +NRO: I’m on the queue, and this has been discussed in Matrix. It’s the reverse: if you want to polyfill these with generators, then take and drop must not reuse, because a generator yields fresh IteratorResult objects, while `Iterator.concat` should reuse, because it would be polyfilled with `yield*`, which reuses. The inconsistency between concat and take/filter is exactly what makes polyfilling with generators possible.
+ +MF: I’m happy with that conclusion. + +CDA: Before you go to the next topic, would you like to dictate a summary and conclusion for the notes? + +### Speaker's Summary of Key Points + +MF: We have rejected the proposal to reuse IteratorResult objects for the existing iterator helpers on `Iterator.prototype`, not setting precedent for `Iterator.concat` but setting precedent for other `Iterator.prototype` methods in the future. + +## iterator sequencing for Stage 3 + +Presenter: Michael Ficarra (MF) + +- [proposal](https://github.com/tc39/proposal-iterator-sequencing) +- [slides](https://docs.google.com/presentation/d/1EHMDcnV9zJ1E7BRhKmYtzHchZvOzjWynR3W-VdNxglw) + +MF: Okay. Stage 3 is mostly a formality now. We have tests as a pull request to test262, not merged yet. ABL opened this pull request, I don’t know, about a month ago. I reviewed it, I added a couple of tests that I could think of that were missing, and it’s now all good for me. I know JHD has run it against his polyfill at various points; I’m not sure if his polyfill is fully passing those tests yet or not. But I am happy with the state of available tests for this proposal. + +MF: I have this pull request open for `Iterator.concat` to reuse IteratorResult objects, which is the topic that we just talked about. Based on the result of the last topic, I will merge this and update the one test in the test262 pull request to match. And that is all. So I would like to ask for Stage 3. + +SYG: Can we have them merged before the end of the meeting? I’m uncomfortable agreeing to Stage 3 if they’re going to sit in an open PR for some amount of time. + +MF: I’ve asked test262 reviewers, and they weren’t going to have time to review them before the meeting. I’m happy to make it conditional on the tests being merged, if that’s what we want to do. + +SYG: I feel somewhat strongly about that. For me, there are multiple points to having tests for Stage 3. One is to get the proposal author to think at a deeper, per-step level. Also, since Stage 3 is the throw-it-over-the-fence point for implementers, the point of having the tests is that we don’t have to reinvent them; if they’re not yet merged, we have to do that. I want them to be in the repo, to be runnable. It doesn’t have to be in the main trunk; that’s why staging exists. I want them to be runnable tests in the repo at the point of Stage 3. + +MF: Yeah, I’m mostly focused on the former. I think it has caused us to think more deeply about all the minor semantics, and we have now done that. But I understand that you would want to actually have them available to run in test262, and they’re not currently in the repo. + +SYG: As for conditional advancement, I’m happy to give conditional approval if, basically, they’re merged by the end of day 4. If there are no cycles for that, I would rather wait. + +MF: Okay. + +JHD: So, the tests were great. They helped me catch some bugs in my polyfill. I think there’s one test that is still failing, but I’m convinced now that that is solely due to the fact that I’m manually reimplementing generator state machine stuff without using generators, so I have dug my own grave on that one. But I’m convinced that the tests are correct. So I think that they’re ready to be merged once they’re rebased and the change we discussed is applied. I’m happy with Stage 3. + +MF: Sounds like we have to hear from the test262 maintainers to see if they would – + +JHD: I’m in that group.
So I will—if no one else wants to look at it, I will merge it once it’s passing or once it’s – + +SYG: To avoid putting other maintainers on the spot, can I make a concrete suggestion of having a two-minute extension at a later date? People would have some time to decide whether they want to defer on the review or press the button, and then you come back and say, okay, now it’s merged, and then we get Stage 3, instead of putting people on the spot right now. + +MF: It’s fine by me. I don’t know. JHD, is that okay? + +JHD: It wouldn’t be up to me. We have done conditional approvals in the past, approved once merged. If you’re not comfortable doing that, there’s nothing wrong with waiting until the end of the meeting or something to bring it back up. + +MF: SYG, should I take this as general feedback that if I submit tests for a proposal, and it’s thoroughly tested, and the tests have had a review from somebody else but have not been merged yet, I should hold off asking for Stage 3 advancement in the future until they have been merged? + +SYG: My preference is: land them in staging and wait for the full review. If you’re convinced that they’re correct, I’m happy to take the champion's assumption that they are correctly written, and as long as they’re in the easy-to-access and executable part, like staging, then when they kind of graduate out of staging, you can work on that in your own time, and you don’t have to wait for the full maintainer sign-off. + +MF: Okay. + +SYG: Either they are merged and you have the maintainer sign-off, or they are just in staging. That’s my preference. + +MF: I will take that path in the future, then. I’m going to ask for an extension item sometime later in the meeting, where we can revisit this, assuming that the tests have been merged – + +CDA: Okay. And the assumption is that that should be later in the meeting, as late as possible? + +MF: Yeah, I guess as late as we can make it. + +### Conclusion + +- MF will wait until the test262 tests have been merged before asking for Stage 3 again. +- This topic was not revisited later in the meeting. + +## ShadowRealm for Stage 3 + +Presenter: Philip Chimento (PFC) + +- [proposal](https://github.com/tc39/proposal-shadowrealm) +- [slides](https://ptomato.name/talks/tc39-2024-12) + +PFC: My name is Philip Chimento. I work at Igalia, and I’m doing this presentation in partnership with Salesforce. This is a short recap of, and ask for Stage 3 for, the ShadowRealm proposal. A quick overview of what ShadowRealm is: it’s a mechanism for executing JavaScript code within the context of a new global object and a new set of built-ins. You create this object, and inside it there is an entirely fresh JavaScript execution environment. You can evaluate code in it, you can import other modules into it, and they will be unaffected by anything you have done to the global object outside of the ShadowRealm. There’s a little code snippet here showing that it’s not affected by a global variable of the same name on the outer global object.
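+
+The kind of snippet referenced on the slide looks roughly like this (illustrative; the module specifier and export name in the last line are hypothetical):
+
+```js
+const realm = new ShadowRealm();
+
+globalThis.answer = 42;
+realm.evaluate(`globalThis.answer`); // undefined: the realm has its own globals
+
+// Only primitives and callables cross the callable boundary:
+realm.evaluate(`1 + 1`); // 2
+const double = realm.evaluate(`(n) => n * 2`); // a wrapped function
+double(21); // 42
+realm.evaluate(`({ x: 1 })`); // throws TypeError: objects cannot cross
+
+// Modules can be imported into the realm without affecting the caller:
+// const fn = await realm.importValue("./module.js", "exportedName");
+```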
+ +PFC: People get antsy when you mention the word “security” in the context of ShadowRealm. It is not about security but integrity: you want to have complete control over the execution environment, but it’s not a security boundary. I also asked ChatGPT to draw an illustration of ShadowRealm, and it came back with an “eerie otherworldly domain filled with dark energy and mysterious elements. Let me know if you’d like any adjustments or additions!” I think I have nothing to add to this. This is an exact depiction of what it looks like. + +PFC: The history of the proposal. At this point, everything seems to revolve around the question: which web APIs should be present inside ShadowRealm? Over the history of the proposal, we have had several different answers to this question that we don’t like. One possible answer was “none”. We don’t like that, because if you create a ShadowRealm in a browser, there’s no obvious reason why you shouldn’t have something like, I don’t know, atob() and btoa(), or TextEncoder and TextDecoder, in the ShadowRealm; they’re not intrinsically tied to the browser. It confuses developers when the answer to “can I use this facility inside a ShadowRealm?” comes down to knowing which standards body standardized it. That’s not a great answer. So we don’t like the answer of having no web APIs present in ShadowRealm. + +PFC: Another answer is a vetted list. We don’t like this answer either, for several reasons, but the main one is, again: how are developers going to know whether they can use something or not? Telling them to go consult a list is not that much better than telling them to look up which standards body standardized the API. Another possible answer was a criterion based on confidentiality, which got us closer to an answer, but in the end, people found that criterion hard to evaluate without getting into the weeds, which is something we want to avoid. In a couple of slides, I will present the answer we have now, but this is the history of the various answers we have had to that question. + +PFC: The proposal has been at Stage 3 before. In September of 2023, it was moved back to Stage 2 due to this question, basically (which web APIs should be exposed), and also due to concerns that the test coverage for these web APIs wasn’t sufficient. In that meeting, we made readvancement contingent on two implementations explicitly supporting that the testing and the list of APIs exposed to ShadowRealm are sufficient. In February of this year, we advanced the proposal to Stage 2.7 with the understanding that Stage 3 requires sign-off from the HTML folks on the HTML integration, as well as resolution of Mozilla’s concerns about the test coverage. At the time the proposal was moved back to Stage 2, it was noted that this was not an opportunity to relitigate design decisions; there is a narrow scope for answering these questions and concerns. + +PFC: So, what is the state today? Which web APIs should be exposed inside ShadowRealm? We have written a design principle for the W3C TAG that governs whether spec authors should choose for something to be exposed everywhere or not. A little bit of background on this: web specs have an "Exposed" annotation with which you can say whether something is exposed in, for example, windows and workers. As part of the preparation for the HTML integration of ShadowRealm, it gained an "exposed everywhere" option, and this design principle tells spec authors when to use that. The principle is that only purely computational features are exposed everywhere. That means features that perform I/O are not purely computational, and features that affect the state of the user agent are not purely computational. As an additional exception, anything relying on an event loop is not exposed everywhere, because one place where things can be exposed is worklets, which don’t necessarily have an event loop. And the final part of the principle is to expose conservatively.
So features that are primarily useful for other, unexposed features are not exposed either. An example of that is Blob, which is a purely computational web API, but one mainly used in the context of I/O, so we should default to not exposing it unless there’s a really good reason for using it by itself. + +PFC: We developed this design principle based on a number of conversations with implementers and web platform experts. We tried a few different iterations, and I think people are mostly happy with this one. There is a clear criterion for spec authors to decide whether something is in or out, and the distinction I mentioned before, which standards body defined the API, is irrelevant to it. If you want the full list of the 1300+ global properties that are available in web environments, with which are in, which are out, and why, there’s a spreadsheet to click through there. + +PFC: The current state of the HTML integration: there’s the pull request to click through here. The design is settled, and there have been reviews. There are some details still being worked on, in particular some mechanical work needed in specs downstream of HTML to use the new terminology of principal settings objects and principal global objects. + +PFC: We talked earlier about test coverage, so I will show you an overview of the APIs that now have test coverage in web platform tests run in ShadowRealm. One thing we did was to not just test in a ShadowRealm created in a regular browser window, but to test everything in ShadowRealms created in multiple different scopes. You can create a ShadowRealm, and run code inside it, from any of the scopes listed here: window, worker, shared worker, service worker, audio worklet, and another ShadowRealm. Testing an API might succeed in one of those and fail in another if there are, for example, assumptions that the global is either a window or a worker, which sometimes exist in code. So now, tests run in ShadowRealm scopes in web platform tests will be run in all of these scopes by default. + +PFC: I have got a list here of all of the web APIs that are exposed according to the new criterion, with links to PRs adding web platform tests for testing those in ShadowRealm. Some of these PRs are still pending review. + +PFC: Here they are: Abort, Base64, console, et cetera. There are several slides of this; you can click through to the PRs if you want to see the details. A couple of these, like crypto and URLPattern, are separate specs, and for those we have additional integration PRs to add the exposed annotation in those specs, which is up to the authors of those specs. + +PFC: There are a couple of things that are exposed that don’t have any WPT coverage: TransformStreamDefaultController and WebTransportWriter do not have tests in any realm. But when they do get tests, we will enable them in ShadowRealm as well. + +PFC: So, the requirements for Stage 3. The TC39 requirement is that the feature has sufficient testing and appropriate pre-implementation experience; I think we can safely say that this requirement is fulfilled. Then we had the specific conditions that were imposed when we moved back to Stage 2: explicit support from two implementations that the testing and the list of APIs to be exposed to ShadowRealm are sufficient; sign-off from the HTML folks on the HTML integration; and resolution of Mozilla's concerns about the test coverage. I think we can discuss these requirements in the queue.
+
+PFC: On the HTML integration, I think we have moved that as far as we can until we hit a chicken-and-egg situation. There is agreement on the APIs to be exposed, and it needs two statements of explicit support from implementations, as per the WHATWG process. It has moved as far forward as it can go until we get that positive signal from implementations, which I am hoping we can also discuss in this meeting. And then there is resolution of Mozilla's concerns about test coverage: I talked to MAG a couple of days ago and it looks good, but he's going to take a closer look. I am hoping we can discuss that on the queue as well.
+
+PFC: So let's move to the queue now. This is a fairly short slide deck, but I am expecting a certain amount of discussion; I think the majority of the time will be spent on that.
+
+RGN: Yeah. I had a question about the TAG guidelines, where you mentioned I/O as being excluded from exposure in ShadowRealm, and I wanted to know: is it actually I/O, that is, input *and* output, or just input? Because APIs such as `console.log` do produce output, and have proven useful for anyone with access to the debug console.
+
+PFC: This is a very good question, and I have actually mentioned console as a particular example in the design principle guideline. Technically, the console is I/O: it definitely prints a message in the developer tools of the user agent, it affects the state of the user agent, and it might also write a message to a log file. But this output is unobservable from JavaScript; you can't use another API to read back the messages that were output to the developer console. And the practicality of having console in all environments weighs strongly in favour of including it. Console is a debatable case, but I think everybody I have talked to feels it needs to be included, and I certainly strongly agree with that. Not having a console in an environment would be very weird.
+
+RGN: Okay. Yeah, I agree as well. I would not want to see a guideline that was worded too broadly used as justification for excluding `console`. Thanks for the clarification.
+
+WH: I have a question about "purely computational". Does it mean that, no matter which environment you run in, the result will always be the same? Or can the result depend on aspects of the environment, such as locale, what hardware you have installed, or such?
+
+PFC: It should not depend on what hardware you have installed. But it is also not the case that the result will be exactly the same no matter what environment you run in. For example, we have exposed the isSecureContext boolean global property, which will be true if you created the ShadowRealm inside a realm that is a secure context, and false if you created it inside a realm that is not a secure context. I would have to look at the W3C PR for the particular definition that we want to use in the design principles; we are leaning on the definition of not performing I/O and not affecting the state of the user agent or the user's device.
+
+WH: My question is regarding manipulation of external state. Can you read external state, such as the locale, or various state similar to that?
+
+PFC: Reading the current locale is a capability that is exposed by ECMAScript itself. It would be difficult to say that a JavaScript environment couldn't do that. The same goes for Date.now().
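+
+(A small illustration of the environment-dependence PFC mentioned: `isSecureContext` inside a ShadowRealm mirrors the realm that created it.)
+
+```js
+const realm = new ShadowRealm();
+realm.evaluate('isSecureContext');
+// => true if the creating realm is a secure context, false otherwise
+```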
+
+WH: Okay. Can this form a one-way communications channel? And do we care?
+
+PFC: Do we care? Good question. Like I said early in the presentation, the goal of the ShadowRealm proposal is not security, but integrity. A ShadowRealm is not useful unless you do have some sort of communication with it. I am not an expert on what kinds of things can be used as a communications channel, but I think that is pretty much covered by the callable boundary.
+
+WH: Okay. I just wanted to understand how deep the prohibition on the "I" of "I/O" goes. Thank you.
+
+KG: Waldemar, I recommend looking at the spreadsheet as well. There are a lot of examples there, which might help if you are familiar with the web APIs anyway.
+
+SYG: Are the WPT tests merged?
+
+PFC: Some are and some are not. You can see which ones are still pending in the slides; I updated them as of Friday, I think.
+
+SYG: Yeah. In a similar vein to having Test262 tests merged, what is your read on getting these merged ASAP? Stage 3 is the implementation stage. I think it's even more important to get these merged than Test262 tests, because they're not as easy to discover while they're spread across a bunch of different PRs.
+
+PFC: Yeah, that makes sense. If it were all in one PR, it would probably not be realistic for one person to review the whole thing. But I don't currently see any obstacles to getting these merged, other than review capacity.
+
+SYG: Okay. Thanks. To be clear, I would be more comfortable with Stage 3 once they are merged. I have no other concerns than that.
+
+PFC: Okay.
+
+NRO: Yeah, relative to what SYG said: I don't know how it works, but I guess we can merge them as tentative. They use this tentative marker for tests that are not fully confirmed for some reason.
+
+PFC: Some of the tests are tentative ones, like the Wasm integration ones. But for most of them it's not feasible to do it tentatively, because this work takes already-existing coverage and adds a flag to it that says: run this in ShadowRealm as well. So those tests are already not tentative. We might be able to do something in the test harness that marks only the ShadowRealm runs as tentative. A number of the PRs have been merged already, and if we can get reviews on the rest, that would certainly be preferable to using the tentative flag.
+
+DLM: I just wanted to answer the specific question of whether or not Mozilla is happy with the test coverage. I hadn't remembered that we are the gatekeeper there, but we would like to recognize that a lot of work has been put into the tests, and we no longer have concerns about the test coverage.
+
+PFC: Okay. Thanks.
+
+KG: Yeah. I really like the principle of pure computation. I did want to raise some wrinkles, all of which have come up on the various threads, of which there are several. I don't necessarily think these need to hold up the advancement of the proposal, except maybe in one case, which we can talk about. But I do want to try to get more clarity about what exactly pure computation means. In particular, you have the WebCrypto stuff not being included. I don't understand how that can fail to be computation. It's not like it needs a trust store, and it doesn't even use hardware. Most of the time, you can shim subtle crypto.
+
+PFC: Can you? My understanding was that it required access to a trust store. If it doesn't, then we should take another look at that, I guess.
+
+KG: Most of it doesn't. Maybe there's some part I am not thinking of, but the basic SHA-256 of an ArrayBuffer doesn't, and likewise encryption and decryption; maybe there are some other things I am forgetting. If that's an oversight, that's fine.
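+
+(For reference, the kind of purely computational WebCrypto operation KG refers to: hashing bytes with `crypto.subtle.digest`, which involves no trust store.)
+
+```js
+// (in an async context)
+const data = new TextEncoder().encode('hello');
+const hash = await crypto.subtle.digest('SHA-256', data);
+// => an ArrayBuffer containing the 32-byte digest
+```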
+
+KG: And then there are some things that could in principle be implemented in WebAssembly, but probably use hardware; video encoding and decoding is the example here. You have that excluded on the basis of being mainly useful for I/O. Which—
+
+PFC: Yeah.
+
+KG: I think that is basically fine. But it doesn't answer the question of, you know: assume there is some hardware module that is useful for some operation we think is reasonable to perform in a ShadowRealm. Does the fact that it is done by dedicated hardware mean that it's not usable in a ShadowRealm? WebGPU is maybe the example here; I forget if it shares state with other WebGPU stuff running on the same page. If it doesn't, it seems like that is basically pure computation.
+
+PFC: Yeah. We did discuss this on the thread about the design principle. I don't really have a strong opinion on it. I feel like, if it could be emulated in WebAssembly, there's no reason to keep it out. But I don't know enough about what use cases people would want for WebGPU in ShadowRealm to say it should be out because it's non-CPU computation, or it should be in because you can do this and that with it. I would say, in the absence of anything else, it's out for reasons of primarily being useful for other things that are not exposed in ShadowRealm, but I—
+
+KG: Yeah. These days, a lot of WebGPU use is LLMs, and that's not unreasonable to use in a ShadowRealm.
+
+PFC: You mentioned audio worklets in your queue item.
+
+KG: Yes. Maybe this was resolved, but some of the people that work on audio at Mozilla had a concern about not wanting to allocate memory in an audio worklet. I think that's a concern specific to audio worklets that shouldn't carry over to ShadowRealm; it just complicates this `Exposed=*` thing. If this implies exposing TextEncoder and the like in audio worklets, and they don't want that—is there a resolution to that? Was the plan to do it anyway?
+
+PFC: I think these are two conflicting viewpoints, and both are reasonable. One is that audio worklets must not expose anything that allocates memory, and the other is: well, just don't do that in audio worklets, then.
+
+KG: Right.
+
+PFC: Neither of these is unreasonable. I think the latter is the more commonly held position, and that's what I have proposed in the TAG design principles issue. I don't have a strong opinion on it, but I don't like the idea of keeping things out of audio worklets that are otherwise exposed everywhere. And if the TAG decides on the former viewpoint, that you must not expose anything in audio worklets that allocates memory, then I think it's better to just make the HostInitializeShadowRealm operation throw if the incubating global is an audio worklet.
+
+KG: That would work for me. I don't have a strong opinion about the audio one or the WebGPU one. But there is still some edge case that is unsolved. I am fine with going forward with the principles as written and the list that you have, with the change to expose crypto. I just wanted to talk through these.
+
+NRO: Yeah, just a clarifying question: which APIs allocate memory? Is `new ArrayBuffer`—
+
+KG: ArrayBuffer, yes. Probably not a plain object. The concern was specifically about allocations that are unbounded or based on user input. And yeah.
+
+PFC: I guess TextEncoder is an example of why that fight is kind of already lost, because TextEncoder already has the exposed-everywhere attribute. But in an audio worklet, you are only supposed to use encodeInto() on an already-existing buffer, because that doesn't allocate a new buffer. So that ship has already sailed: TextEncoder is already exposed everywhere. So, you know, shrug.
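+
+(The allocation-free pattern PFC mentions: `encodeInto()` writes into a caller-supplied buffer instead of allocating a new one.)
+
+```js
+const encoder = new TextEncoder();
+const buffer = new Uint8Array(64); // preallocated, e.g. once per worklet
+const { read, written } = encoder.encodeInto('hello', buffer);
+// read === 5 (code units consumed), written === 5 (bytes written)
+```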
+
+KKL: Lots of folks were expecting to hear from us, the Hardened JavaScript community, about this proposal, and I want to make this explicit: we are unworried about any particular decision, though elated that you have come up with a criterion that was enough to make thousands of small decisions and make progress on this change. I wanted to remind folks that the reason we are unworried is that the capabilities are deniable in a ShadowRealm by the code that runs first there, because we got in early: the requirement that the properties of the global object of a ShadowRealm are all deletable ensures that, and thank you for shepherding that through. And while we are elated that the implementers and other specification authors find the criterion sufficiently unambiguous to use it to make a lot of small decisions, we recognize there are ambiguities in it, and we remain unworried for the prior reasons. For example, one ambiguity: the criterion of nothing that schedules to the event loop obviously does not limit the use of the microtask queue; I assume that by "event loop" we mean the I/O scheduler. That's all I have to say about this; thank you, and good work.
+
+PFC: Yeah. To answer the question about microtasks: queueMicrotask is reasonable in the ShadowRealm because it does not rely any more on an event loop than `Promise.resolve` does.
+
+CDA: Nothing in the queue.
+
+PFC: All right. In that case, how do folks feel about moving the proposal to Stage 3?
+
+SYG: As I said before, I would be more comfortable with Stage 3 once the tests are merged. And let it be reflected in the record that there are no concerns other than the mechanical one of having the tests merged. Before the WPT coverage is there, I am not comfortable signalling that it's basically ready for implementation, because it's too easy for things to slip through the cracks.
+
+PFC: That's fair enough. Other than that, are there any concerns that I should be aware of when bringing this back once the tests are merged?
+
+CDA: I am not sure who that question was directed to.
+
+PFC: Everybody.
+
+KG: As long as we are okay with continuing to bikeshed smaller things like `crypto.subtle`, and with the potential open question of the interaction with audio worklets specifically—I am fine with leaving those open.
+
+PFC: Okay. I will look in more detail into which parts of `crypto.subtle` are able to be exposed.
+
+NRO: Yeah. PFC said we're in a chicken-and-egg situation: you need support from browsers to merge the HTML integration. Are we going to assume that once this proposal is at Stage 3, browsers are implicitly supporting it in the WHATWG integration, or are we looking for something more explicit?
+
+PFC: That's a good question, and I would actually like to hear what other folks think about that.
+
+SYG: That's my understanding. Sorry, I am not in the queue. Let me check the queue. Is it okay if I jump the queue without typing?
+
+CDA: I think there's nobody else on it.
+
+SYG: That's my understanding: because the browser vendors are in the room in TC39, we should not get something to Stage 3 if we do not have browser consensus. So if there are concerns within an individual browser vendor—such as, the HTML/DOM side of your team does not agree to ShadowRealm—you should not give consensus. My understanding is that once we give Stage 3 consensus here, that implies at least two implementations. Like, it implies all three, actually.
+
+KM: I may have to be tentative on giving that sign-off. I think it's fine, but I need to double-check with our folks; I wasn't aware of this before the meeting.
+
+DLM: We have some concerns around this area. I have not been attending—I am not really involved in the HTML side of things. But what I have heard from the people at Mozilla who are involved in that side of things is that there are no real objections, but neither are there real statements of interest on the HTML side, and we would need browser vendors to express interest in implementing this for it to go ahead on the HTML side. From what I heard last week, that's not the case right now, and I don't really feel that my statement of support or objection here is in any way speaking for the DOM team at Mozilla.
+
+DLM: Just to be clear, I don't feel the same way as Shu. A lack of objection from me here doesn't mean that our DOM team is going to express an interest in implementing this on the HTML side.
+
+PFC: It's good we discussed this.
+
+SYG: You said a lack of objection from you here does not—sorry, I missed the second part.
+
+DLM: Sorry, that was not clearly stated on my part. I am not speaking for the DOM team; I feel like that is a separate group that needs to be convinced that this should be implemented. I realize, you know, you and I both work at browser vendors as well, but in my case I am only speaking here as a TC39 representative, and there's no kind of internal consensus between us and the DOM team about implementing ShadowRealm. This is something that I have brought up in our internal meetings with them, and I have spoken to the people that are representing us at WHATWG, and my understanding is this was discussed there last week, and it sounds like none of the people in that meeting expressed an interest in implementation at this time.
+
+SYG: I see. Maybe we all need to sync up as browser vendors, but I would encourage the other browser vendors to not give consensus here in that case. In particular, since so much of the proposal's semantics is around web API integration, if there is not a willingness to implement on the HTML/DOM side of the team, our giving consensus for Stage 3 in TC39 will send the wrong signal, and we shouldn't do that. Stage 3 means it's coming for sure, just a matter of time; if you don't have that internal consensus on the HTML side, I don't think we as the representatives of Chrome or Firefox or Safari should give consensus. We should block or give consensus depending on the internal agreement.
+
+SYG: That's not to put you on the spot. But since I am blocking on procedural grounds anyway, until the tests are merged, I would request that the other browser vendors get internal consensus, or establish the lack thereof, before Philip comes back with a Stage 3 ask.
+
+DLM: Yes, I am definitely willing to bring this up internally again. And yes, I agree with you; this is something I also said about the risk of sending the wrong signal here. I am also sensitive to the fact that it feels like we're moving the goalposts a little bit in terms of the work that the ShadowRealm champions have done. But I can't really disagree with you, Shu: it is sending the wrong signal if we say this is good for Stage 3 and there's no kind of expression of interest. And to be clear, my understanding of what was communicated to me was that there were no objections to this; it's just that there was no particular statement of interest in implementing it any time soon. I will work to clarify that before this comes back.
+
+KM: In addition to what DLM said, I have basically the same feedback, except that I have probably had fewer conversations. I did not hear any objection, but I also did not hear any strong desire from the HTML folks to do this work. But I will also ask them and come back.
+
+PFC: Okay. Before bringing this back, I will be in touch with all of you asking how these conversations went. So I think I will withdraw the request for consensus, and come back at a future meeting, probably February, after the tests are merged.
+
+MLS: Yeah. I am the same as KM; I haven't talked to the HTML folks. But isn't WHATWG the right venue for all browsers? Because it has not just the people that are with our companies; others there are also the right ones to indicate their interest in moving ShadowRealms forward on the HTML side.
+
+SYG: It's a little bit tricky. I guess it's kind of chicken-and-egg, but it just feels existential. If we agree to Stage 3 and no browser ships it, that's bad. If we agree to Stage 3 and a subset of browsers ship it, that's also bad. If the HTML side decides whether we ought to ship a TC39 proposal, I would like the consensus to completely agree between the two. Where the conversations happen—I don't know where to best facilitate that. But are you suggesting that we all go to the WHATWG meeting to hash it out there?
+
+MLS: It sounds like we have a homework assignment on the TC39 side, for the browser companies to have this conversation internally. But it seems to me that there have also been these discussions: we are having a discussion right now, and we have had discussions on ShadowRealm in the past, and we're inclined, from the TC39 point of view, to move forward. Is there the same kind of inclination at WHATWG? In TC39, obviously, the browsers—as you say, SYG—need to agree; we want browser support. But it's the whole of TC39 that wants the proposal. I know WHATWG is a little different from us as far as its makeup. But if TC39 wants it as a whole, that helps the conversation between the browsers.
+
+SYG: I agree. I am not clear if there's a concrete suggested course of action that is different from my suggestion.
+
+MLS: There is a concrete action that we need to take as TC39 delegates, and that is to talk to the HTML folks. Earlier in the slides, we saw all the PRs on the HTML side that haven't moved forward to completion. Is the desire in each of those sub-venues that they do move to completion?
+
+SYG: My understanding is that, yes, because the goal is that we all agree to something we will all ship. And if we allow things to move to Stage 3, but then, due to whatever reasons external to TC39, we don't ship, I think that is a breakdown in the norms of working in TC39 at all. Why do we agree to Stage 3 if we then don't ship without good reason, just due to external things that come up after we agree to Stage 3?
+
+CDA: There is nothing on the queue.
+
+MLS: I was muted; let me respond to you, Shu. The general idea is that when something gets to Stage 3, everybody in TC39, including the browsers, has the intent to implement and ship it. I know that in practicality, that doesn't always play out.
+
+SYG: It sounds like we are agreed that we, as implementers, should not agree to grant Stage 3 unless we have our ducks in a row internally. If we don't know whether we're going to implement and ship ShadowRealm, because the DOM-side folks might not agree, we should figure that out before we advance it to Stage 3, is all I am saying.
+
+MLS: And I agree. So, you know, KM and I will do our homework, and you will do yours, and DLM will do his, and so on and so forth.
+
+PFC: Yeah, that's my assumption as well. It's good that we confirmed that.
+
+DLM: Sorry, I wanted to add a little bit more to this topic. Yeah, I agree: we shouldn't let things move to Stage 3 if we don't think they will be implemented in a reasonable amount of time. I will follow up, but I don't feel like it's my job to advocate for a proposal with our DOM team. We can ask for feedback—in this case I have raised it, since this is very timely—but I would encourage proposal champions to also work to make sure that things don't get lost on the HTML side. I can ask for people's feedback, but I can't require it. And I would also like to say I am very happy that this topic came up now, because I think when the HTML integration comes up, there will be a substantial amount of work needed on the HTML side as well, and I am glad we established the rule that we won't let things advance to Stage 3 without our DOM teams as well.
+
+SYG: My intention is definitely to get clarity; I am not asking the individual delegates to champion proposals that you are not championing.
+
+PFC: So that's an action for me, which I will definitely take to heart.
+
+DLM: Yeah, just to follow up on what SYG said: yes, I am certainly quite happy to ask for opinions, but I am not going to press for opinions. So whether this is fine for Stage 3 is a thing that the proposal champions will have to take up with the people involved in the HTML spec.
+
+CDA: Anything further in the queue? All right.
+
+PFC: Then I think that brings us to the end.
+
+### Speaker's Summary of Key Points
+
+- Since advancing to Stage 2.7, the web APIs available in ShadowRealm have been determined using a new W3C TAG design principle.
+- Each of these available web APIs is covered in web-platform-tests with tests run in ShadowRealm, including ShadowRealms created from multiple scopes such as workers and other ShadowRealms. Some web-platform-tests PRs are still awaiting review.
+- The HTML integration is now agreed upon in principle, and needs some mechanical work done in downstream specs. However, it needs two explicitly positive signals from implementors to move forward.
+- The concerns about test coverage have been resolved, assuming all of the open pull requests are merged.
+- We will get the web-platform-tests merged, look into what can be included from crypto.subtle, talk to the DOM teams of each of the browser implementations, and get a commitment to move this forward. When that is finished, we'll bring this back for Stage 3 as soon as possible.
diff --git a/meetings/2024-12/december-03.md b/meetings/2024-12/december-03.md
new file mode 100644
index 00000000..76a684dc
--- /dev/null
+++ b/meetings/2024-12/december-03.md
@@ -0,0 +1,847 @@
+# 105th TC39 Meeting | 3rd December 2024
+
+-----
+
+**Attendees:**
+
+| Name             | Abbreviation | Organization       |
+|------------------|--------------|--------------------|
+| Michael Saboff   | MLS          | Apple              |
+| Dmitry Makhnev   | DJM          | JetBrains          |
+| Nicolò Ribaudo   | NRO          | Igalia             |
+| Jesse Alama      | JMN          | Igalia             |
+| Luca Casonato    | LCA          | Deno               |
+| Daniel Minor     | DLM          | Mozilla            |
+| Waldemar Horwat  | WH           | Invited Expert     |
+| Chengzhong Wu    | CZW          | Bloomberg          |
+| Jirka Marsik     | JMK          | Oracle             |
+| Jack Works       | JWK          | Sujitech           |
+| Chip Morningstar | CM           | Consensys          |
+| Ujjwal Sharma    | USA          | Igalia             |
+| Andreu Botella   | ABO          | Igalia             |
+| J. S. Choi       | JSC          | Invited Expert     |
+| Ron Buckton      | RBN          | Microsoft          |
+| Keith Miller     | KM           | Apple              |
+| Chris de Almeida | CDA          | IBM                |
+| Jan Olaf Martin  | JOM          | Google             |
+| Jason Williams   | JWS          | Bloomberg          |
+| James M Snell    | JSL          | Cloudflare         |
+| Jordan Harband   | JHD          | HeroDevs           |
+| Philip Chimento  | PFC          | Igalia             |
+| Richard Gibson   | RGN          | Agoric             |
+| Eemeli Aro       | EAO          | Mozilla            |
+| Istvan Sebestyen | IS           | Ecma               |
+| Sergey Rubanov   | SRV          | Invited Expert     |
+| Devin Rousso     | DRO          | Invited Expert     |
+| Samina Husain    | SHN          | Ecma International |
+
+## Briefing on the formation and goals of TC55 (or, All About Moving the WinterCG into Ecma)
+
+Presenter: James Snell (JSL)
+
+- [slides](https://docs.google.com/presentation/d/1WnqF7y52QlPRw737ZOTC4rdmJ65-nT9BbOD05jr2sjE/edit?usp=sharing)
+
+JSL: Hello everyone. It's been a while since I've been to a TC39 meeting. Good to be here. I was previously here as an invited expert; now I'm representing Cloudflare. I'll talk about WinterCG and TC55—or, all about moving WinterCG into Ecma—and about what WinterCG is. We have 30 minutes scheduled for this, and I will try to get through it relatively quickly so we have time for discussion and questions and that kind of thing. If I skip over some key detail, just go ahead and add it to the queue and we'll address the questions afterwards. So, what is WinterCG? It started a couple of years ago, as more and more non-browser ECMAScript runtimes like Deno, Bun, porffor, Cloudflare Workers, and others really started to emerge in the ecosystem. There was a risk of fragmentation, where Node might have one set of web platform API globals, Deno a different set, Bun another, and so on: a risk of dividing the ecosystem among the individual runtimes. The original idea of WinterCG was: let's get all the runtimes together to at least agree on a common set of web platform APIs that we agree to implement interoperably, and call it the minimum common API. This is basically just an informal spec that says, for instance, if you provide streams, use ReadableStream and WritableStream. It's a minimum set of APIs that we should expect to exist in all of the runtimes; we should expect them to be there and expect them to be consistent with each other.
+
+JSL: Now, this was originally set up as a W3C community group. If you're not familiar: in a community group, you're not allowed to publish normative specs; you can do notes and informal recommendations, but you can't have anything that normatively says "this is what you must do". But almost as soon as WinterCG put out this minimum common API draft, we immediately had calls from the ecosystem saying: let's have a definition of compliance. We had people making claims like "we are a WinterCG-compliant runtime" or "this module is WinterCG compliant", and we had no definition of what compliance was and nothing we could enforce.
+
+JSL: We had other discussions about, hey, what do we do with fetch, since fetch on the server works differently than fetch in the browser? What do we do with some of the other APIs that we were being asked to look into—for instance, streaming crypto, adding streaming capabilities to WebCrypto, that kind of thing? We discovered we really didn't have a good structure for talking about normative things. We couldn't do a normative definition of compliance, and we couldn't really have a clear interaction: how do we relate to WHATWG, and how do we relate to some of the other standards efforts? We took a step back and wanted to formalize this and come up with a better approach to how we deal with all these different questions. That's where we're at now with moving WinterCG into Ecma as a technical committee, namely TC55.
+
+JSL: The charter is pretty straightforward; this is copied directly from our draft charter right now. "Define and standardize" is the key part here: a minimum common API for server-side runtimes, along with a verifiable definition of compliance. What is this going to mean? The minimum common API is not novel APIs. It is a list of APIs that already exist, all of them web platform APIs—things like ReadableStream and URL are in there. The intent of the minimum common API is not to define something new, but a subset, and a compliance level: if you are a runtime compliant with the spec, these are the APIs that you will have, they will pass these sets of tests defined in either Test262 or the web platform tests, and this is how those things must be implemented. This gives the ecosystem a common base to write code on, so we're not fragmenting things such that they just work in Deno and not in Node. Test262 doesn't overlap with web APIs, and we don't want to create a whole new version of the fetch spec. What we might do is cover things that the web platform does not, like CLI APIs or anything else that is needed on a server platform. And of course all these things will be operating under the royalty-free policy.
+
+JSL: As for the program of work: the minimum common API is the primary piece of work for the foreseeable future, defining what it is and what compliance with it is. When I say compliance: what is the subset of the web platform tests that these runtimes must be able to pass? Are there variations in behavior from the web platform that need to be standardized? For instance, fetch on the server is not necessarily going to have all of the CORS requirements in there; there is a subset of those that we need to define as out of scope for these environments, that kind of thing. Beyond that: collect the requirements of non-web-browser runtimes, with input and feedback. If we have a change to a spec, we go to the WHATWG and say, here are the requirements we discussed and here is what we identified, and we work within their process to make the changes if we can. So we're not trying to change anybody's process. We're not trying to go around it.
We really want to have a forum to work within, but still be able to discuss common requirements, that kind of thing. Should it be necessary, the committee will standardize new API capabilities relevant to server-side runtimes. We have identified a couple of these, but the key focus is the minimum common API. Then we have the notion of standardizing and maintaining conformance levels: the minimum common API is one level. Another one may be: if your runtime does CLI apps, here is another set of APIs that you need to be able to support. If you're doing sockets, here is another set of APIs that you need to be able to support, that kind of thing. Each of those would be defined as a separate conformance level.
+
+JSL: Working with others: we had a lot of questions about how we interface with other groups. Again, we're not going to fork anything. We are going to work within the process of those other groups, whether it's TC39 or WHATWG or some other W3C working group; it doesn't matter. We will use TC55 as a forum for discussing and collecting requirements, and go off and make the contributions to the other specs as they are being discussed.
+
+JSL: We already talked about conformance levels; we will have a number of these. The first one, and the primary one we will be focusing on initially, is the minimum common API.
+
+JSL: And again, we are keeping everything royalty-free.
+
+JSL: That's the presentation. I wanted to go through it quickly to make sure we have plenty of time for discussion and questions, if anyone has any concerns. I know there are a few folks here, like LCA and ABO, who are involved in this process, so I'd be happy for them to add comments or anything to this as well.
+
+NRO: I'm happy to hear you plan to have normative references to WHATWG specs, for example for the common API. I tried this in the source maps spec and we struggled a little bit with saying those things were normative. So I was planning for that spec to work through the Ecma rules to be able to actually have normative references to WHATWG, and I'm happy to hear we'll see this in TC55.
+
+JSL: That is one of the key things we will need to work through: how do we have those normative references, and what are the requirements there? It's one of the open questions, and definitely something that it's great to see raised.
+
+SYG: I'm missing a step in the reasoning here. I heard in the beginning that you are a CG in the W3C and you can't publish stuff normatively. My understanding of the substitute is to do it via a WG. An example is Wasm, where the CG hands things off to the WG to stamp. Your reasoning was: the CG can't do this, so we're moving to a TC in Ecma. I'm missing some of the middle part there.
+
+JSL: We basically put it to a—not necessarily a vote, but a consensus decision within the WinterCG members: hey, do we want to do this as a W3C working group or in Ecma? And the majority of folks came down on preferring the Ecma process, so let's pursue that. We could have gone either way; the Ecma committee is just the one we landed on that everyone is most comfortable working in.
+
+LCA: There's another part to this, which is that when we initially started trying to figure out how to publish standards, one of the options we also looked at was keeping the community group in the W3C but also having a technical committee within Ecma to actually normatively standardize things. Which, unfortunately, due to various policy reasons from within Ecma and the W3C, was not possible.
But, yeah, we were really trying to get to the point where we could have something that would work similar to the Wasm group, where we can have a relatively open discussion with relatively few requirements on people who want to join, and then have a place to standardize. I think we have figured something out with the Ecma secretariat where the invited expert policy is lenient enough to enable us to do that within Ecma.
+
+AKI: I just want to add here, in case it wasn't clear: there's not currently a working group in the W3C corresponding to this community group. Regardless, in order to publish something, a new group would have needed to be chartered.
+
+SYG: Right, thanks for that. Can I respond to this? Can you say more—I heard a little bit about why the participants prefer the Ecma working mode, which is the invited expert thing. Were there other reasons you can share?
+
+LCA: We were initially also unclear about how exactly the Wasm process actually works between the community group and the working group, because we got some conflicting information from folks at the W3C about where standardization actually happens. And we did get clarity more quickly on how things work within Ecma, because we also had closer contact with folks within Ecma. Ultimately, it could have gone both ways. It just happened to work out such that we had more contacts with folks at Ecma, and we within the group thought that this was the more convenient place for us to do this.
+
+WH: Can you say more about how the conformance tests would work?
+
+JSL: For the minimum common API, the intent is really just to specify a subset of the web platform tests: basically calling out which ones these runtimes are expected to pass, which ones are expected to fail, and where variances in behavior may exist. So it really will just create a profile of the web platform tests that says, here is the subset you have to be able to pass. That will be the conformance test for the minimum common API. For other things—if this committee does go off and produce a novel spec—it would define a set of tests in the web-platform-tests style; whether those would be added to web platform tests or some other project remains to be seen and determined. We would define what those tests are for those particular new specs.
+
+CM: I heard lots of references to W3C and WHATWG and things that are explicitly server platforms, but I wanted to check (and I suspect I know the answer) that TC53 and the work they're doing is on the radar. Because I think there will be considerable overlap with a few of the things that are in the APIs that they're specifying.
+
+LCA: I think we had a lot of discussion during the chartering process, also with you, on figuring out how to cleanly split what TC55 does and what TC53 does. For those unaware, TC53 is the technical committee that works on something very similar to TC55, but more focused on embedded devices—devices that may have more constrained resources. The overlap definitely exists, but I think there is a clear case to be made that devices able to run full-fledged web servers and things like that do not necessarily fit into TC53's scope, whereas devices that have, for example, no asynchronous I/O don't really fit into the scope of TC55. There is surely going to be overlap, but I think the use cases are sufficiently different.
+
+CM: All of that seems entirely valid to me.
I just wanted to make sure that this was a coordination point that was consciously part of your process. It sounds like the answer is yes; I'm happy with that.
+
+JSL: As part of the chartering process, we had the calls reviewing the charter draft and went around trying to figure out the right language in the charter to cover this. It's like: are the devices resource-constrained? Are the servers well-resourced? We couldn't figure out good wording, and I would love it if folks took a look at the charter draft and came up with better wording. We want to make sure there's a good, clear line between TC53 and TC55. I also want to make sure there's a really good open dialogue and collaboration going on between the two technical committees, to make sure that we are at least driving towards consistency.
+
+MLS: I know that Deno is involved; are Bun and Node involved in the discussion of coming up with APIs?
+
+JSL: Deno for sure, and there are active Node contributors involved. Node as a project is too large and too diverse for any individual to speak on behalf of the project without getting the technical steering committee explicitly on board—it's a whole thing—but we have Node contributors and core contributors who are involved. Bun folks have been involved in conversations, probably not as much as I would have preferred; I'd like them to get more involved and more active in this. But we do have quite a few runtimes: I'm also representing Workers, and porffor developers are there.
+
+MLS: Thank you.
+
+PFC: Another question where I suspect I might know the answer. We talked yesterday about the annotation in web specs for exposing something in all environments, `Exposed=*`. I'm wondering if you see the minimum common API as a superset of those things.
+
+JSL: It can be. I think we need to go back and look at this rule about whether something is purely computational or not. If you look at the stuff in the minimum common API right now, there are things like setTimeout—things that wouldn't be purely computational. But I definitely think there is some area of overlap there that we need to seriously look at and consider. And I do think that TC55 would be a great venue to discuss those questions about which APIs exist in the ShadowRealm.
+
+PFC: I'm hoping the overlap is such that the minimum common API is a pure superset. Of course setTimeout should be included in every server runtime, even if it can't be exposed in an audio worklet or a ShadowRealm or whatever. It's the opposite that I would be wary of, where—
+
+JSL: Lost you. But definitely, to the point that you're making there: absolutely. Given the spreadsheet, I want to look at the minimum common API against the rule about the set of purely computational things, and check: is anything on that list missing from the minimum common API that should not be?
+
+ABO: I don't think there is. I looked at this before, though I haven't checked with the new update. But the intention is definitely that the minimum common API is a superset of `Exposed=*`. I know there was some question about whether WebCrypto should be part of the globally exposed set, and in terms of whether it should be exposed in AudioWorklets, because it's supposed to be available only in secure contexts or something like that. And it's not clear how secure contexts work in server-side environments, but in any case—is WebCrypto currently in the minimum common API? I'm not sure. I think it should be.
+
+JSL: We have discussed it.
This is a really good question that I think TC55 should look at first—your exact point: what does "secure context" mean in a server environment like Node and Deno? For us, the entire environment is secure; that's how we operate, we have these APIs available and we don't restrict them. It would be nice to have a formal definition of that, to make it easier for us to address these questions moving forward.
+
+MAH: I should have put "end of message". I had suggested the minimum common API as a starting point when considering which APIs to include in ShadowRealm. I'm not surprised at all that there's significant overlap and that they're consistent.
+
+CDA: That's it for the queue.
+
+JSL: So we finish up a couple of minutes early. Feel free to reach out; I'm definitely happy to get reviews on the charter as we go here. I don't remember exactly when the charter will be looked at again—next week or something like that. If you have any comments or feedback, let us know.
+
+### Speaker's Summary of Key Points
+
+JSL (summary): Just want to emphasize a desire to work closely with other groups like TC39, TC53, WHATWG, etc., and to work collaboratively as much as possible. In particular, I think we likely need to workshop some of the charter language to differentiate it more from TC53's charter.
+
+## Stabilize to stage 1
+
+Presenter: Mark Miller (MM)
+
+- [proposal](https://github.com/Agoric/proposal-stabilize)
+- [slides](https://docs.google.com/presentation/d/1474EreKln5bErl-pMUUq2PnX5LRo2Z93jxxGBNbZmco/edit?usp=sharing)
+
+MM: I'm going to present, and I would like to record the slide show and turn the recording off for the questions. This would be permission for recording for public posting. Does anybody object? Is recording the presentation itself, with audio, for public posting fine?
+
+DE: I support this. I want to ask that anybody who is good with technical setups look into whether we can offer this to presenters in general. I think a lot of people put in good work, and I'm glad you're setting this path.
+
+MM: So, I'm proposing stabilize and other integrity traits. As background, we have an existing set of integrity levels in JavaScript: frozen, sealed, and non-extensible. The arrows of this diagram represent "implies": frozen implies sealed, and sealed implies non-extensible, and up the chart are the stronger integrity levels. The levels were designed to support high-integrity programming, and they have served that function rather well, but there are still some weaknesses we would like to address. On this diagram, by the way, on the left we have the functions that bring about the integrity levels, on the right we have the predicates that test an integrity level, and in the middle are the names of the integrity-level states an object can be in. The bulk of the presentation will focus on the states.
+
+MM: Considering introducing new features like the integrity traits I'm about to show raises the question: when should a new feature be considered an integrity trait? There are several aspects of the existing integrity levels that we're going to take to be defining of what it means for something to be an integrity level: that it's a monotonic one-way switch—for example, once an object is frozen, it is always frozen; that it brings about stronger object invariants and better supports high-integrity programming by making things more predictable; and that a proxy has an integrity level if and only if its target has the same integrity level. For example, a proxy is frozen if and only if its target is frozen, and this if-and-only-if upholds the idea that the target is the entirety of the bookkeeping for keeping track of whether the proxy should be considered to have that integrity level.
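+
+(For reference, a minimal sketch of the existing levels, the functions that bring them about, and the predicates that test them:)
+
+```js
+const a = { x: 1 };
+Object.preventExtensions(a); Object.isExtensible(a); // false
+Object.seal(a);              Object.isSealed(a);     // true
+Object.freeze(a);            Object.isFrozen(a);     // true
+```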
+
+MM: There's also a distinction among the existing integrity levels that we will be carrying forward, which is that some integrity levels are explicit and some are emergent. What I mean is that non-extensible is an explicit integrity level: whether an object is non-extensible is a fundamental part of the semantic state of the object that has to be represented explicitly, both in the spec and in any implementation, and an object only comes to be non-extensible if explicitly made non-extensible. Sealed and frozen are emergent integrity levels, in that they are defined by a conjunction of other conditions; if the conjunction holds, then the object is considered sealed or frozen, independent of how that conjunction came to be. So, for example, if I have an object that is non-extensible but has a single own configurable property, it is not sealed or frozen; but if I delete that property, the object becomes both sealed and frozen, because a sealed object is just a non-extensible object in which all own properties are non-configurable, and a frozen object is a sealed object in which all own data properties are also non-writable. A particular reason why this distinction is important is that there are only proxy traps for the explicit integrity levels: there is a preventExtensions trap and an isExtensible trap, because that is the fundamental state change that the proxy needs to be able to intervene in, but there are no proxy traps that correspond to sealed or frozen.
+
+MM: The way we got started on this journey is that we are doing Hardened JavaScript—both the shim that implements Hardened JavaScript as a library and implementations that support it directly. Hardened JavaScript is explicitly trying to support high-integrity programming, and it has an operation, implementable as a library, called harden, which is a transitive deep freeze: transitive by an own-property walk and an inheritance walk—walking up the inheritance chain, walking forward along all properties, and applying the freeze operation to all objects that it encounters. We are not, in this presentation, proposing harden as an integrity level or anything else; it's just an example of a library operation that is proving to be useful. The important point of it is that it tamper-proofs an API surface by freezing each object at each step of the transitive walk. Hardened JS, in addition, hardens all the primordials: all the primordial objects—all of the built-in intrinsic objects that exist before any code starts running—are hardened before code starts running. These are the objects that are shared by all code running in the same realm, and by hardening them all before code starts running in the realm, you're in a position to isolate the effects of different portions of code from each other. We've been doing that since ECMAScript 5 days, under other names.
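+
+(A rough sketch of the `harden` operation as MM describes it: a transitive freeze over the inheritance chain and own properties. The real shim handles more edge cases; this is only illustrative.)
+
+```js
+function harden(root) {
+  const seen = new WeakSet();
+  const walk = (obj) => {
+    if (Object(obj) !== obj || seen.has(obj)) return; // skip primitives and cycles
+    seen.add(obj);
+    Object.freeze(obj);
+    walk(Object.getPrototypeOf(obj)); // inheritance walk
+    for (const key of Reflect.ownKeys(obj)) { // own-property walk
+      const desc = Object.getOwnPropertyDescriptor(obj, key);
+      walk(desc.value); walk(desc.get); walk(desc.set);
+    }
+  };
+  walk(root);
+  return root;
+}
+```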
+
+MM: But we found three weaknesses that we would like to address. Our first try was to address all three weaknesses with one additional, stronger integrity level, which we're calling "stable". The idea would be that the harden operation I referred to would be changed so that, instead of freezing the object at each step of the transitive walk, it instead stabilizes each object at every step of the transitive walk. By addressing all three of these weaknesses, the stable integrity level would be strong enough.
+
+MM: However, in a hallway conversation with SYG at the last plenary, we realized that a major motivating use for one of the changes that stable would introduce—one of the stronger invariants—would be extremely useful for the Structs and Shared Structs proposal. I will get into the specifics of that. The key thing is: if the new feature is brought in only by the stable integrity level, and stable implies frozen, then it cannot be applied to shared structs, which cannot be frozen. Unshared structs can be frozen, but they need to benefit from this feature even in their initial, non-frozen state; they are generally objects that for most purposes you won't want to freeze, because they have mutable properties. The key thing is that structs are meant to have a fixed-shape implementation. In current JavaScript, there's no way to do that compatibly with the language. The new feature that would have been introduced by stable would enable structs to have fixed shape, but only if it could be applied to non-frozen objects.
+
+MM: Jim Barksdale of Netscape famously said that there are only two ways to, in his case, make money in business: one is to bundle, and the other is to unbundle. So let's examine a full unbundling of the features of all of our integrity levels into separate, explicit, as-orthogonal-as-possible integrity traits. And now, because these form a graph, not a totally ordered hierarchy, we're going to shift away from the term "levels" and just refer to integrity traits from now on. With these fully unbundled into separate explicit traits, we have a good framework for talking about each of the separate features that address the different weaknesses.
+
+MM: Fixed is the one that would enable structs to be fixed-shape. Right now, JavaScript has this feature, return override, such that if, for example, a superclass constructor ends by explicitly returning some value, then, following the super call in the subclass constructor, the `this` in the subclass is bound to the value that was returned by the superclass constructor. It is not bound to the object that was freshly made behind the scenes to be an instance of the class. And at the point where the subclass constructor takes control, the private fields—`#value` in this case—are added to whatever object was returned. That's the case even if the object is frozen. So it's possible to use this to build a WeakMap-like abstraction, which this code example is extracted from; the proposal repo has more complete code for an emulated WeakMap that uses just return override. And the key thing here is that if the subclass constructor is called with a struct object as the key, and some value, then the language would obligate the implementation to add the private field to the struct.
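+
+(A condensed sketch of the return-override pattern MM describes; the proposal repo has a complete emulated WeakMap, and the names here are illustrative.)
+
+```js
+class Base {
+  constructor(key) {
+    return key; // return override: the subclass's `this` becomes `key`
+  }
+}
+class Stamp extends Base {
+  #value;
+  constructor(key, value) {
+    super(key);
+    this.#value = value; // stamps #value onto `key`, even if frozen
+  }
+  static read(key) {
+    return key.#value; // brand-checked access
+  }
+}
+
+const key = Object.freeze({});
+new Stamp(key, 42);
+Stamp.read(key); // 42
+```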
+
+MM: Now, the specification accounts for the semantics of how these things can be added to frozen objects by saying that private fields have WeakMap-like semantics. But practically, all high-speed implementations we're aware of—in particular, all browser implementations we're aware of—actually add the private fields by a hidden shape change of the object. In V8, different shapes of objects have different "hidden classes", as they call their internal bookkeeping for keeping track of shape, and this would have to change the hidden class behind the struct. That conflicts with a lot of the high-performance goals that are motivating the structs proposal.
+
+MM: So the idea is that if an object is fixed, then it cannot be extended using return override; it cannot be extended to have new private fields. In fact, there's a precedent for this already in the language: by special dispensation, the browser's global window proxy object is already exempt from having private fields added to it. This is, again, motivated by different implementation constraints, but again it's motivated by enabling the implementation to avoid having to do something complex in order to implement a feature that nobody actually cares about for that case anyway. "Retcon"—retroactive continuity—is a fan-fiction practice of retroactively rationalizing something that had been a special case. If we introduce fixed, we also get to retcon the dispensation of the window proxy and say instead that the window proxy simply carries the fixed integrity trait. And this solves another problem with the special dispensation on the window proxy: it's impossible for a library to do a fully faithful emulation of the window proxy on a non-browser platform, because of the inability of that emulation to prohibit the addition of private fields. The introduction of the fixed trait would make that same exemption available to an emulated window proxy.
+
+MM: The next one is the overridable integrity trait, which would be an exemption from the assignment-override mistake. I think the example explains it really well—ignore the first Object.freeze line for a moment and look at the second two statements. There's a tremendous amount of legacy code on the web, particularly from before the introduction of classes, that used this pattern in order to create class-like abstractions: a function Point that acts as a constructor function, and then an assignment that adds a toString method to `Point.prototype`, overriding the toString it inherits from `Object.prototype`. What many projects have found is that, in attempting to freeze the primordials in order to create a more defensible environment—for example, to inhibit prototype poisoning—they immediately break legacy code like this, because the assignment-override mistake is that you cannot override, by assignment, an inherited non-writable property. In particular, the Object.freeze makes the toString property on `Object.prototype` a non-writable data property that therefore cannot be overridden on `Point.prototype` with assignment. A strict-mode environment throws; a sloppy-mode environment is worse—it fails silently, and the program proceeds to misbehave in weird ways. The idea here would be that if an object—`Object.prototype` in this case—is made overridable, then its non-writable properties can be overridden by assignment in objects that inherit from the overridable object.
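+
+(The slide example MM walks through, reconstructed from his description:)
+
+```js
+Object.freeze(Object.prototype);
+
+function Point(x, y) {
+  this.x = x;
+  this.y = y;
+}
+// Throws in strict mode (fails silently in sloppy mode), because the
+// inherited, now non-writable Object.prototype.toString cannot be
+// overridden by assignment:
+Point.prototype.toString = function () {
+  return `<${this.x},${this.y}>`;
+};
+```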
+
+MM: The parenthetical here is that some people on the committee believe we might be able to fix the assignment-override mistake globally, for the language as a whole. I have no opinion one way or the other on this; I'd like to find out more, offline, about the evidence pro and con. We're just taking the position that if it could be fixed globally for the language as a whole, rather than by introducing an integrity trait, we would prefer that. If that were to happen, we would remove the overridable trait from this proposal and just accept it as a global language fix. But if not, this is how we propose to fix it for objects that opt in to the fix by adopting this integrity trait.
+
+MM: When writing defensive programs—in particular, programs that are defensive against possible misbehavior of their arguments, possibly surprising arguments—it's very nice to be able to do some up-front validation early in the function, to validate that the arguments are well-behaved in the ways that the body of the function will then proceed to rely on. A particularly pervasive need for this is that many functions that are responsible for maintaining an invariant also have to momentarily suspend the invariant, do something, and then restore the invariant. While the invariant is suspended, they're in a delicate state. For example, a function that splices a doubly-linked list must go through a moment in time where the doubly-linked list is ill-formed before it comes to be well-formed again. And while it's in this delicate state, with suspended invariants, it is quite often vulnerable to re-entrancy hazards: if code that was brought in by the argument could interleave surprisingly during an operation done while the invariant is suspended, then that interleaved code might re-enter foo. "recordLike" here is named after, and inspired by, the Records and Tuples proposal. If, for example, the validated suspect argument is JavaScript primitive data, then within the delicate region we can operate on it without any worry, because we know primitive data does not observably transfer control to any other code brought in with it. Records and Tuples would create object-like records which are still primitive data, and which still have this guarantee of no interleaving and therefore no worry about interleaving hazards.
+
+MM: What we're proposing addresses the one source of interleaving hazards whose absence we cannot validate in the language as it is today: interleaving via proxy handler traps. Even if recordLike—to ensure that the object cannot interleave—checks that the object is frozen, inherits only from something record-like, and has no accessor properties, all of that together does not give you safety if the object happens to be a proxy. So the idea is that recordLike additionally checks that the object is non-trapping. What that would mean is that if a non-trapping object is used as the target of a proxy, no operation on the proxy traps to the handler; rather, all operations on the proxy go directly to the target. To put it another way, the proxy acts exactly like the target in all ways, except that the proxy and target continue to have separate object identity.
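+
+(A sketch of the kind of `recordLike` validation MM describes; the names are illustrative, not from the proposal text. The remaining hole is that a proxy can pass all of these checks, which is what the non-trapping trait would close.)
+
+```js
+function isRecordLike(value) {
+  if (Object(value) !== value) return true; // primitives cannot interleave
+  if (!Object.isFrozen(value)) return false;
+  const proto = Object.getPrototypeOf(value);
+  if (proto !== null && !isRecordLike(proto)) return false;
+  // Data properties only: no accessors that could run foreign code.
+  return Object.values(Object.getOwnPropertyDescriptors(value))
+    .every((desc) => 'value' in desc);
+}
+```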
MM: And this simple way of specifying non-trapping, which is what we favor, is sensible if non-trapping implies frozen, so that the only objects you can make non-trapping are frozen objects—because the object invariants already enforce that if the target is frozen, the only things the handlers can do is interleave other code during the access, or throw; they cannot change the result of any of the proxy traps. + +MM: So because the handlers are already mostly useless for frozen objects—but it is certainly too late to make all proxies on frozen targets non-trapping—the idea would be that this additional opt-in makes proxies on non-trapping frozen objects non-trapping and, therefore, unable to cause interleaving. And it does this while still not providing an ability in the language to test whether an object is a proxy or not. So it does not break practical membrane transparency, while still turning off the interleaving behavior of non-trapping proxies, and thereby mitigating the proxy reentrancy hazard. + +MM: As long as we are considering a full unbundling of integrity traits, we could additionally consider unbundling non-extensible into its two orthogonal components. And this would serve another retcon purpose. It’s already the case, by special dispensation, that for the window proxy object you cannot change what object it inherits from; and the `Object.prototype` object is born inheriting from null and, again by special dispensation, you cannot change what object it inherits from—even though both objects are extensible, and are certainly both born extensible. Nevertheless, they have this restriction. By making this an explicit integrity trait, we can retcon the window proxy and `Object.prototype` to account for this special behavior, and we also enable higher-fidelity emulations of the browser global window proxy object on non-browser platforms, by making this selective prohibition on changing the prototype available on objects that are otherwise extensible. + +MM: And then, finally, if we unbundle non-extensible into its two features, this is the other one: separating it into its own integrity trait would allow one to make an object to which new properties cannot be added, but where you can still change what object it inherits from. + +MM: So this would be the maximally unbundled picture. The solid arrows are the implications. The question-mark dotted arrows are maybe-implications, to be explored and discussed; they are an open design issue. The only really compelling case for a dotted arrow is non-trapping implies frozen: it is actually possible to specify non-trapping without it implying frozen, but it is quite a complicated specification that probably is not worth the extra complexity and probably does not serve any actual purpose. + +MM: So there’s a problem with this full unbundling, which is that it has five orthogonal traits. In general, we like orthogonality: it’s more expressive, and it’s more future-proof with regard to accommodating future additions. But is it really worth ten new proxy traps to support these five traits? In our opinion—the current opinion of the champions of the proposal—it is not. + +MM: So one way to solve this would be, instead of creating ten new traps, to create just two new parameterized proxy traps that take an integrity-trait name: protect, which brings about the integrity trait, and isProtected, which tests for the presence of the integrity trait.
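+
+A hypothetical sketch of what those two parameterized traps might look like on a handler—the trap names come from the slides, while the `Object.protect` / `Reflect.protect` counterparts shown in the comments are assumptions for illustration, not proposed text:
+
+```js
+const target = Object.freeze({ x: 1 });
+const proxy = new Proxy(target, {
+  protect(t, traitName) {
+    // would be reached by a hypothetical Object.protect(proxy, "fixed"),
+    // Object.protect(proxy, "overridable"), etc.
+    return Reflect.protect(t, traitName); // hypothetical forwarding
+  },
+  isProtected(t, traitName) {
+    // would be reached by a hypothetical Object.isProtected(proxy, traitName)
+    return Reflect.isProtected(t, traitName); // hypothetical forwarding
+  },
+});
+```
+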
MM: This raises a design question. It’s not necessarily fatal; it’s just an open design question for which we have imagined answers, none of which we like, but all of which are coherent: how would the new traps protect and isProtected coexist with preventExtensions and isExtensible, since those would now be existing traps, that existing handlers use, corresponding to what is effectively a retroactively emergent integrity trait? So when would an operation trigger the preventExtensions trap, versus the protect trap for the non-extensible trait? + +MM: The other approach to the cost of having so many different explicit integrity traits is to rebundle, down to the minimal picture that still addresses the issues we find strongly motivating. This would simply not unbundle non-extensible, leaving it as an explicit integrity trait, and forgo the retcon of the permanent-inheritance property of `Object.prototype` and the window proxy. It would also fold both overridable and non-trapping back up into stable. So basically, this picture is very much like the picture of our first try, with the only difference being that fixed is broken out as a separate trait. And in this minimal picture, we choose not to have any implication arrow from fixed to any other trait, so that fixed can be applied by itself, retconning that aspect of the window proxy—since the window proxy is extensible, if fixed implied even non-extensible, you could not apply it to the window proxy. Altogether, speaking just for myself as one of the champions, I will say that I find this minimal picture the most attractive, even though it forgoes some of the benefits of the further unbundling. But everything between the fully bundled and fully unbundled pictures is within what is proposed for Stage 1. Exploring the design space is certainly the appropriate activity for Stage 1, not settling on a particular preference going in. + +MM: And at this point, I will break for questions. But first, I will, as agreed, stop the recording. + +SYG: So thanks, MM. I support Stage 1. I need fixed for structs, obviously, as you have said. I discussed this after our chat with other V8 folks, and in the spirit of simplicity, if possible, I think V8’s preference would be to retcon non-extensible to imply fixed, if that’s web-compatible. To that end, we have added a use counter to check how often we see, in the wild, people adding private fields to `globalThis`. You mentioned, as one reason for unbundling fixed, being able to explain the window proxy. But I want to raise our preference: for us, at least, the ability to explain the window proxy, and to virtualize the window proxy, is not a motivating or compelling reason for fixed to be unbundled. This is not a Stage 1 concern, obviously, but I would like to raise it and get your thoughts. How compelling a motivation do you think the explanation of the window proxy is, to keep fixed unbundled? + +MM: I think it’s not. This is actually the first I’ve heard of this particular suggestion, of having non-extensible imply fixed. But my immediate off-the-cuff reaction, you know, with ten seconds of thinking about it, is that I like it. The reason I refer to the return-override mistake and the assignment-override mistake is that I consider both of those features of the language to largely be mistakes.
And in the assignment-override case, very strongly so, because as far as I know, no one has ever seen production non-test code in the wild that purposely made use of the assignment-override mistake. The return override, used to add private properties to preexisting objects, is certainly also very, very obscure. The use of it to create a WeakMap-like abstraction, which I’m doing in the proposal repo, is just there as a demonstration of the possibility; it’s not because I expect anybody to make use of it. So I don’t think I’ve ever seen a use of the return override in production non-test code that was on purpose, where the object being extended was a preexisting object—one that was not created fresh during the class construction. If anybody does know such a counterexample, I would be very interested. + +SYG: That’s also our hunch. And in a few months, whenever this use counter hits stable with the larger population, we will have a better idea of how much in-the-wild use there actually is. + +MM: I want to applaud you, and applaud the V8 team and the Chrome team, for deploying this use counter. That is going above and beyond, investing in doing the experiment. + +KG: Yeah. I do like this exploration. I think that the object model in JavaScript is a little bit confusing. As you say, things are bundled that don’t necessarily make sense to be bundled. I am happy going to Stage 1 for this proposal to continue exploring this space. I want to raise a concern, which is that I think changes to the object model are very, very conceptually expensive for developers. Having more states that things can be in is at least potentially very expensive in terms of reasoning about the possible behaviors of code. So I am not convinced that all, or possibly any, of this is going to be worth doing, in terms of the benefits it brings versus the additional complexity. Which isn’t to say I don’t see the benefits—I would certainly like to redo the whole language to have more reasonable behavior—but tacking it on is not necessarily an improvement. I am concerned about the complexity, but happy to continue exploring in Stage 1. + +MM: Good. Thank you. I share your reluctance. Obviously, I come down on the other side altogether, but that’s due to a difference in weighting of the inputs; I certainly agree that the costs are real. I am curious: from an explanatory point of view, do you prefer this picture, the minimal picture, or the fully unbundled picture? + +KG: That’s a good question. I am not sure. I think I would have to sit with both of them for a while to have an opinion. + +MM: Okay. And I encourage, you know, everybody to ruminate on that; I would be very curious as we continue the exploration. It’s a much more subjective question, getting people’s sense of how much of an explanatory burden it is. It’s very much something where I just need people’s feedback on what they expect. + +KM: I also want to say that I think there’s a good chance this has a lot of implementation complexity in the implementations, just because a lot of the logic around frozen and such has a long tail of security bugs. But I am not sure; we would have to look more at the implementation. Obviously not a Stage 1 blocker. + +MM: Thank you. And obviously, in doing the exploration, we will get as much feedback as we can from existing implementations—high-speed implementations—for which the new degrees of freedom might be painful given some of the existing optimizations. + +NRO: Yeah.
Thanks, MM, for already incorporating a lot of the feedback I gave. For context, I was in a discussion with MM where we discussed bundling versus not bundling, and my recommendation was that unbundling—even though the slides have more boxes and arrows and look much more complex—is actually simpler to explain. The reason is that if we bundle everything, you have to learn everything at the same time, and this is a very complex topic. Developers today already struggle to know the difference between sealed and non-extensible, so it is preferable to learn the properties one by one, rather than having to understand three of them at the same time. So yeah, I am happy to see both options on the table, and I hope that we can eventually go ahead with the unbundled version. + +MM: Great. Thank you. + +NRO: And my next point, which is very related to this: all of this work and discussion can be very difficult to understand. While we were reviewing proposals internally at Igalia, one suggestion we had was that, even for terms that might seem obvious to those of us who participate in TG3, it could be great to have a glossary or explanation or pointers to what they mean in the proposal itself. Even terms like reentrancy, and things like that, that don’t come up in most proposals. + +MM: Good. Would you care to contribute some of that glossary writing? + +NRO: I guess I could start by giving a list of words that people can find complex, and we can work from there. + +MM: Okay, that would be wonderful. Thank you. + +CDA: That’s it for the queue. + +MM: Okay. I think I saw support for Stage 1 go past. Does anybody wish to explicitly voice support for Stage 1? And of course, are there any objections? + +### Conclusion + +MM: Okay. So I see on the TCQ explicit support from SYG. Thank you. Weak support from JWK. Okay. I think I have Stage 1. + +CDA: Yeah. You also have support from Jordan. + +MM: Okay. Thank you. + +### Speaker's Summary of Key Points + +MM: There are a number of ways in which existing JavaScript fails to support high-integrity programming well. The existing integrity levels have served us well in supporting high-integrity programming, but there are extensions to the system of integrity levels that might be able to address some of those shortfalls. I identified three particular motivating shortfalls to be the focus of the investigation: suppressing the return-override mistake to enable fixed-shape implementations, in particular for structs; suppressing the assignment-override mistake, making it painless to freeze prototypes; and the introduction of non-trapping to mitigate proxy reentrancy hazards. + +## Module Harmony: where we are + +Presenter: Nicolò Ribaudo (NRO) + +- [slides](https://docs.google.com/presentation/d/1V2-4Hj-HBVQwdphcJUsrbmbitOPBMSf3HhKSvhBk4d0/edit?usp=sharing) + +NRO: So hi, everybody. This is a summary/reintroduction/update of where we are with all the various module proposals. There is no normative discussion or concrete request for any specific proposal as part of this presentation. It’s more a way to set some common understanding for the next presentations we will have about the specific proposals. + +NRO: I presented a module harmony presentation like this one, one year or a year and a half ago. And there have been some changes since then, both in the individual proposals, and in how we generally see the area and how the various proposals interact with each other. + +NRO: This was what I presented last time.
We had this kind of dependency tree between concepts, with ModuleSource and ModuleInstance at the root of the tree, and then many other concepts depending on them. And we had this division into proposals. We had this blue proposal on the left introducing ModuleSources and source imports. We had this purple proposal in the middle that introduced the module constructor with the hooks, giving a way to link and to create modules from ModuleSources. And we had this module instance phase import that would let you import a module with some modifier on the import statement and get a linked module object out of it—that being the phase after import source. This underpinned the module expressions proposal, which gave you some syntax to get these module objects. And then there were various other proposals depending on those. On the bottom left, we had deferred import evaluation, which didn’t have any dependency on the rest. + +NRO: Our understanding of this has changed a little bit since last time. First of all, import attributes is Stage 4, so let’s say we don’t really need to worry about it anymore. The proposals have advanced: we had the source phase imports proposal, and this is Stage 3; the semantics are finalized and implemented in browsers already. + +NRO: We now have the ESM phase imports proposal at Stage 2—it’s on the agenda to go to Stage 2.7 at this meeting—which introduces ModuleSources specifically for JavaScript modules. And also, deferred import evaluation is now at Stage 2.7, and we have an update about that proposal later at this meeting. + +NRO: We have a new concept, deferred/optional re-exports. This was originally part of the deferred import proposal; however, roughly one year ago, I think, we decided to unbundle it from that proposal, because it added more semantics than the deferred import proposal, and we wanted to focus on them one by one. + +NRO: Also, thanks to the work that GB put into the ESM phase imports proposal, we realized it’s possible for module expressions and declarations to not depend anymore on the concept of ModuleInstances, and instead to just be some syntax for JavaScript ModuleSources. The ESM phase imports proposal introduces some machinery to let you import ModuleSources, by flowing the necessary metadata in some way, and module expressions and module declarations could just use the same machinery. So they are actually unblocked by the ESM phase imports proposal. + +NRO: Also, we used to think of module declarations as depending on module expressions, because there were a bunch of shared concepts that were defined as part of the module expressions proposal, on top of which module declarations could be built. But that’s not necessary anymore, because most of the shared concepts have already been introduced by the various import proposals. + +NRO: Also, we discussed last meeting, I believe, static analysis for module sources. This was originally part of the ESM phase imports proposal—JS module sources and their static analysis were part of the same proposal—but per a request from the last time that proposal was presented, it has now been removed from it. So the module source static analysis will probably go together with the proposal that introduces module loader hooks; I marked them as depending on each other because we will probably need them at the same time.
+ +NRO: We are not discussing the ModuleInstance phase imports anymore, mostly because the main use case was to get a module object to then create workers from it, and this is now solved by the ESM phase imports proposal. There are still some possible use cases for ModuleInstance imports, as part of module loader hooks and compartments; however, it’s not clear whether it’s needed, or whether ModuleSources plus some constructor to wrap them is enough. + +NRO: And finally, we have a new potential proposal on the bottom right of the slide, which is about sync dynamic imports. GB will talk more about it later in this meeting. + +NRO: So we can divide the area into three main clusters. One is the one where everything is related to module sources. If you want to focus just on these proposals, they are self-contained, and they contain all the concepts necessary to understand all the other proposals in this cluster. We have the source phase imports proposal at the root, and ESM phase imports is already building on top of that: it not only defines what ModuleSource objects for JavaScript are, but also the semantics for importing those JavaScript sources—continuing the import process from where it was paused at the source phase—and it is working with WHATWG, as part of web integration, on creating workers from these sources. And then module declarations can be built on top of these. + +NRO: What exactly are these? Modules as defined today are composed of multiple parts. A module has some source code, if it exists—well, in the spec a module doesn’t carry its source text right now; it just has a parse node, which is the spec way of saying it. + +NRO: A module also has some metadata, used, for example, to resolve its dependencies. On the web specifically, this metadata includes the URL—the URL of the module that you then resolve from, so you know where to resolve all the imports from. But the metadata can vary depending on the platform that is embedding JavaScript. After you start using the module, you start loading its dependencies: each module has a list of the resolved and created modules it depends on. It has some evaluation state: a module could be new, it could be linked with its dependencies, it could be evaluating, or evaluated, either successfully or with some error. And a module also exposes its namespace object; once the module starts evaluating, it progressively starts exposing the various exports of the module. + +NRO: The various module source proposals cut this list in two by saying: okay, we have some immutable data, and we call this the ModuleSource; and then there is some state, and the state is what is part of the full module. So the module source is the immutable subsection—a subset of the information needed to create a module. + +NRO: The way to get ModuleSources is through the import source syntax introduced by the Stage 3 source phase imports proposal. There are other ways to get sources: for example, the `WebAssembly.Module` object can be explained as being a source. So there can also be APIs to get or create sources of specific module types. + +NRO: A source is a module that has not been loaded yet—all of its dependencies have not been loaded yet; it has been paused at one of its earliest phases. With the ESM phase imports proposal, you can complete this process: actually load its dependencies and evaluate it, to get it to the final state.
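+
+As a small illustration of that pipeline (the worker line is the host integration being discussed with WHATWG, so treat it as a sketch):
+
+```js
+// Source phase import syntax (Stage 3): get a module source without linking
+// or evaluating it; for JavaScript modules this is the ESM phase imports part.
+import source libSource from "./lib.js";
+
+// ESM phase imports: continue the paused pipeline on demand.
+const lib = await import(libSource);
+
+// Web integration under discussion: hand the source to a worker, e.g.
+// new Worker(libSource, { type: "module" });
+```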
+ +NRO: The module declarations and expressions proposal would then give us a way to create these ModuleSources other than by importing them: we can declare one inline. So this proposal would introduce almost no new concepts; it would just give you syntax for an object that the language, through the proposals currently in the pipeline, already provides. Again, we can define a source like this. This source would inherit some metadata from its parent, such as the URL used to resolve its dependencies. And then you can import these sources exactly as you can import sources obtained through import source: the loader would read this metadata and know what to do with it, together with the source, to actually progress through the module lifetime. + +NRO: This also means that maybe the module expressions and declarations proposal will change the keyword to say source instead of module. Module, I would say, still looks nicer; but one of the blockers for this proposal was the conflict with TypeScript’s module syntax. TypeScript was in the process of deprecating it, but it’s good to know we have a potential alternative in case it’s needed. + +NRO: There is also a proposal that is not part of module harmony, but that we have been talking about in the context of module harmony, which is the structs proposal—specifically the shared structs part. One of the challenges that the structs proposal needs to overcome is that, if it wants to have prototypes for shared structs, it needs a way to tell whether shared struct definitions in two different threads are actually the same, so that when a shared struct object travels to a thread, it gets the right thread-local prototype. One way the proposal can solve this problem is by saying: okay, we now have the concept of ModuleSources; ModuleSources are immutable, so they are sharable; and we can explain the same module evaluated in two places as being two evaluations of the same ModuleSource. Two shared struct definitions would then be the same if they actually come from the same underlying shared ModuleSource. + +NRO: And yeah, this is a drawing of how different modules can point to the same struct. We have been discussing this with the structs champions to see whether this is actually a viable solution. + +NRO: We then have a second cluster; let’s call it the optional/sync evaluation cluster. This is about proposals that do not really affect how loading works or what a module is; they just help us potentially skip some evaluation, or defer it. In this cluster we have the import defer proposal, the deferred/optional re-exports (born as a child of import defer), and the new sync dynamic imports idea. + +NRO: To recap, the goal of import defer was to evaluate as little as possible, and only at the point where you need it—so that you don’t need to evaluate everything as it’s being loaded, but can evaluate code later, while having less friction than what dynamic imports require. Export defer was born as a consequence of this, when we noticed that export defer can make the language support built-in tree-shaking; that is, if I re-export a binding and my importer is not using that binding, I can avoid loading the module that binding is exported from, as sketched below. This is something that is very common in tools, and it is one of the reasons why tools are better than just using browsers: besides not loading 100 separate files, they also remove a lot of unused code.
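+
+A sketch of the idea (hypothetical syntax; the exact form of deferred/optional re-exports is not settled):
+
+```js
+// lib.js
+export defer { heavyHelper } from "./heavy.js"; // hypothetical syntax
+export { lightHelper } from "./light.js";
+
+// main.js
+import { lightHelper } from "./lib.js";
+// Nothing ever imports `heavyHelper`, so "./heavy.js" never needs to be
+// loaded or evaluated — tree-shaking built into the language.
+```
+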
NRO: Different tools have different implementations of tree-shaking; there is no shared standard for how to do it. So having this tree-shaking in the language might help significantly. + +NRO: And yeah, when the import defer proposal advanced to Stage 2.7, this part was left behind at Stage 2, and we just advanced import defer itself. + +NRO: Sync dynamic imports are in the same cluster because they are something in between dynamic import and import defer. Again, GB will talk more about this, but the general idea is that sometimes it’s actually possible to do a sync import, in some sense, that covers more than what import defer does, with as little friction as import defer—but unfortunately it only works in some cases. There are similar concepts in other parts of the ecosystem: in Node.js you can require ESM, synchronously loading these files and evaluating them, and Node.js is now also exploring exposing this on `import.meta` for convenience. Again, we will have more from GB about this. + +NRO: And lastly, we have the custom loaders and compartments cluster, which includes all the tools to virtualize a module system: the tools that will let you define how resolution works, without using a Node.js-specific hook or a browser-specific implementation, but with a standard way of doing this work across all platforms. It allows you, for example, to implement hot reloading of modules at the language level; these proposals let you define your own types of modules and create some separation between different module graphs. + +NRO: We’ve received some feedback on these proposals since they were first presented at plenary, I think three years ago. At the time, we presented a new module constructor that gets a module source parameter and a series of hooks, the most important of which was the import hook: as you were linking the module, this import hook was called for every dependency, getting the specifier as a parameter and returning the loaded module as the return value. This very closely resembles the existing host API for embedders in the spec. Some feedback was that this might require too much back-and-forth between the engine and user code, and so there have been some discussions about making it more upfront-imperative, as in: with the static analysis features, you would get the list of dependencies and then manually link each module. + +NRO: But there has not been much progress on this overall, other than a few discussions. So yeah, if anybody wants to help with this, you are very welcome. I know there are some people who want to help, but the current module harmony call time is not working well for them; we will try to fix this in the future. + +NRO: And this is where we are right now. I would be happy to answer any questions. If GB is here, he will also be happy to answer questions, specifically about how the various proposals work together, or about proposals that are not being presented at this meeting. If you don’t have any questions, I hope this presentation will help you follow the next discussions about the specific proposals. + +NRO: If there are no questions, I have a question for the committee: I’ve been asked to give this presentation because it’s difficult to follow the whole module space, but I would love to have feedback on the format. Would it have been better if this presentation were done in some other way? Should it have been longer or shorter?
Should it have been focused on different proposals? If anybody has meta-feedback like that, it’s welcome. + +KKL: Yeah. I wanted to expand a little bit on another point that appears to be an intersection of interests between module harmony and shared structs. One of the ideas that NRO has, which satisfies a constraint that I think is important for module harmony, concerns the open question of how shared structs, as values, are associated with their corresponding prototype instances. In hardened JavaScript it’s important for us to be able to ensure that these prototype instances, which are born mutable, can be frozen and isolated to a particular—we call them compartments. I think there’s an emerging concept of a cohort of instances of modules that comes out of this, and it should be sunk lower into module harmony: NRO is proposing that there be a property that acts as a token of what cohort a module belongs to, such that if a ModuleSource, which is associated with a ModuleInstance, passes from one cohort to another, it is ensured to get a different instance, and different instances of the implied shared struct prototypes. I think this mechanism is growing in importance, and I wanted to share that with you today, so that you can be prepared to hear more about it in the future, especially from those of us with the hardened JavaScript perspective. We probably haven’t talked about it much yet. + +NRO: Okay. Yeah. Just to clarify what a cohort is: it’s equivalent to the module cache—you get a different instance if the same source module is imported twice through different caches. The cache is defined by the host, and the idea is that we might need to expose its identity in some way. And this would be part of this custom loaders cluster. Thanks, KKL. + +CDA: Circling back to Nicolò’s request for feedback about the presentation: was this helpful? Would people prefer an update in a modified form in some way? I think he would appreciate any feedback. + +NRO: I guess it’s also fine to send me a message on Matrix if you have any feedback. + +## ECMA402 Status Updates + +Presenter: Ujjwal Sharma (USA) + +- [proposal](https://github.com/tc39/ecma402) +- [slides](https://hackmd.io/@ryzokuken/r1qXw2hQkx#/) + +USA: So yeah. Okay. Let me know if there are issues, but I will try to be quick with this; these are quickly hacked-together slides. Before I begin: the credit for all the editorial work that I am talking about here goes not to me, but to BAN. But BAN couldn’t be here. So, okay. + +USA: 402 updates. Not much happened since the last meeting. One of the editorial changes we landed was by ABL: basically, in the `Intl.NumberFormat` constructor, there was incorrect markup for the notation variable. So this is not a big deal at all, just a formatting issue that was fixed. Thanks to Anba, as always, for being on top of these editorial things. + +USA: And after that, there was a change by BAN to clarify CollapseNumberRange. For some context, this is an abstract operation that is used by NumberFormat for collapsing number ranges. Basically, let’s say that you had a small range that, within some degree of error, is close to a single value: it could be formatted from something like—please don’t quote me on this—1.99 to 2.01 into “approximately 2”, and things like that. + +USA: So anyhow, CollapseNumberRange was clarified; more specifically, it can now add spacing characters. This reflects reality, because this is how LDML does things as well as how ICU implements it.
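+
+For instance, something like the following, assuming an English locale (the exact output depends on the locale and ICU data):
+
+```js
+const nf = new Intl.NumberFormat("en", { maximumFractionDigits: 0 });
+nf.formatRange(1.99, 2.01); // "~2" — both ends round to 2, so the range collapses
+```
+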
USA: This is just updating NumberFormat to improve things editorially. + +USA: And as you might know from the last meeting, `Intl.DurationFormat` is Stage 4. The editors will be working on making it part of the spec ASAP. And that’s it. Thank you. + +USA: Is there something on the queue? I don’t think so. No? + +CDA: Okay. Thank you, Ujjwal. + +## Immutable ArrayBuffer to stage 2 + +Presenter: Mark Miller (MM) + +- [proposal](https://github.com/tc39/proposal-immutable-arraybuffer) +- [slides](https://github.com/tc39/proposal-immutable-arraybuffer/blob/main/immu-arraybuffer-talks/immu-arrayBuffers-stage2.pdf) + +MM: I will ask the committee again for permission to record the presentation itself, including the audio, with the understanding that when we shift to Q&A at the end, I turn off recording; but any discussion during the presentation would by default be part of the audio recording for public posting. Any objections? Thank you. + +MM: Last meeting, we got Immutable ArrayBuffer to Stage 1. To recap: the gray here is the existing ArrayBuffer API, and the proposal would add at least these two features to the existing API: a transferToImmutable method that returns an ArrayBuffer that has the immutable flavor, and an immutable accessor that is true exactly for those ArrayBuffers that have the immutable flavor—the immutable flavor sitting alongside the existing detached and resizable flavors. The behavior of the immutable flavor of ArrayBuffer is that its immutable accessor would say true; it’s not detached and not detachable; it’s not resizable; its max byte length is the same as its byte length; and as for the methods, the slice method, which is a query method, would still work, would still be there, but the other methods that would cause a change, including all the transfer methods, would throw an error rather than do what is normally expected. + +MM: Status update: at the last plenary, the public comments were all positive, and I additionally got many private positive comments. I don’t recall receiving any negative comments or objections, so if anybody here did give me negative feedback, please remind me. As of the last plenary, the spec text was already what I consider to be Stage 2 quality—thanks to RGN for that. And since the last plenary, Moddable has done a full implementation of the proposal. + +MM: As of last plenary there were some open questions, which I will now go into, telling you our preference on the resolution of each, but in each case asking for feedback from the committee today. So: the existing `transfer` and `transferToFixedLength` methods both have an optional length parameter. The `transferToImmutable` method as presented at the last plenary had no optional length parameter, and the question is: should it have one? There’s an argument from orthogonality in each direction. + +MM: The argument from orthogonality to omit the length parameter is that the composition of slice and `transferToImmutable`—or the combination of an existing `transfer` followed by `transferToImmutable`—already composes the orthogonal issues of changing the length and making something immutable; and because it transfers, it would not interfere with being zero-copy. It just keeps separate jobs being done by separate methods.
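+
+In code, the two-step composition looks like this (a sketch using the proposal’s API plus the existing `transfer`):
+
+```js
+const buf = new ArrayBuffer(8);
+
+// Two orthogonal jobs, two methods: change the length, then make it immutable.
+const immu = buf.transfer(16).transferToImmutable();
+
+immu.immutable;      // true (the proposed accessor)
+immu.byteLength;     // 16
+immu.slice(0, 4);    // query methods still work
+// immu.transfer();  // any mutating/transfer method would throw a TypeError
+```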
+ +MM: The argument from orthogonality for including the length parameter is that we have got three—we would have then three different transfer methods and each independently has a length parameter that can be present or absent and you would just have the orthogonal combination of whatever the method does and whatever you ask for in the parameter. And so I think orthogonality is a wash. + +MM: I’m advocating now, I’m changing my mind on this. I’m advocating now that we include the length parameter because it minimizes the damage from surprise. What I mean by that is that either decision might surprise some programmers. A programmer that expects that there is no optional length parameter and doesn’t use it in a language in which there is an optional length parameter experiences no damaging surprise. A programmer who does expect there is an optional length parameter in a language that does not have one might provide an optional length argument and then they don’t even get an error, the language just proceeds to then do something that deviates from their expectation and solemn deviation from programmer expectation is very dangerous. + +MM: On those grounds, I now favor the length parameter. Should we add a zero copy slice method? Right now, we have got slice and transfer to immutable and they can be composed to get an immutable slice. But in the example code down here, if we have an immutable buffer and want an immutable slice into the buffer, we can just take the slice and transfer to immutable on the slice. But this technique for getting the effect is very hard to make zero copy. + +MM: So the proposal would be to add a new method sliced-to-immutable whose semantics is exactly the same as this line of code that you see down here but with the implementation expectation that the new ArrayBuffer is a zero copy window into the original ArrayBuffer. NRO, I think it was, raised the issue about whether the accessor property for determining the flavor of an ArrayBuffer should be named mutable or immutable. In general, there’s a principle that boolean should have positive names so that the negation of the bouillon does not read like a double negative. If we said the accessor was immutable, then in order to say if mutable, you would have to say if not immutable which just seems much more complicated than saying if mutable. The contrary argument for immutable is that there’s a general convention of booleans defaulting to `false` and in particular the really nice thing about that absence is false-y. So if buffer immutable is run on the system before immutable arrayBuffers on the language without the accessor would do the same thing. It would be false-y, indicating, correctly, that the buffer in question is mutable. Both are reasonable pros and cons. + +MM: All together, I’ll just again speak for myself rather than champions but I favor immutable as the answer because of the compatibility with absence I find compelling. And then there’s this complex set of open questions, all of which are about what the precise order of operations should be in the specification. And in the happy path, when everything just does what it’s supposed to do, this doesn’t matter very much. The consequence of the end of the happy path is pretty much the same. where the order of operations matters and where some of those other questions also explicitly matter is when you’re not on the happy path, the most important issue is: Does the failure cause a throw, or does it fail silently, doing nothing? 
+ +MM: There’s an unpleasant precedent in the existing ArrayBuffer system standard that we need to live with as we resolve this issue, which is some of the things that you would expect to throw already in the language, such as reading a field of a detached ArrayBuffer or setting a field of a detached ArrayBuffer, instead fail silently. There’s a long history about why that is. ArrayBuffers are trying to get grandfathered-in language. Something that was a de facto standard that was that the de jure standard needed to be compatible. However we got there, we’re there. So we can’t change those cases. + +MM: So all together, our position is that especially for other subtler issues of, you know, observable consequences of order operations, overall we want to drive the answer to all of these questions by implementer feedback. because if it’s easy for an implementation to implement something that follows one particular order of operations and not others, that probably is the dominant issue rather than any semantic issue. However, there is a semantic bias that I certainly want to inject in that exploration, which is: when in doubt, throw. So the moddable access implementation, if you assign to an in-range field, i.e., a field that is an indexed property of the ArrayBuffer, rather assign the fields to rather and assign to fields of TypedArrays—such that when you want two ArrayBuffers, that if you sign to an index field, then it throws if you assign outside of the index field, then it does what it does now. + +MM: And the access implementation which is the only source of implementation feedback so far does do that, but moddable access implementation is not optimized for speed, it’s optimized for space and runability. So we still need feedback from the high speed engines. And that is it for the presentations and as agreed, I will stop recording. And throw it open for questions. + +JLS: The question is pretty straight forward. Instead of like a sliced immutable in an attempt to get the zero copy transfer, could ArrayBuffer just have a subarray not what we have on typedArray right now where it always – + +MM: Did it have a what? + +JLS: If it could just have subarray? Like, TypedArray right now has the slice which is copied and subarray which does not copy. If we had that also on ArrayBuffer. + +MM: I don’t know. It seems to be mixing just esthetically levels and seems less orthogonal to me. That’s just five seconds of thinking about it. I don’t have the strong reaction one way or the other. + +MAH: I understand James’ question, it seems that subarray is just—it seemed like a different proposal entirely. So I’m unsure how it is related to this proposal about immutable arrayBuffers. + +JLS: Well, the goal is just to get that zero-copy view of that. And where slice is created a copy subarray just gets you a view truncated. If you’re taking a subarray on it’s immutable and it will be affecting it more. + +SYG: My gut reaction is, no, we can’t do that. Because the way things are architected today is that ArrayBuffers are never windows. They’re never views. And TypedArrays are. So the consequence of that is that if you make ArrayBuffers also sometimes use ArrayBuffer work for some and may not work for others. Because there’s no reason to indirect—just the language level, there’s no reason to indirect the backing store ArrayBuffers today. Some implementations may not have that direction and insignificant to also make them indirected. + +JLS: That’s fair. 
+ +MM: Make sure I understand: you’re saying no, not just to the subarray, but also to sliceToImmutable itself? + +JLS: That was going to be another question, but I’m after Mathieu in the queue. + +MAH: So I am in favor of having a length parameter on transferToImmutable, as it would avoid a refactoring hazard: if someone uses transfer today, with the expectation that the buffer is resized during the operation, and they want to change it to transferToImmutable, then all of a sudden they would end up with an ArrayBuffer that hasn’t been resized, if that method is lacking a length parameter. So for that reason I would prefer the length parameter. + +WH: I agree for the same reason. + +MM: As I said, I also prefer the length parameter. Does anybody wish to express a preference for omitting the length parameter? Okay. Great. In that case, I will consider that decided in favor of the length parameter. The length parameter, by the way, is already implemented in Moddable’s XS engine; it is not yet reflected in the draft spec or in the shim. Both of those will be repaired. + +SYG: I was typing; I will just speak now. I have nothing against the length parameter, but I would like to point out that if you have a length parameter and use it, it may break the expectation that transferToImmutable itself is zero-copy. If you transfer to a longer size, you have to get that extra space somewhere. + +MM: Okay. That’s a very good point. So actually, let’s stay with that point for a moment. If the source ArrayBuffer that you’re doing the transferToImmutable on is itself a resizable ArrayBuffer, and the length is still within the max length of the resizable one, would that still give you a length expansion and immutability with zero copy all at the same time—is that correct? + +SYG: It depends. I would say in, like, 95% of cases, yes. The max length exists so that the OS can reserve virtual memory pages that are not backed by physical pages yet. Most OSes support zero-filled on-demand pages, so when the new pages get backed, they show up as zero. If, for whatever reason, your OS under the hood doesn’t have zero-filled on-demand pages, you might need to incur some cost to make sure that the new pages that get backed in actually show up as zero. + +MM: Good. That’s an implementation cost for the length parameter that I was completely unaware of. That’s good to know. + +SYG: Specifically, my blind spot is Windows. I really don’t understand the Windows VM subsystem. If someone here does, please speak up. + +MM: So are you okay with us proceeding with the length parameter, while explicitly stating that, because of these issues, we desire more feedback from implementations? + +SYG: To be clear, I have no concern with the length parameter going forward. I’m just pointing out the consequences if you care about the constraint that transferToImmutable always be zero-copy as a performance expectation. + +MM: I see. I don’t care that it’s always zero copy—well, I mean, I care, but I don’t care more than I care about the reasons for the length parameter. In that case, let me ask: are there any objections to adding the length parameter? I’m considering that to be part of the Stage 2 decision that I’m asking for now. Okay. So I will revise the spec and the shim; as I mentioned, the XS implementation already has the length parameter. + +RPR: I don’t think anyone disagrees.
But let’s go to KG. + +KG: Yeah, I think it’s fine that it’s not zero-copy if you pass a larger length. Presumably, if you pass a larger length, it’s because you needed that for some reason. It’s a pretty weird thing to need on an immutable buffer, because the extension is all zeros; but if you do need it, it’s not like you have a better option by composing some other operations, and it might end up being free if there happens to be space to resize into. So I still think it’s the best you can do. It’s fine. + +RGN: In a similar vein, it’s also possible that newLength would be supported for truncation but throw for attempted expansion, making the restriction clear. + +MM: That would be coherent, and I can see the argument for it. If there is no objection, I would like to stick with the decision that the length parameter works in both directions, at the possible cost of not being zero-copy on expansion. + +KG: Very mildly prefer to not throw. + +MM: Okay, good. + +RPR: Mark, just to let you know, the time box is running out. You have about four minutes left. + +MM: Oh, okay. With the time left, I would like to ask for Stage 2. + +SYG: I may have misunderstood. For sliceToImmutable, I’m trying to understand two things. One: what is the concrete use case? The use case I saw was that it’s nice to have this ability; I didn’t see a concrete use case. Two: what happens for sliceToImmutable on a mutable buffer? Does it detach the whole buffer and then give you this one immutable window? + +MM: So, no—the piece of code at the bottom: we stick with that equivalence. It just wouldn’t be zero-copy in that case. If the source ArrayBuffer on the left here was a mutable ArrayBuffer, then the slice would make a genuine new mutable ArrayBuffer that was a copy of those contents from the original as of that moment, and then transferToImmutable would take that one and make it immutable. + +SYG: But that’s a very different semantics, because it detaches the copy. It doesn’t detach the original one. I can see use cases where you want— + +MM: sliceToImmutable does not detach the original in any case. + +SYG: But how can you make it zero copy if the original is mutable? + +MM: Sorry—it’s only zero copy if the original is immutable. + +SYG: I see. Okay, I see. + +MAH: I think what it means here is that the spec would guarantee that when you do a sliceToImmutable on a source immutable ArrayBuffer, you end up having a zero-copy subset of it. + +MM: Exactly. And a use case for that is: right now, for a TypedArray or DataView, you can ask it for the underlying ArrayBuffer, and it gives you the whole thing. Well, maybe I want to create a TypedArray that does not reveal the entire contents of the original ArrayBuffer. This would enable me to let it reveal only a relevant subset, by making it a TypedArray on the slice. And obviously that’s what would happen right now with just normal `slice`—but the normal `slice` does that at the cost of a copy. The only thing I’m focused on here is: if the original is immutable but reveals too much, and you want one that reveals less without making a copy, this would let you do that. + +SYG: I see. Okay. I think I’m on the fence about this inclusion barring a concrete motivation. + +MM: Okay. Noted. Does anybody else have a strong opinion either way? + +MAH: It may help with performance. We keep hearing that engines cannot optimize and do copy-on-write and things like that for ArrayBuffers.
Here we have a particular opportunity to create a zero-copy slice of an ArrayBuffer that can clearly be zero-copy. Without this API, we’re back to being hopeful that maybe some day engines can actually optimize this by doing copy-on-write. + +MM: I have another motivating case for you. We want—you know, it’s not part of the TC39 ECMAScript spec, but in the larger ecosystem—immutable ArrayBuffers to be transferable by structured clone, and if you’re transferring one within the same agent cluster, it’s a zero-copy copy. In other words, the immutable ArrayBuffer exists in both locations without having copied the data. For that use, it’s certainly the case that one agent might want to transfer a subset of the data to another agent and not reveal the entire thing. And, again, it would be nice to be able to do that in a zero-copy manner. + +RPR: So to your question, Mark, earlier on sliceToImmutable: WH is in favor. + +SYG: I’m not asking who would like sliceToImmutable—I’m asking for concrete use cases. + +RPR: Just also a reminder that we are basically at time now. + +MM: Okay. It looks like the remaining—can I have a five-minute extension? + +RPR: Five minutes is okay, yeah. + +MM: WH, can you answer SYG’s question: do you have a reason why you want sliceToImmutable? + +WH: Just to allow an implementation, if it wants, to make this zero copy. It’s too hard to optimize it if it’s rolled out into a combination of slice and transferToImmutable. But I wouldn’t *mandate* sliceToImmutable be zero copy. + +MM: Okay, good. And that’s a good point about not mandating that it be zero copy, just allowing it. That’s a good point. + +WH: If it’s too hard to do the optimization, just expand it to slice and transferToImmutable. + +RPR: I’m not sure we have—I think WH was first in the queue with preferring no throwing. + +WH: That was before the comment queue got reversed. My comment about not throwing was regarding transferToImmutable. + +MM: I’m going to skip over this and go to JHD, then. + +JHD: Just wanted to concur: in every API, built into the language and platform or not, an absent boolean should be the same as providing `false`; if that makes a name awkward, come up with a better name that works with that default. I very much support that. + +KG: I was a little bit too slow to get on the queue. In response to JHD: this wouldn’t be absent; this would always be present. It is only absent in older implementations. + +JHD: Is the accessor not an option? + +KG: Yeah. + +JHD: Then my statement doesn’t apply to the accessor. + +KG: Okay. + +JHD: But in terms of feature detecting and things like that, it’s still nice if, when something is absent on the prototype in one release and present on the prototype in the next, `false` is the same value. + +MM: I’ll take this as, certainly, at least not an objection to naming it immutable. + +JHD: Correct. + +SYG: This is about the throwing or no-throwing behavior. I think the simplest thing for implementations—I can speak for myself but not for the other fast engines here; I have zero interest in working on this part of the code, because it’s old and historical and all that stuff, and there’s a lot of it—the simplest thing by far would be to align with whatever detached/out-of-bounds does for the particular case. And whatever that does: if it’s not possible to do an operation on an immutable ArrayBuffer, we just pretend it is detached/out-of-bounds. + +MM: Okay. So, good.
That’s implementer feedback pushing us in the other direction. Let me just verify with the committee that we don’t have to emerge from the decision to go to Stage 2 with a stated preference on the resolution of that—that the details of order of operations, and when it throws, are something that we can investigate during Stage 2. + +MM: So I would like to ask for Stage 2. First of all, does anyone support Stage 2? + +WH: I do. + +MM: Thank you. + +NRO: These are reasonable questions to still have during Stage 2. + +MM: Great. Also support from JHD, thank you. + +RPR: And JLS. And CM. + +MM: Any objections? Great. I have Stage 2. Thank you. + +RPR: Thank you, MM. And then next up today, we have Nicolò with an update on import defer. Chris, are you ready to chair this one? + +## import defer updates + +Presenter: Nicolò Ribaudo (NRO) + +- [proposal](https://github.com/tc39/proposal-defer-import-eval/) +- [slides](https://docs.google.com/presentation/d/1yFbqn6px5rIwAVjBbXgrYgql1L90tKPTWZq2A5D6f5Q/) + +NRO: This is a follow-up to the presentation we had last plenary, where we went through two problems with the proposal but unfortunately did not yet have a concrete solution at the time. So thanks, everybody, for the feedback during that plenary; now I’m proposing an actual concrete solution. + +NRO: Of the two problems we had, one is that a significant aspect of the proposal is that we made all gets of string keys trigger execution, because that’s what tools can actually implement, or at least what is easy to implement for a large group of tools—but that ended up not being enough. The second problem was that `import.defer`, the dynamic import form that the proposal has, was not actually deferring anything: it was always triggering execution, because resolving the promise internally reads the `then` property from the object, and that triggers execution. + +NRO: The reason why we made gets of string properties trigger evaluation is that for many tools it’s not actually possible, or reasonable, to know what the exports of a module are and when its evaluation actually starts. Normally I would still care about tools, but maybe not this much; the reason I’m considering tools so important for this proposal is that, unfortunately, most modules get transpiled or bundled, and the experience that many developers have is not through the implementation in browsers but through the implementation in tools. + +NRO: There is still a need, in the proposal, to actually check the dependencies, because you need to check whether there is top-level await or not. But this is just a binary piece of information that tools can more easily check at build time: for example, the build process could just fail, or you could assume the deferred module has no top-level await, or generate code in a different way to handle the delay. Either way, at build time you can assume that the delay is already handled in the right way. + +NRO: So the way tools can implement the proposal, in the general case, is basically to wrap the deferred module’s namespace in a proxy, and then, in the proxy, trigger evaluation of the module when necessary. In many cases they would be able to optimize the proxy away, because the use of the module is not that dynamic, so it’s actually not too difficult to handle at build time with some static analysis. + +NRO: But in some cases, when that’s not possible, the way to do it is through a proxy. This code here is inlined in some sort of bundle; tools transform code like this a lot—for example, when transpiling with Babel and targeting environments without native modules, it’s likely compiled to a synchronous import.
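+
+A minimal sketch of such a tool-emitted wrapper (assuming some bundler-provided `load()` that evaluates the module body on first call and returns its exports):
+
+```js
+function makeDeferredNamespace(load) {
+  let exports = null;
+  const evaluate = () => (exports ??= load());
+  return new Proxy(Object.create(null), {
+    get(_target, key) {
+      // Only string keys trigger evaluation; symbol reads (e.g. checking
+      // Symbol.toStringTag) must stay side-effect-free.
+      return typeof key === "symbol" ? undefined : evaluate()[key];
+    },
+    // Per the change proposed below, listing the exported keys also triggers:
+    ownKeys(_target) {
+      return Reflect.ownKeys(evaluate());
+    },
+    getOwnPropertyDescriptor(_target, key) {
+      if (typeof key === "symbol") return undefined;
+      const ns = evaluate();
+      return key in ns
+        ? { value: ns[key], writable: true, enumerable: true, configurable: true }
+        : undefined;
+    },
+  });
+}
+```
+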
NRO: So reading a string key triggers evaluation, whether the module exports it (in this case `foo`) or not. It’s string keys here because reading symbols does not trigger evaluation, the reason being that you might want to check something like `Symbol.toStringTag` in a somewhat safe way. The key point here is that whether evaluation is triggered or not depends only on the key itself: before triggering evaluation, the proxy the tool generates can check whether the key is a string or not. + +NRO: So property access always triggers, but there are other ways you can observe the contents of the module: we have getOwnPropertyDescriptor, and `Object.keys`, and the `in` syntax, and so on. The change I’m proposing here, to actually make it possible for tools to more closely implement the semantics, is that any query whose answer depends on the contents of the module triggers evaluation. Before, it was only syntax or functions that would internally call the [[Get]] internal method of the object: any property access with the normal syntax, `Object.values` or `Object.entries`, or getOwnPropertyDescriptor with a string that is one of the names of the exports of the module. Here in the slide, `known` is an export, and `unknown` is something that the module is not actually exporting. + +NRO: The proposed change, to actually make it implementable in tools, is that anything that queries the list of keys exported by the module should also trigger evaluation. So using the `in` syntax with a string key should trigger, `Object.getOwnPropertyNames` should trigger evaluation, and getOwnPropertyDescriptor should trigger evaluation in more cases. Spec-wise, wherever the spec reads the list of exports, it will trigger evaluation if the namespace is deferred. The exception is symbol properties, because we know the module cannot have an export with a symbol name, even without looking at the contents of the module. So, as I mentioned, this makes it easier for tools to match the semantics, even though it still requires some build-time analysis for the non-deferred cases. + +NRO: Not just tools: there are other platforms that have synchronous modules, for which this simplifies the implementation of loading—as long as the platform has some preprocessing step, for example when pushing code to the server, that checks for things like syntax errors or top-level await. It would not be impossible for them to also keep around the list of exports for each module that could potentially be imported; it’s just a little bit simpler not to have to. + +NRO: The second problem that we had was that the `import.defer` dynamic syntax always triggers evaluation. The reason is that `import.defer` returns a promise resolved with the namespace, and that’s how promises work: they read the `then` property. So we never actually get the deferred behavior. + +NRO: We discussed two main options. One was to drop `import.defer` from the proposal: we could remove it for now and discuss how to do it in the future. The other was to hide the `then` property from deferred namespace objects, so that getting it from a deferred namespace object would always return undefined; the deferred object would never have a `then` property, regardless of what the module exports. This would be similar to the symbol-named properties, where we know that accessing them will return undefined even without knowing the contents of the module. + +NRO: We propose going ahead with the second option, because if we remove `import.defer` now, it’s not like we can just re-introduce it in the future: this is a problem with promises and deferred namespace objects in general, it’s not specific to `import.defer`, and I would hope `import.defer` would always return one of these namespace objects rather than introducing a third type of object.
NRO: There are some use cases for the `import.defer` dynamic form, even though it’s not the main motivation for the proposal. One of them is that you might want conditional loading in some place where async is allowed, while still deferring execution: you might have, at the top of the module, a different dependency depending on the environment, without paying the execution cost up front. And also, I guess, it’s more or less for symmetry with how other imports work: we have import declarations with a dynamic form, we have import source with a dynamic form, and this just continues the pattern. So this is why we propose hiding the property. What does it mean exactly to hide it? As I said before, deferred namespace objects never have a `then` property. So, according to the principle of “we evaluate when we need to query the contents of the module”, reading `then`, or checking whether the namespace object has that property, would not trigger evaluation. Even when `import.defer` returns a promise that resolves to the deferred namespace object, the promise resolution step reading the `then` property from the object still does not trigger evaluation, which is exactly how symbol-named properties behave. + +NRO: So those were the things discussed last plenary and the approaches we propose going forward with. But there have been two other minor changes suggested since the plenary that I would like to share with the committee. One is about integration with logging utilities, such as the built-in console in Node.js: when stringifying an object, it’s common to look at the object’s toString tag. And while deferred namespace objects are meant to be a drop-in replacement for namespace objects, they have differences, the important one being that one triggers execution when used and the other doesn’t. It was suggested to use a “Deferred Module” tag. The reason the proposal currently says “Module” is mostly how it’s written: to not create a separate type of object in the spec, I just reused the existing namespace objects, adding some conditions to the various object internal methods. If I were to create a completely separate object in the spec, I would have gone with a separate toString tag from the beginning. This is a change I would actually like to see; I will check later if there’s consensus for this. + +NRO: There’s been another suggestion, coming from people thinking about how to integrate this with various logger implementations. You have to know how much you can log: a good logger is a logger that gives you as much useful information as the user wants, but in a non-observable way, without triggering any sort of side effect. In platforms like Node.js—I’m thinking of Node.js because browsers have a more interactive console UI—in a logger you would probably want to see the exports of the module if you can, like all the values that it is exporting, and for that you need to know whether it has been evaluated. The suggestion is to have a `Symbol.evaluated`-keyed property that tells you whether it’s safe to inspect the module or not. + +NRO: This is not strictly necessary for dev tools that come from the engine itself. It matters in Node.js, because the console there already has to check whether an object is a namespace object or not, and plain JavaScript cannot do that: Node.js uses an engine-specific API for it. We can go on to the next one. + +NRO: And we are close to Stage 3, as far as I am concerned. We already have tests for the major semantics of the proposal—so, for everything that was not still open for discussion as part of this presentation. I started working on tests for the changes, but I don’t have anything concrete yet because I don’t know yet in which direction we will go. We have a work-in-progress implementation to validate that the tests are correct.
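A sketch of how a logger might use the suggested hook — hypothetical, since `Symbol.evaluated` is only a suggestion here, and its shape is questioned later in this discussion:

```js
// Both reads below are symbol-keyed, so neither can trigger evaluation.
function previewNamespace(ns) {
  if (ns[Symbol.toStringTag] === "Deferred Module" && !ns[Symbol.evaluated]) {
    // Reading string keys here would observably run the module.
    return "[Deferred Module (unevaluated)]";
  }
  return Object.keys(ns); // safe to enumerate once evaluated
}
```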
NRO: We are missing one thing: Stage 2.7 was conditional on the spec editors reviewing the spec text. This would be a good time to do it. I will follow up once all the changes caused by this presentation are merged, but yes, please try to find some time for this. The idea is that I will come to the next meeting proposing Stage 3. To the queue now. + +GB: I just wanted to bring this point up again—and thank you for explaining it so clearly in the presentation—that the semantics we’re changing here are in order to support polyfillability in today’s bundlers and tools. So my question is: how important is it for polyfills to have perfect semantic alignment with the specification, when what we are doing here is creating trade-offs in the specification itself that are not justified by the use cases of the specification beyond the point when the polyfill is no longer needed? And in particular, there are two risks being opened up by these changes. One is that, because it’s no longer a requirement on implementations—apart from it just being a specification note for hosts—that the named exports are validated early, there’s no reason hosts couldn’t implement this by no longer making the key list, the list of names, available at all before the namespace is evaluated; there’s no requirement on implementations to have even validated the list of named exports. So how do we know that hosts like Node.js won’t decide to do this fully lazily and not do early validation at all, since the only requirement is in the spec? + +GB: And the other point is that we do lose a use case in this. Slide 7. With the new evaluation triggers, because you can’t check whether a key is in the namespace anymore without evaluating, we lose the pattern where you could defer-import something and still be able to do feature detection on the namespace, checking whether keys are available or not. And so that’s the context in which I’m asking the question about the importance of polyfillability as we expand these triggers. + +NRO: Okay. Yeah. Thanks, GB. On the main part of the comment about the spec requirements: the spec still normatively requires that mismatched exports or syntax errors are validated eagerly, and the way it requires that is that the errors are reported either during module loading, when it comes to syntax errors—because the load hook expects the result of ParseModule, which parses the module and checks for syntax errors—or during linking, when it comes to linking errors. With this proposal, linking still happens eagerly. So I guess there is potential for confusion if somebody doesn’t read the spec, because they see that no info needs to be exposed eagerly and they defer everything; but the spec requires that some things happen eagerly. + +NRO: To the other point, I guess this is more about trade-offs: what trade-offs are we comfortable making as a committee?
I personally—while I’m hoping for, and trying to work towards, a world where we need fewer build tools, and where, if we have build tools, they are as light as possible and rely on the underlying engine as much as possible, so that for example with this proposal tools wouldn’t have to emulate the semantics and could just rely on the implementation—I don’t see that happening anytime soon. And that’s the reason why I’m pushing for trying to do what today’s tools can do. We’re talking about years here. + +NRO: Regarding the use case: yes, this loses a use case; it’s losing something that you would otherwise be able to do. But I don’t know how common it is to do feature detection without then actually using the library—if you do feature detection and then use the library anyway, it doesn’t matter much whether it’s the detection or the use that triggers evaluation. If you go to the full matter (?) of something. This is true, and I guess it’s about trade-offs. + +GB: Just a final point on the trade-off question: is there an alternative point in the trade-off space in which we accept some degree of polyfill semantic mismatch in order to hold open future use cases, and has there been any thought given to that? This question has taken a lot of time; maybe we can continue the discussion offline as well. + +JHD: So as one of the major polyfill authors in the ecosystem, it’s certainly more convenient for me when proposals are polyfillable, or when they’re made more polyfillable. I don’t think that’s a good thing to guide language design, though. I think that it is perfectly fine if a polyfill can only do best effort in many cases, and I also think it’s perfectly fine if the polyfill has to be slower or bigger or harder to make as a result. It’s just the lot of a polyfill maintainer. + +JHD: There is often a correlation between something being more polyfillable and something being more consistent with the language, or something being easier to implement, and so on. And so it’s fine to use polyfillability as a test to surface those other possible issues, but I think it’s important that we use those other things as the motivation, and not polyfillability itself. And then, the second piece was about the host requirements. We have definitely already seen multiple examples of the spec saying something with an intention that is not mandated, and then we see implementations violating the spirit of the spec simply because the spec doesn’t prevent it. So it has been empirically valuable to tighten up wording in the spec, to allow the use cases we like and do our best to disallow the ones we don’t. I am not trying to be paternalistic, but just… you know, we should restrict the things that we aren’t certain we need, because we can loosen things more easily than tighten them. Yeah. That’s all. + +NRO: It’s not like a host or tool could just ignore a requirement in some hook here: not doing the validation eagerly would mean taking steps out of the algorithms and placing them somewhere else. I am talking about the algorithms themselves, not just some words around them. Quickly—SYG is next; you were talking about bundlers and polyfills. But let’s get to that question now, unless GB has something. + +JHD: Just to clarify, bundlers and transpilers are what we need to cover the syntax, but the things they transpile into would be a polyfill, and that’s where the polyfillability would come into play. + +ACE: Yes. I can completely see where you’re coming from, GB.
If Bloomberg code used import defer as a way to just get a set of keys to do feature detection, then, well—while that would work, it feels like the wrong way to go about it. Import defer is loading the whole dependency tree, doing the top-level-await analysis; it does more than give you the exported keys of a module. It feels like that use case would be better served at a different layer of the module machinery, rather than by people using `import.defer` and reflecting on keys. But I do see where you are coming from. + +GB: Thanks. Yeah. I just wanted to state those points. I understand the trade-offs; I am just wondering how much exploration has been done in the trade-off space. But thanks for the responses. + +NRO: SYG? Was there a question, or did you want to speak? + +SYG: You did answer it, but I would like to agree with you against GB here. I think—especially in the ESM space, because of the cost of the network—the dream of using ESMs outside of bundlers is a long ways off, if ever, from my perspective. So if there were any spaces currently that TC39 is looking at that really warrant favouring what the tools can do today, I think ESMs is it. + +GB: That makes a lot of sense. I guess it is a new perspective to me, having, you know, previously heard arguments in the other direction. But also, just to touch on what NRO mentioned: module declarations would provide a path for bundlers in the future to move natively to these semantics. So the polyfill semantics we are designing around, if module declarations are successful, would no longer be constraints on the module harmony effort once module declarations are achieved. + +MAH: Yeah. I want to clarify: that means the `then` export is never available, basically, when you transform an import into an import defer. + +NRO: Yes, that is correct. It is generally already considered fishy to have a `then` export, due to how it interacts with dynamic import. But yes, it would never be available on a deferred namespace, regardless of how you get to it. + +MAH: Yeah. I suspect a `then` accessed through a static import is never actually useful, so—it’s just strange that adding defer would now make a namespace export go missing. + +NRO: I agree. It’s an ugly solution. + +ACE: On the missing `then`: I have assumed that tools like TypeScript would also reflect the missing `.then` in the type. I haven’t actually checked that with them, but it seems non-controversial to assume. And if someone wanted to get the `then` for some reason, the workaround is creating another module that re-exports everything, to add a layer of indirection, and import-defer that wrapper. So people could still do things in this space, but yeah, it is missing. I hope the tools will catch on, and if people do need it, they can put a workaround in place.
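One concrete way to build the indirection ACE mentions (hypothetical file names, and not necessarily the exact shape ACE had in mind — note that a bare `export * from` would not help, since the deferred namespace of the wrapper hides `then` all the same):

```js
// wrapper.js — re-expose the inner namespace as a named export
import * as inner from "./mod-with-then.js";
export { inner };

// consumer.js
import defer * as wrapped from "./wrapper.js";
// Reading the string key `inner` triggers evaluation; `inner` itself is an
// ordinary namespace object, so its `then` export is reachable again.
console.log(typeof wrapped.inner.then);
```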
KG: Yeah. This is on a topic that we haven’t talked at all about, which is `Symbol.evaluated`. I really like the capability and really do not like the proposed solution: I really don’t want a new well-known symbol for this. I would be happier with just a new top-level global function, like isDeferredModuleEvaluated or something. Also, I think that can easily be a follow-up—anything in this space can be a follow-up. So I'm happy to see this go forward without it, but I want to register support for having this capability at some point. + +NRO: Okay. Thank you. MAH and then MM, who I guess will say something similar. + +MAH: Same. I really like the ability to detect if a module has been evaluated, but it’s something where maybe the stabilize proposal—the non-trapping integrity trait—might be able to reflect the fact that it has been evaluated or not, and I will let Mark expand on the integration with that proposal. + +MM: Yeah. So I actually need to elaborate on one thing that I forgot to mention when presenting that proposal. It is in the draft spec text: non-trapping would not just be with regard to the interleaving and reentrancy hazards of proxies, but also with regard to exotic objects. An exotic object is certainly allowed to observably interleave user code during access to a data property, but simply allowing that creates the same reentrancy hazards—and this was also raised when import defer first came up: the reentrancy hazards of data access causing interleaving and possibly reentrancy. The non-trapping integrity trait, in trying to prevent that, would also need to say that if an exotic object does have that behavior, it is not non-trapping. And then, if you try to make it non-trapping, either it has to change its behavior so that it no longer does interleaving, or it has to refuse to become non-trapping, just like with other integrity traits that exotic objects don’t uphold: they have to either come to uphold the trait or refuse to acquire it. For namespaces, the reason we were considering this new symbol, or whatever the API is, really has to do with whether there is still a possibility of evaluation triggered by a data property access, which is exactly the interleaving issue that non-trapping is about. It seems like the same two choices could apply: you could say that the namespace of a deferred module starts off not non-trapping, and if you ask it whether it’s non-trapping, it will say that it is not—sorry for the double negative again. But then, if you try to make it non-trapping, that could either be refused, or—much more natural for import defer—it could be treated as a trigger for evaluation. And then, once evaluated during that attempt, it would return successfully from the request to make it non-trapping, and it will now be non-trapping because it is evaluated. So I was wondering what your reaction to that whole possible interaction between the proposals is? + +NRO: It seems reasonable. Especially given that KG said this is good but not in this shape, and here you are offering a different shape for it, we should probably explore working in that direction. I see there is still MAH in the queue—MAH, I would ask you not to go into this topic because we’re short on minutes. But thank you. + +NRO: Okay. So we have had feedback in both directions, and I would like to seek consensus for some of the changes. This one seems to be the least controversial: changing the toStringTag on deferred namespaces to say “Deferred Module” instead of just “Module”. Does anyone have concerns with this? If not, I will go ahead and change it in the proposal. + +JHD: Well, I'm not on the queue, but: because they are different things—typically when we have a different kind of thing, we provide some way to brand-check it and determine that it’s that different kind of thing. ToStringTag alone does not achieve that; it just helps debuggability.
So I have no objection to the change there, but is there a way to determine that a given object is a deferred module namespace object? + +NRO: No. In general, there is no way to tell whether an object is a namespace object at all; it’s probably the only thing missing a brand check, and this proposal is not introducing one. As part of the proposals in the module harmony space, with the new module constructor, that brand check will come, but it’s not being introduced by this proposal, especially given that normal namespaces are already not brand-checkable. + +JHD: Normal namespaces, I think—yeah, you’re right. There were multiple things introduced in ES6 that failed to include a way to brand-check them, and after `Error.isError`, module namespaces are the last one. But the only behavior I can think of for module namespace objects that is different from a frozen object is the live-binding behaviour, if you are exporting a `let` or a `var` and then you change it. + +NRO: They can also throw TDZ errors on some property accesses. + +JHD: Okay. Fair. So I am certainly not asking to introduce that brand check for regular module namespace objects—it may be coming for both in the future—but there is a way we could handle it right now, by doing the thing I wish all toString tags had done in the first place: instead of being string data properties, being brand-checking accessors that return a string. + +NRO: We have, from some members of the committee, a requirement that all built-ins must be reachable, and not just through syntax. So the answer here would be no—we would need the accessor exposed in some way as a property of some object reachable from the global.
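For illustration, the accessor pattern JHD describes, sketched on an ordinary class (hypothetical — module namespaces expose no prototype to hang such an accessor on today):

```js
class Widget {
  #brand = true;
  static isWidget(value) {
    return typeof value === "object" && value !== null && #brand in value;
  }
  get [Symbol.toStringTag]() {
    // The tag doubles as a brand check: only true Widgets produce it.
    return Widget.isWidget(this) ? "Widget" : undefined;
  }
}

Object.prototype.toString.call(new Widget()); // "[object Widget]"
```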
CDA: We are past time. + +NRO: Okay. But I will be happy to work with you on the brand check. We’re short on time, so I am assuming we have consensus for this specific slide, given that nobody else voiced concerns? And for the proposed change of hiding `then`: do we have consensus for this? + +NRO: Okay. Thank you. I am assuming silence means yes; I don’t see objections in the queue. + +WH: Are there any reasonable alternatives? + +NRO: The alternatives discussed last time were to just never have the dynamic syntax, or—one extra alternative (?) discussed last time—to have `import.defer` resolve not with the namespace object itself, but with another object holding the namespace in a property, so that promise resolution would not trigger evaluation. But with all the solutions presented being considered ugly, this one seems to be the least ugly. + +WH: Yup. + +NRO: Okay. So I am not going to ask for consensus on the last slide, given the feedback received. We have had very mixed feedback on this one [slide 7]. My preference is to do it, but Chris, could we do a temperature check in a follow-up discussion at the end of the meeting? Because we have two in favor and two against. + +CDA: We can add a continuation. + +NRO: Okay. And probably just, like, five minutes. + +CDA: Sure. + +NRO: Thank you. + +CDA: All right. Thank you, Nicolò. Did you want to—well, I guess we have a continuation, so we can, I suppose, defer key points and summary to then. + +### Speaker's Summary of Key Points + +… + +## Error Stacks Structure for Stage 2 + +Presenter: Jordan Harband (JHD) + +- [proposal](https://github.com/tc39/proposal-error-stacks) + +JHD: So, a little history on this proposal. Since a long, long time ago, error stacks have been in implementations but not in the language. That’s unfortunate: a lot of folks want them specified in some way, and they’re currently not specified at all. + +JHD: 2019, I think, was the last time that I actually brought this to plenary, although I think we may have discussed it briefly since. In 2019 I came back, and the state of it at that time was that `Error.prototype.stack` would be a normative optional accessor, so that the hardened JavaScript cohort can remove it, plus two static methods—at the time `System.getStack` and `System.getStackString`, although our new position on `Reflect` means those locations no longer necessarily match. The `getStackString` function is a static method that provides the same string that the stack accessor does; that’s how you get it in a normative, non-optional way. And the `getStack` function returns what most people who work with stacks—beyond just looking at them—actually want, which is structured metadata that you can traverse and work with. + +JHD: This is such a massive problem space that I was attempting to only do the structure and schema of the stack traces, and not delve into the contents yet—the actual prose. That is a larger thing to document and research, and we don’t specify prose almost anywhere else, including error messages; it’s not entirely clear how we would do it. + +JHD: And so, in the interest of doing things in an iterative way, the proposal basically only provides these methods and the structure of the stack trace, which it then sort of retcons into “you build the string from the structure”. It’s just that the contents of the pieces of the structure will be, you know, implementation-defined, or whatever the correct term is for that—everything everybody currently does is correct. Browsers, for example, would have to add two new functions, and move some of the stack code that already exists to the accessor; some already have an accessor. They could take their string and reverse-engineer it back into the structure if they wanted to start with the string, although in all likelihood they already have a structure they use to generate the string, and then it would be clean. + +JHD: So I presented this spec text, which I think I have updated to the modern approaches, but probably not all the way—it was up to date in 2019. It has all the abstract operations that construct each of the little pieces and create the stack frame objects. It doesn’t provide all the contents, but it certainly provides enough of the machinery that stack traces can’t get any worse in terms of structure and format. But it leaves the task of figuring out the wording to a follow-on proposal. + +JHD: And I got surprise feedback in 2019 during the Stage 2 advancement discussion—my recollection of it was that the work required to fully specify stack traces was large, and as such, if it wasn’t going to be fully specified, we shouldn’t do it at all. I left that meeting discouraged, but trying to see if I would have the time to come up with the text, or if any volunteers would show up to help. In all the intervening time, no one has done it. This is a boil-the-ocean request; if it stands, then we don’t have stack traces in the language, probably ever. So I talked to a few delegates, and it was suggested I come back with no change to the proposal, but with it renamed in the agenda to “error stack structure”, to try to change the problem statement so it more accurately reflects the limited scope of the proposal. And hopefully we can continue and get this advanced, so that the work required to do the rest of the stack standardization is not so unreasonable.
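Illustrative shapes only — the proposal's exact schema may differ, and the field contents are deliberately implementation-defined:

```js
const error = new Error("boom");

// Structured form: traversable frames, no string parsing required.
const stack = System.getStack(error);
// e.g. {
//   frames: [
//     { name: "f",                           // implementation-defined string
//       source: "https://example.com/a.js",  // implementation-defined string
//       position: { line: 10, column: 5 } }, // positive integers
//     // ...one record per frame
//   ]
// }

// String form: the same information flattened into today's familiar string.
const stackString = System.getStackString(error); // like `error.stack`
```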
JHD: Yeah. So that’s essentially where I am at. If there are additional constraints that I missed, misrepresented, or am unaware of, it would be great to hear about them. Because I have this much spec text, and it matches my understanding of the union of what engines do, I would love to go to Stage 2 with this, work out any additional kinks, and create a smooth path for whoever wants to do a follow-up proposal for the actual prose, the text of the stack trace. + +JHD: I think we can go to the queue. + +MM: Okay. So as co-champion, I want to first apologize that I didn’t coordinate with you ahead of time—I should have. The statement about `Reflect` is a little bit problematic. `Reflect` right now has only safe, non-privileged things: things that are completely safe to share among parties which should have no privileged access, or which should not be able to communicate with each other. The reason why we made the `Error.prototype.stack` accessor optional is exactly that it could be denied. The `Reflect` namespace object is not something in the category of things that we would want to be able to deny; rather, it is in the category of things for which we want to ensure that we never need to deny them. + +MM: If it did go into `Reflect`, we could cope, but we would cope by giving each compartment its own separate copy of the `Reflect` object that shares the non-dangerous methods— + +JHD: Right—that’s already a consequence that I thought we had accepted along with the getIntrinsic proposal, which is still at Stage 1 and was planning to put things there. We discussed that last time. Yes, it adds that extra cost, but that’s, like, tolerable. + +MM: Okay. + +JHD: Either way, I am happy to continue discussing that within Stage 2. The name of the global is perfectly fine to resolve during Stage 2. + +MM: Okay. Good. Thanks for reminding me of that; I had forgotten about it. This is a consequence of my coming to this section unprepared. Yes, it would be the same issue as with getIntrinsics, and it would have the same resolution. + +JHD: Right. + +MM: And that resolution might very well be that each compartment gets its own, and that would be tolerable. Right now, my shim—a very, very stale shim, but my old stale shim—does produce a getStack by scraping the string. But it’s important to be very clear: you cannot do a conformant implementation by scraping the string, because of essentially the equivalent of an injection problem. A function name can contain whatever punctuation you are looking for, such as an open paren, and if it does, you are never going to correctly scrape the stack string to produce the structured stack—unless we specify completely reversible escaping rules for the string, which would still take us away from existing implementations, and I think people would be less willing to do that. In any case, it’s certainly fine for shim implementations to ease the transition. Altogether, I am very, very glad you are reviving this. I would like to see it go to Stage 2, especially if these issues are things we all agree are solvable during Stage 2. Very eager to see this proceed. + +WH: I am trying to understand the discussion about `Reflect`. I don’t see any mention of `Reflect` in the proposal. What did you mean by that?
JHD: We are talking about the location on which the two functions, getStack and getStackString, are made available. The proposal has them on `System`, but `Reflect` is another alternative location: the discussions around the getIntrinsic proposal and another one or two resulted in `Reflect` no longer being restricted to only matching proxy traps, which means it becomes a viable location for them. So I just offhandedly mentioned it as another possibility. You can stick them on anything that meets the hardened JavaScript constraints, and that would be fine. + +MM: I covered that one already. + +SYG: First, I would like to clarify that—okay, you did say that this has no change from what was presented in 2019. So you are really coming back and asking if the opinions of the parties who gave concerns have changed? + +JHD: Basically, yes. The two individuals who gave that feedback—who I believe were representing their own opinions—have not appeared in plenary for four years. So, if their concerns are not repeated by anyone else, given the additional time and the underlying value of an iterative approach to standardization, I am hoping we can decide it’s still worth advancing. + +SYG: I am happy to reiterate Adam's question here: what does this make better? I am reading the notes from back then, and this gives you the structure, but the rebuttal was that, downstream of the structure, everything else is implementation-defined. You are speccing something that immediately requires more kind of engine branching to do anything useful with. And that structure is, as far as I understand—I think the notes say it’s not even in the top ten of the difficulties of working with stacks. So in terms of what problems this solves, there’s no delta. Procedurally, there are a few concerns going for Stage 2: you are iterating stack frames out of the [[ErrorData]] internal slot, and I don’t understand what that does. + +JHD: Yeah. It’s certainly hand-wavy. The [[ErrorData]] slot in the spec is not otherwise used, and so I am putting stack frames into it as a fictional concept. That is something I can— + +SYG: I just don’t understand that. + +JHD: Okay. + +SYG: But I think the higher-order question is: what does this make better? The problem I heard is “you want to spec it”. That is not a problem, to me. + +JHD: I will let NRO, who has been on the queue for a minute, speak to a use case. But in general— + +NRO: The use case—okay. In some libraries I currently use the V8 API for basically this. For various reasons, libraries surface stack traces to users, and I basically rewrite the stack traces to be nicer in some ways, so that users can see their own code on both sides of the library: my library is called by users and calls back into their code. I have some functions with special names that mark the entrance and the exit of my library code, and I remove the frames between those entrance and exit markers and replace them with some fake frame. I have not looked enough at the proposal to tell how much easier it would make this, but right now it’s annoying: I can only do it in Chrome and Chrome-based browsers, because in other environments it’s just the stack string, so it’s not worth it. Even if I had to do some engine branching—because engines might represent a function in different ways; I don’t know exactly what this proposal does there—it would still be better than parsing the string by myself. Yeah. This is my personal use case for this.
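What NRO describes is only possible today through V8's non-standard `Error.prepareStackTrace` hook — roughly like this, though filtering by file path here rather than by NRO's entry/exit marker functions:

```js
// V8-only, non-standard. Each CallSite exposes getFunctionName(),
// getFileName(), getLineNumber(), getColumnNumber(), etc.
Error.prepareStackTrace = (error, callSites) => {
  const lines = callSites
    .filter((site) => !(site.getFileName() ?? "").includes("/my-library/"))
    .map(
      (site) =>
        `    at ${site.getFunctionName() ?? "<anonymous>"} ` +
        `(${site.getFileName()}:${site.getLineNumber()}:${site.getColumnNumber()})`
    );
  return `${error.name}: ${error.message}\n${lines.join("\n")}`;
};
```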
MAH: Yeah. In general, people have been wanting a consistent way to consume stack information that works across engines, so that you don’t have to parse a stack string or implement custom per-engine approaches like V8’s prepareStackTrace and so on. So I think this provides a basis for building the API for representing stack information that people need, and I think that also means we’re going to need to specify some things that are currently not in the document specifically. + +SYG: This doesn’t solve that problem completely. It pushes the stack-string parsing down to be per-frame or something, instead of this entire giant string. It doesn’t actually solve the problem. You can perhaps reasonably think of it as the starting point of solving it, but I am not at all convinced it is solvable. + +MM: No—there is no parsing of strings employed by this. For example, my shim implementation of getStack on V8, because V8 has the internal structured stack trace objects, uses the structured stack trace object to produce the structure here; I am never parsing the string. Parsing a string, even within a frame, is inherently unreliable, because a function name might have an open paren in it: whatever punctuation you are using to do your parsing, a function name can contain that punctuation. So the only hope for getting accurate structured information is to not lose the structure in the first place. + +SYG: What does this spec say the function name must be? What does it say about that? + +MM: Well—I mean, JHD, you can answer specifically, but— + +JHD: Yeah. So this relates to the conceptual, fictional error stack frames in the [[ErrorData]] slot, which I completely agree is wildly unspecified, and that would be the implementation-defined thing. I have defined the error data as—where is it? yeah—a list of records, and the records have fields, and I pull the fields out and present them in a certain way. But the contents of those fields are by and large entirely unspecified. Right? The ones that are numbers, like line and column positions, are specified to be numbers, and name is specified to be a string. But what your name actually is— + +RPR: Can you show it on the screen? This relates to the question of— + +JHD: Yeah. Sure. + +MM: Let me—this question about function names is a great example of the issue. In the language, functions have names, and perhaps there are two different things to reach for in the language with regard to the name of a function, but either one is an arbitrary string that might have colons in it and might have open parens in it. The expectation certainly is that the function name appearing in this stack structure is the string we consider to be the name of the function. The consequence of not having to parse a stack trace string to recover the function name is that you are not going to confuse where the function name stops and where the source location URL begins. That kind of safety is a big deal—avoidance of parsing is a big deal. One of the ways in which systems go very, very wrong is when they introduce little embedded text languages that need their own little embedded parsers, especially when they have no agreed escaping rules. And for punctuation in function names there are no agreed escaping rules, so that punctuation is irreversible. + +SYG: That is fine, but, like… does this solve that?
Does this—if I implement this, and Safari implements this, are we going to be implementing something that will help you? + +MM: If you implement this, then there is a getStack— + +SYG: “This” specifically meaning this spec draft. + +MM: This spec draft provides a getStack operation that provides the structured stack information, as a big JSON-like structure. + +SYG: It doesn’t. It gets a thing whose meaning we don’t know out of [[ErrorData]]—it doesn’t tell me what to do at all. + +MM: Wait, wait. I don’t understand that. + +JHD: SYG, let me clarify. You’re correct that I don’t tell you exactly what is in the [[ErrorData]] slot in the spec at the moment. But you do something to come up with the string that `.stack` produces, and I am assuming that comes from a structure that you have inside your implementation. + +SYG: That’s correct. + +JHD: And the feedback was for the champions to do the legwork of looking at the different implementations and the different structures to come up with something here. Okay—so what you are looking for, what you see as the requirement, is to dig into the actual code of the implementations, try to understand the structures they are already using, and relate that to the [[ErrorData]] internal slot, let’s say, which could then filter down to the rest of it. + +SYG: No, I'm not handing you a detailed plan—it’s not my proposal. I am saying: one, have a clear goal beyond “I want structure”. The specific problem is that you have stack strings that are hard to parse, and people don’t want to parse them—that’s understandable—and they want to recover the structure without having to do the parsing. + +JHD: Yes. + +SYG: If that’s what you want, we have interop problems that need to be designed for, so that the thing everyone implements—once Stage 2 and beyond is agreed to—is something against which you can ship a library that works beyond just V8, or just Safari, or just Firefox, because— + +JHD: This already works for that. In other words, this already describes—this already is— + +SYG: It does not describe that, because it doesn’t say what the error data is— + +RPR: SYG, okay. There is structure here. I think—maybe NRO, do you want to ask your question directly? + +NRO: Yeah. So actually, I would have to see the spec—an object, like, some example. But I think you’re talking past each other. Take, for example, a function called `foo`: I throw an error in this function `foo`, and I catch it; as far as the spec is concerned, the name in the frame could be something else entirely and not `foo`. It’s the structure that is specified; nobody is guaranteeing the function name is going to be correct. + +SYG: Is it guaranteed that it must have a frame at all? Like, what— + +MM: No, it’s not guaranteed. But I am wondering if I am misunderstanding your objection, because it sounds like your objection is that the structure itself is unspecified—and the structure essentially has a schema to it. There are frames with function names, with line and column spans, with source URLs and source indication strings; there’s an array of frames—there’s a schema. It’s certainly unspecified what data is used to populate the schema, but it is specified that the result of populating the schema is structured data that satisfies the schema.
And that schema would give us, for the first time, an interoperable, accurate way to navigate the stack structure that’s produced, in order to process it, give feedback, and all that. It does not mean that the contents of the schema will be interoperable from one implementation to another—that’s underspecified. But this whole issue reminds me very much of iteration order for for-in loops: we started with it being completely unspecified; we were never able to arrive at consensus to fully specify the order in which for-in loops enumerate array properties; but what we did is progressively narrow the remaining degrees of freedom over time, over many years, with the committee reconvening on this. And each step of reducing the remaining degrees of freedom led to greater interoperability, and less danger of things working on some browsers and breaking on others. + +MM: I think one way I would recommend looking at this proposal is: interoperability on an agreed schema is a hell of a lot better than nothing, especially if it can avoid necessarily-unreliable parsing to produce the schema. And with that agreed, we can iterate in committee over time, as we discover what else we can agree on between implementations. If we require complete agreement across implementations just to agree on a schema, that’s basically a formula for paralysis. We just want to move forward. + +JHD: Empirically, that’s what happened. Here is an example [on screen] of the content that this proposal does not specify: this part—actually, this is the stack part. This `f` obviously comes from the function name, but that’s not in this proposal. The source here, that’s a URL for the place it comes from, or something; those contents, those letters, are not specified. But that string goes in that highlighted area, then a colon and a number and a colon and a number—I have limited those to be positive integers. So the contents are unspecified: it doesn’t completely solve the problem, but nor do most proposals in plenary. What it does is solve part of the problem and build towards a better solution, where the amount of work that userland has to do to solve its problems is less, or easier, or faster, or harder to mess up, et cetera—or more secure, even. + +JHD: So there is a benefit here, a concrete benefit, if you are doing anything with stack traces beyond looking at them with your eyes: using this proposal obviates some of that work. + +SYG: So, as it stands today, if I always expose an empty array, is that compliant? + +MM: Yes. + +SYG: Okay. Then how does this help you? + +JHD: Because in practice you are not going to do that, because you want to help your users. + +SYG: They are helped today with a non-standard thing, unfortunately, but they are helped today. In other words, what you are— + +JHD: The fact that it’s standardized—that is the value. It is the fact that it does the useful thing that people want, consistently across implementations. Which means any implementation that just ships an empty array where there’s supposed to be a stack array won’t maintain its credibility, any more than it would if it shipped an empty stack string. + +RPR: We have ten minutes left, and I think we are spinning on the same point here. Could we move on with the queue? There are a few more items to go. All right. Thank you.
WH: Going by the example on the screen—you say things like `f` and the URL are just implementation-provided strings? + +JHD: Yeah. + +WH: Can those contain things like closing parentheses? Is there anything that prevents implementations from putting in characters that make the whole thing impossible to parse? + +JHD: Well, currently there’s not—which is exactly why nobody should have to do the string parsing; it doesn’t matter what characters are in those things. We certainly can and should, in the future, lock this down to match what people are already doing, or so that nobody can do crazy things. But currently, you can do whatever you want. And the— + +MM: Wait. Sorry. A URL can have a closing—a function name especially can have a closing parenthesis— + +JHD: It can. + +MM: So, you know, it’s not that we’re going to reduce the punctuation allowed in the string. It’s specifically that we’re providing the structure so that it can be examined without the burden of unreliably parsing things that might contain punctuation. + +JHD: We are not trying to make parsing easier, but to eliminate the need to parse. + +WH: Thank you.
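A concrete illustration of the hazard being discussed, using a deliberately hostile function name:

```js
// Function names may contain the same punctuation the stack format uses:
const obj = { "f) (at fake.js:1:1"() { return new Error("boom").stack; } };
console.log(obj["f) (at fake.js:1:1"]());
// In V8 this prints a frame resembling:
//     at Object.f) (at fake.js:1:1 (real.js:2:47)
// No split on parentheses or "at" can reliably separate the function name
// from the source location — hence structure instead of parsing.
```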
NRO: Yeah. So we have recently discussed this proposal in relation to another proposal in TG4; I mentioned it quickly in the TG4 update. There is a proposal that gives an ID to each file, also stored in the source map, to actually connect files: if you throw an error, you can report it to some logging service, and—currently there’s a polyfill to get the ID out of the error—the logging service can then connect it to the right source map. When the champions of that proposal were discussing how to expose these IDs from errors, the idea was to go through the non-standard `Error.captureStackTrace`, or whatever non-standard API V8 has for this, with the significant drawback that it would make it V8-only. In TG4 we got feedback that there should be a standardized way of getting to it. One idea was to go through some new global API in WHATWG to get the ID of a file, with the IDs attached as a comment at the end; another idea was that, if this proposal has some movement, we could then expose the ID, alongside the file name, in the structured data, which would be the best way of exposing it. + +NRO: So if this is to move forward, the champions of that source-maps-related proposal would build on top of it. + +MAG: Okay. So at the beginning, this was pitched as: this is going to be the union of real things, the minimal compatible subset. But this spec has `at` at the beginning of a stack frame. That’s not compatible—we don’t have that. Right? And like— + +JHD: Did you five years ago? + +MAG: As far as I can tell from the notes, no— + +JHD: Then that’s a bug in the spec. We should make that optional. + +MAG: Yeah. That’s fine. So, no. But I think, practically speaking, I see specifying the contents of the string coming out of `stack` as a very challenging thing. To the point that I actually think the better version of this would literally be to say: hey, error stack exists, it is a string, and that string maybe has a property or two we can agree on. Maybe we can agree that every frame comes after a newline—I'm not sure we can agree even on that. + +MAG: But, like, this current design is a lot. And I will just bundle in my next point, which is that this is multiple proposals, and I find different parts of it differently appealing. Specifying that stacks exist, I think, is a good thing; we should talk about that. The idea of getting a string representation of the execution context, even if you only ever say that it is an implementation-defined representation of the execution context: good, we can do that. We can say, you know, maybe it’s not NXG (?), you can say that—it’s a regular thing. But trying to specify the actual format, I think, is bad: I don’t think we can do it, and frankly, I don’t think we will ever ship it. It’s a whole mess of web compat. And then the stack getter thing—I am super interested in it; it’s a really interesting idea, and I totally agree with the pain points. I can imagine that people like, for example, Sentry would love it if there were an automatic way to get a programmatic stack. I can totally imagine the use cases. But this proposal conflates all of the different pieces, and as a result we have got this whole mess of a conversation here. Right? So I just—the current proposal, no. Could it be split into proposals worth pushing? I absolutely think so. + +JHD: So just to make sure I am understanding correctly: you see one of the proposals as the stack accessor itself, let’s say, a different one to get the stack string, and a different one to get the structure? + +MAG: So, I mean, I am skeptical about parts of this. For the string representation, the best you are probably ever going to get is a normative note that says it’s implementation-defined. I might be able to see, say, a programmatic way to get, like, an object representation of a stack frame—I can see tools making use of that. Now, I am also terrified, because it’s a huge interop problem, and I would argue it should be specced such that implementations are free to drop random frames, so people stop depending on them. But there is a design space I would see exploring there. But yeah—strings, no. + +JHD: What you said about dropping and adding random frames: that’s allowable. If we shipped this today, you could do that; this proposal isn’t attempting to close down that design space. And I agree it could have an interop problem. But I am at Stage 1 looking for Stage 2—that doesn’t have to be resolved before Stage 2. + +MAH: Right. I just don’t agree that this proposal, as it exists today, is well scoped and motivated; it is instead at least two, maybe three, proposals in a trench coat at the moment. + +RPR: We have two minutes on the clock. There’s a bit more in the queue about this—whether to split into other proposals or not. I will point out, JHD, you are entitled to another 20 minutes. + +JHD: We can do a continuation tomorrow. + +RPR: But let’s try to close on time. + +JHD: All right. + +WH: I would be reluctant to split this into separate proposals, simply because I want to understand the big picture of what is going on. The issue I have with separate proposals is that each of them would be missing the big picture. I want to know where we are going. I’m perfectly fine with not getting there all the way in one jump, but I want to see where we’re going, rather than considering things incrementally. + +NRO: So, given how difficult it might be to standardize the stack string—right now, the spec doesn’t say there’s a `stack` property on errors. Would anybody be unhappy if we say: well, there is a `stack` property,
so we recognize that web-reality fact, and we just say it returns a string that is completely implementation-defined? + +MM: Since you are asking if anybody would be unhappy, I will just say: yes. But because of limited time, I will postpone my reasons for when we resume. + +RPR: Okay. We are at time. Let’s try to get DLM in, because DLM is the last person on the queue. Maybe we can squeeze this in here. + +DLM: I will be brief. MM more or less raised, and SYG too, the points I would have made. Basically, I see this as a source of potentially a huge amount of work for implementations to try to converge our internal representations of error stacks, and I could see that introducing web compatibility and interoperability problems. Because of that, I am not convinced by it, and, for what it’s worth, that is a blocking concern from SpiderMonkey. So feel free to request a continuation, but we would not be comfortable with advancing during a continuation. + +CDA: Are you unconvinced on Stage 2 for this shape of the solution, or are you more broadly unconvinced about the Stage 1 problem statement in general? + +DLM: I think we recognize there’s a problem here. We are—and MAH can say this too—seeing web compatibility problems around errors and stacks. Yes, we agree there’s a problem; we are not convinced by this particular solution. + +CDA: But is it still worth spending committee time to explore a different shape of this? + +DLM: Yeah. And I think we have heard opposition to it, but MAH’s idea of splitting this up and prioritizing the things that are causing us real web compatibility problems would be of interest at the moment. + +RPR: Okay. Thank you, Jordan. I guess we will wrap up now and you will get a continuation. + +DLM: Cool. Thank you. + +### Speaker's Summary of Key Points + +- List - of - things + +### Conclusion + +- planning to bring a new, smaller proposal with just the existing stack accessor - as that advances, the existing proposal will "rebase" on top of that diff --git a/meetings/2024-12/december-04.md b/meetings/2024-12/december-04.md new file mode 100644 index 00000000..f14d2b6c --- /dev/null +++ b/meetings/2024-12/december-04.md @@ -0,0 +1,355 @@ +# 105th TC39 Meeting | 4th December 2024 + +----- + +**Attendees:** + +| Name | Abbreviation | Organization | |------------------|--------------|--------------------| | Waldemar Horwat | WH | Invited Expert | | Jack Works | JWK | Sujitech | | Nicolò Ribaudo | NRO | Igalia | | James M Snell | JLS | Cloudflare | | Dmitry Makhnev | DJM | JetBrains | | Gus Caplan | GCL | Deno Land | | Jordan Harband | JHD | HeroDevs | | Sergey Rubanov | SRV | Invited Expert | | Michael Saboff | MLS | Apple | | Samina Husain | SHN | Ecma International | | Chris de Almeida | CDA | IBM | | Keith Miller | KM | Apple | | Istvan Sebestyen | IS | Ecma | | Jesse Alama | JMN | Igalia | | Eemeli Aro | EAO | Mozilla | | Ron Buckton | RBN | Microsoft | | Daniel Minor | DLM | Mozilla | + +## Import Sync discussion, request for Stage 1? + +Presenter: Guy Bedford (GB) + +- [proposal](https://github.com/guybedford/proposal-import-sync) - [slides](https://docs.google.com/presentation/d/1GW_OCoVjd6OJi9BKSlQzQKqxrB0GUKHKFof4s3rn9yk/edit) + +GB: So we are starting with import sync today. To give the background on this proposal: there was recently a PR on the Node.js project, #55730, for an `import.meta.require` function inside of ES modules.
The only runtime today that supports a syntactical `require` inside of ES modules is Bun; what makes this possible is that we now have synchronous require of ES modules inside of Node.js, and this, I guess, seemed like a useful feature for users to have. On further investigation and discussion, we were able to determine that—since you can already import CommonJS modules and require ES modules—the only feature that was really wanted out of this is the ability to synchronously obtain a module. So this could maybe be thought of, in some sense, as the last feature of CommonJS that Node.js is struggling with. I wanted to bring it to the committee out of that discussion, to make sure we’re having the discussion in TC39, because there’s clear demand for some kind of synchronous ability to get access to a module. And there’s also a risk that, if TC39 were to consistently ignore this demand, platforms could continue to work around it and potentially create new importers which would basically end up having different semantics from the ones that exist in TC39. + +GB: So what are the use cases here? A very simple one is when you want to get access to the Node.js built-in modules, `fs` or any of those. Node.js added an interface to the built-in modules to solve this use case; that’s clearly not the use case being tackled here in particular, although maybe a cross-platform version of it is. Also synchronous conditional loading, and getting dependencies that have already been loaded: if a dependency has been imported and is available, it’s available synchronously, if you had the ability to check for it—so, kind of the traditional `registry.get` use cases. And then there’s the sync executor use case, where there could be benefit in having a sync executor when we do module virtualization, and also with module instances and module expressions and declarations. + +GB: And what is different about this conversation today versus in the past? One of the big changes that happened recently in Node, as part of the require refactoring, is that module resolution is fully synchronous. This is now pretty well set in stone, and it is a recent development: until recently, Node.js had an async hooks pipeline, with the ability to have asynchronous resolvers running on a separate thread, and various other asynchronous hooks. All of that has been made synchronous now, or is in the process of being made fully synchronous. In addition, browsers implement fully synchronous resolvers, which means that we can do the resolve part of an import sync fully synchronously; in reality, all of the platforms today use synchronous resolution, and that was never a given. That’s one of the changes. Another change that I think is worth bringing up is that a sync import was never a possible discussion before, because it went against so many of the fundamental semantics of the module system working between browsers and Node.js, and because of the difficulty of bridging the module system between these very different environments. But now that the baseline asynchronous behaviors are fully baked, fully reliable, and implemented—and, for example, the Node.js module story has come together—it is possible to consider synchronous additions which don’t sacrifice the core semantics but can layer on top at this point. And `import defer` has actually already done that work. So the semantics that import defer defines are basically exactly the same semantics that you would want, in many ways, for synchronous execution. And then there is also what we want for virtualization use cases: dynamic import is the thing we always talk about as the executor in virtualization and compartments, but that would always require virtualization being async. Having a synchronous version of import, a synchronous virtualization, could be useful.
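The use cases above, written in the proposed (and explicitly up-for-discussion) syntax — all hypothetical:

```js
// Synchronous access to a built-in module:
const fs = import.sync("node:fs");

// Synchronous conditional loading:
const isLegacy = false; // stand-in for some ambient condition
const impl = isLegacy ? import.sync("./legacy.js") : import.sync("./modern.js");

// registry.get-style probing: expected to throw some
// "not available synchronously" error if the module isn't ready
// (e.g. not yet loaded in a browser, or it uses top-level await).
let plugin = null;
try {
  plugin = import.sync("./optional-plugin.js");
} catch {
  // fall back to an async path
}
```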
GB: The design I’m proposing here—and this is just a proposal; if someone thinks there is a better design, and I try to discuss a few, we shouldn’t move forward with this one—the design I’m bringing forward for discussion today is just an `import.sync`. It would not be a phase; sync is not a phase. And the semantics would be roughly what `import defer` has as of today: you would do the synchronous resolution, throwing any resolution errors, and if there’s already a module available in the registry, provide that; if not, do host loading. There’s a question here about whether host loading should be included in import sync: should we actually do the creation, compilation, and instantiation of the module inside of import sync? That is obviously something that browsers won’t be able to do, so there is a divergence of behavior between browsers and Node, where Node can do the full pipeline and browsers can’t, and it would be up to bundlers to bridge that. Effectively there would be a “not available synchronously” error: browsers could throw where Node.js could succeed, or TLA could cause a throw, et cetera, on the way to completion. It’s this new error that would be “not available synchronously”, or some kind of error like that—very similar to what Node.js does with require: when you require something that uses TLA, it will give you an error, though it starts trying to load it first and maybe leaves some stuff partially loaded or something; the error says that you should use the async form. We can have some kind of host error—maybe a host error, maybe fully a TC39 error, and we decide what error it is. + +GB: And then, to explain how this could be useful in module expressions and declarations: instead of having to use an async function to get an instance for a module expression, if you have a module expression available synchronously, there is no reason you couldn’t synchronously evaluate the module—with module expressions you have everything available in the synchronous context. Maybe this justifies having a synchronous executor. For module declarations, with the dependency graph of the module declaration, you could synchronously execute the modules as long as they don’t have top-level await or third-party dependencies. And what if they do have external dependencies? Consider trying to import sync a module declaration with external dependencies: in this example, an outer import gets that dependency into the registry and gets it to execution, so when the module declaration tries to load the same import specifier string, it’s already in the registry and available. That can actually work: if you just bubble up all of the string specifiers to the upper scope and know they are executed, that will be fine. There is a nice interaction with `import defer`: if you have deferred loading of something, it will be importable synchronously—unless it’s in a cycle, as NRO reminds me. Import defer readiness is exactly import sync readiness. And the question then is whether it would be worth considering, for the import defer proposal, some kind of namespace-free defer form, because in this example we never use the deferred namespace; we want to be able to access the module through other means. For example, it could have been in a nested module declaration. So with a namespace-free defer you guarantee the semantics—you guarantee everything has been loaded and all the work has been done before you get here and import. And here is the example with module declarations: you would execute `name` and `lib` together late, on the synchronous executor, and the defer would have done all the upfront async loading.
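The pairing GB describes, sketched (hypothetical; with a namespace-free defer form, the unused binding below would disappear):

```js
// import defer performs all async loading and linking up front…
import defer * as _preload from "./feature.js";

export function onHotPath() {
  // …so import.sync can evaluate the already-loaded module synchronously.
  const feature = import.sync("./feature.js");
  return feature.run();
}
```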
+For example, it could have been in the nested module declaration. So with the namespace-free defer you guarantee the semantics: you guarantee everything has been loaded and all the work done before you get here and import. And here is the example with module declarations: you would execute `name` and `lib` together later on the synchronous executor, and the defer would have done all the upfront async loading.
+
+GB: So that’s the semantics for `import.sync`. To consider some alternatives of what else could be done: for a registry getter, you could have just a plain `registry.get` together with `import.meta.resolve`. In general, the registry probably belongs contextually, so you probably want it to be `import.registry` or some kind of local thing, and then you probably do want a combined resolve-and-get. So I guess my point here is that the ergonomic API you want for registry lookups is something that does the resolution and is then able to check the registry; I’m just thinking about different APIs that could be possible. This ends up with a semantic that’s very similar to `import.sync`. But overall, the alternative design space seems to include, firstly: do we want this divergence between Node and browsers, where Node could maybe do more loading than browsers can? Or do we try to be stricter and say this has very strict semantics and will only import something that has very specific availability properties in the registry, or do something like registry capabilities? I do generally think that the use cases here get most interesting when we think about the interactions with module expressions, module declarations, and virtualization, where registry APIs might not be suitable. But registry API exploration could be in the space of alternatives as well.
+
+GB: And then, across the use cases, for other alternatives to sync module executors: maybe you have a `.deferredNS` property on the module source, or something like that on the instance; maybe it’s some kind of function on the module source, for instance. Of course, optional dependencies might have other solutions, like conditionally calling `import.meta.resolve`, or weak imports. We have the built-in getter in Node, and sync conditional execution is kind of solved by import defer already. But it could be worth having discussions around these.
+
+GB: And the risk is: would import sync be something that pollutes applications, where people start creating much less analyzable applications, or applications that have different semantics between browsers and Node? Bundlers could still analyze it and make it work. That pattern doesn’t seem to exist in the ecosystem today; if we had had this from day one, it would have been a more tempting proposal, but today it seems like it would be hard for this to prove itself as more ergonomic than the static import system we have in place.
+
+GB: And then deadlocks: a cycle of import syncs is a deadlock. This is already effectively possible with import defer, I believe. And so, yeah, that’s a risk. And then I mentioned there’s this browser versus server divergence, which kind of comes down to the question: do we want to say that all modules that are import synced must already be present in the registry in some way, shape, or form, or do we allow it to do the loading and instantiation? There might be ways to define it to more closely match the defer model. Other, weaker import semantics could be possible to explore. I will just end there, and hopefully we can get a few minutes for the next presentation. I will go to the queue.
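+
+To make the proposed shape concrete, a rough sketch of how `import.sync` might be used (a hypothetical API; as stated above, the exact design is explicitly open for discussion):
+
+```js
+// Resolution is synchronous and throws any resolution errors. If the
+// module is not available synchronously (it would need async host
+// loading, or it uses top-level await), this throws a
+// "not available synchronously" error.
+let fs = null;
+try {
+  fs = import.sync("node:fs"); // could succeed on a server runtime
+} catch {
+  // e.g. a browser that does not permit synchronous host loading
+}
+```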
+
+NRO: With a cycle of executions, import defer would throw instead of deadlocking.
+
+GB: Thanks. We would probably have the same behavior; that makes sense.
+
+ACE: On the relation to import defer: while import defer does define some aspects of this, it is, as you know, crucially different, in that it explicitly splits up when the async work can happen and when the sync work can happen. It puts things in the spec that allow theoretically synchronous execution, but the developer intention is not that it must be synchronous. It allows browsers to load things asynchronously, and allows top-level await to still happen. I see the relation; I don’t think import defer gives this a free pass to just naturally follow on. There’s still a fundamental difference between the two.
+
+GB: If I can follow up on that briefly: there are a bunch of loading phases, and maybe it would help to put the phases of module loading down and mark which belong to defer, which belong to asynchronous import, and which belong to import sync. I think there’s a lot of crossover, insofar as when you do an import defer you are doing execution, which is exactly what we want to do for import sync, and insofar as there might be a model of `import.sync` that we want to specify which is no weaker than the one used by defer. We could even say that a strict version of this proposal has exactly the same semantics, and then there is the question of how much it should be weakened.
+
+ACE: So if `import.sync` is just executing the evaluation phase, the last part of import defer, that’s only going to work when the host can call the load callback synchronously, or the module is already in the –
+
+GB: It can’t call the load callback.
+
+ACE: So then we’re saying people need to ensure something else is adding the module to the registry for this to work. A big issue we had trying to modernize our module system at Bloomberg, to be in the situation to do import defer, is stopping code from making assumptions about what other modules are being loaded. So there will be comments saying “this is safe because we know someone else has already loaded this”. We have been trying to make everything static: we have lookups that are environment-based, loading this on the server and this in the tests, and we’re trying to make all of those things statically analyzable and limit the things that rely on interaction at a distance.
+
+GB: So within this proposal there’s a kind of gradient, from the very strict version that is strictly defer-level (I don’t think we would get stricter than what defer is today), to maybe some slight weakening, where we could say we are going to permit some host loading or something. But I think the strict definition that matches defer is very much a viable implementation, insofar as you would actually ban any host loading. And that could be supported within the proposal as currently proposed.
+
+SYG: So I am concerned about the complexity of all the ESM stuff; adding sync back in particular concerns me. We spent a lot of effort with TLA to move the infrastructure, in the spec and in implementations, to everything being async. Adding another sync path that threads through everything makes me very unhappy. Also, from the browser perspective, the divergence problem concerns me. If we diverge, that seems bad. If we don’t diverge, the value of the proposal seems much less motivated to me, from the browser perspective certainly.
+If Node is disallowed from synchronously loading, then why would Node want this? That is my question.
+
+GB: That’s a good question. Yeah, just to be clear: I personally have no desire to see import sync today, and I am not looking to progress this proposal unless others want to progress it. I am presenting it because it is something that people are doing, something that there is a demonstrated use case around, and because there is this kind of risk that if we don’t show exploration of the space to solve use cases for users, and demonstrate we’re interested in having those discussions, and instead shut those discussions down, we run other risks.
+
+SYG: Okay. Well, then let my crankiness be noted in the notes.
+
+GB: Your crankiness is noted.
+
+CDA: Let the record show that SYG is cranky. Thank you.
+
+MM: Make sure to coordinate with XS importNow.
+
+GB: That’s definitely been an input into the design. We’ll follow up with some discussions.
+
+GB: So I’m not asking for Stage 1. What I’m asking is: does anyone think I should ask for Stage 1, or that I should not ask for Stage 1?
+
+MM: I think you should ask for Stage 1. I think that Stage 1 is weak enough in terms of what it implies with regard to committee commitment and signal, and the issues you’ve raised are perfectly reasonable to explore in a Stage 1 exploration. I would support Stage 1.
+
+JSL: Just on that point: I’m not particularly happy with having a sync option for import either. But I am also very unhappy if Node is left to go off and do this on their own and no one else follows suit. If this exists in the ecosystem, I would rather it be part of the standard, from the point of view of someone who has to make their runtime compatible with Node and other runtimes. I don’t want to be chasing incompatible, nonstandard extensions to stay compatible with the ecosystem. So while I absolutely sympathize with the concern about adding sync back into this picture in the standard, I would rather it be done here than in Node, if that makes sense.
+
+DE: I’m not sure whether this should go to Stage 1. This is very different from the design of ES modules generally, and maybe we should be hesitant before giving this sort of positive signal to this direction. But I’m not blocking it.
+
+JHD: I mean, there are a lot of different desires around this stuff. The desire expressed on the Node PR, as I understand it, is something about being able to statically determine import-like points where new code is brought in. There are some folks who want synchronous imports, some folks who want to be able to import JSON without the type boilerplate, and some people who want the CJS algorithm in Node. A lot of the use cases for sync imports that I see are something that the conditional imports proposal from many years ago, conditional static imports, might have addressed, and it is worth looking into that. Simply being able to put a static import in a non-top-level scope position, allowing it to appear in blocks or ifs or things like that, would I believe provide the same amount of staticness and the same amount of seeming synchronous—what’s the word I’m looking for?—apparently synchronous imports. And it may drastically remove the desire for a synchronous import. So I do think it is worth going to Stage 1. I think it’s worth exploring all these possibilities.
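+
+For reference, a sketch of the kind of non-top-level static import being described (hypothetical syntax, not an agreed design):
+
+```js
+// The specifier stays a static string literal, so tooling can still see
+// and analyze the dependency, but the binding only takes effect when
+// the enclosing block runs, giving an apparently synchronous import.
+if (globalThis.isDevBuild) {
+  import { attachDevtools } from "./devtools.js";
+  attachDevtools();
+}
+```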
+
+JHD: But I mean, the reality is that there might be some use cases that we can’t solve in ESM because of the decisions made ten years ago, including doing it all asynchronously. If we can solve them, it is definitely better to solve them in TC39 than to have every individual engine or implementation deciding to make their own version. So I do think it’s worth continuing the discussion, if only to avoid that risk.
+
+CDA: Noting we are almost out of time, SYG.
+
+SYG: Just to respond to James’ point earlier: if the thing we standardize is not good enough for the sync use cases on the server side that motivated the server runtimes to come up with their own nonstandard solutions in the first place, we will then just have another standardized thing that people don’t use, and they will continue to use the nonstandard thing. I don’t think it’s a silver bullet to standardize something if it’s actually not good enough. We have to be pretty sure it’s good enough to replace the nonstandard solutions, and I don’t see a path right now to that. While I don’t block Stage 1, because exploration is what is needed here, I don’t see a path currently to Stage 2. So I want to be very clear about that.
+
+JSL: Just quickly responding: yeah, I agree with that. I just think it is something that we need to discuss. I don’t see a path to Stage 2 right this moment either. But let’s at least have it on the agenda for discussion, then.
+
+CDA: Okay. You also had some voices of support for asking for Stage 1 from KKL and JWK, which I assume translates to support for Stage 1 if you’re asking for it.
+
+GB: I’m going to suggest a framing, then, in that case: what if we say that there’s empathy for the use cases in this space, but there’s certainly not agreement on the shape of the solution, and so this specific proposal for `import.sync` is not the thing that’s being proposed for Stage 1? What if it were instead Stage 1 for something in this space? And maybe I actually update the proposal to remove the exact API shape and say this is still an exploration.
+
+MM: I think with Stage 1, the general thing is that there’s a problem statement, which is really what you’re asking Stage 1 for. The concreteness of having some sketch of some possible API is always appreciated, but the thing that Stage 1 is about is the explicit problem statement. And I think this is, you know, a fine problem statement to explore in Stage 1.
+
+GB: So to be clear, what we’re asking for Stage 1 on, in that case, is not the proposed design, because that is not a Stage 1 decision, but exploring the sync import use cases, including optional dependencies, synchronous executors, conditional loading, and built-in modules, as the problem statement. And under that framing, DE, would you be comfortable with Stage 1?
+
+NRO: DE is no longer in the meeting. But I will note that he did say he would not block Stage 1.
+
+GB: In that case, I would like to ask for Stage 1.
+
+NRO: Stage 1 is about the problem, not the solution, anyway.
+
+CDA: As MM said, it is not the strongest signal that we will actually land something in the language. Do we have consensus for Stage 1? I think you had support from JWK and KKL and MM. Any other voices of support, or does anyone object to advancing to Stage 1? Hearing nothing and seeing nothing, you have Stage 1. Congratulations. We are a little bit past time. Do you want to dictate a key points summary for the notes?
+
+### Speaker's Summary of Key Points
+
+Presented a number of use cases where synchronous access to modules and their execution could be valuable, and would like to explore the problem space of these under a Stage 1 process. There were reservations about the `import.sync` design, but we are going to explore the solution space further.
+
+### Conclusion
+
+- Presented a number of use cases where synchronous access to modules and their execution could be valuable
+- While there were some reservations over exact semantics, there was overall interest from the committee in exploring the problem space under a Stage 1 process
+- Stage 1 was requested and obtained
+
+## ESM phase imports for Stage 2.7
+
+Presenter: Guy Bedford (GB)
+
+- [proposal](https://github.com/tc39/proposal-esm-phase-imports)
+- [slides](https://docs.google.com/presentation/d/1qfnmqPkpuAqTv-1pll1Y6EkEHElf_58BtNBQSw9dpq8/edit#slide=id.g305421a9f36_0_11)
+
+GB: So in the last meeting, we presented an update on the source phase import proposal. I will just go through a very quick recap of where the proposal is today. This is a follow-on to the import source phase syntax proposal, which defined an abstract module source representation for host-defined source phases but did not provide a module source for JavaScript modules. So this proposal extends the previous source phase proposal to define in ECMA-262 a representation for a JS module source, which represents a JavaScript source text module and also forms the primitive for module expressions and module declarations.
+
+GB: The feature is needed in order to fulfill the primitives required for module declarations and expressions, dynamic import, and host postMessage, as module harmony requirements. We’re motivating this proposal on the new Worker() construction use case. So the motivating use case, the ability that the spec will immediately be able to satisfy, is the ability to instantiate a worker directly from a source phase import. This is something that provides tooling benefits and ergonomic benefits for users, and enables portable worker instantiation across platforms. The module expression use cases will be supported by being able to take module expressions and message them to other environments: these are object values that can be passed to dynamic import, and they support serialization and deserialization. The other update that we have from the last meeting is that we formerly had syntax analysis functions, the import and export functions and the `import.meta` and top-level await properties, which were on AbstractModuleSource and not on ModuleSource. These have since been removed, because they were a secondary use case of the proposal and not part of the primary motivation. To be clear, this still remains a future goal, but those functions are not essential in this position; instead, the focus is just on the module source primitive for the specification. These will likely come back in virtualization proposals in the future.
+
+GB: So when we got Stage 2, we identified certain questions we would need to further answer before we could seek stage progression, and these were four questions. The big one for worker instantiation is: can we actually do this across the different specifications? The source phase has implications in WebAssembly and HTML, and collaboration has to happen between standards bodies. Can we do that? Do those behaviors work across the specifications?
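+
+For reference, a brief sketch of the motivating use case as described (the module path is illustrative):
+
+```js
+// Source phase import: obtain a module source for a JavaScript module
+// without instantiating or evaluating it.
+import source workerSource from "./worker.js";
+
+// The motivating use case: construct a worker directly from the source
+// phase import, rather than from a URL string.
+const worker = new Worker(workerSource, { type: "module" });
+```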
+
+GB: We also identified early on that this module source, as specified in the source phase, sort of implies that you would have an immutable source record backing it, generalizing the concept of a module, which in turn requires generalizing the concept of the key to align with this. I have my numbers out of order here: I previously had number 4 higher up and switched it around. The concept of a compiled module record is number 4, and number 2, the concept of generalized keying, works with that: thinking about the problem of keying, and thinking about whether there should be some kind of compiled backing record. So number 2 is keying, number 4 is spec refactoring, and number 3 is how dynamic import behavior works for module sources across different contexts, including across compartments, across realms, and through serialization. These were all individually big problems for us to investigate, and we spent a lot of time in the module harmony meetings working through these requirements. So I will give an update on each of these. For the cross-specification behaviors, we presented at the WHATNOT meeting on the 10th of October, explaining that this proposal was at Stage 2 in TC39 and specifies this new source object; because source phase imports have already been merged into HTML, there was awareness of the source phase. We presented this new Worker use case, its semantics, and the transfer semantics. And there was genuine interest in the proposal, and no negative concerns were raised. It was not an explicit signal of intent or interest, but it was certainly, if unofficially, a positive experience, if that makes sense. So, based on that, I put together a very rough draft HTML PR to work through some of the initial semantics and prove out the cross-spec behaviors, and we worked through this. There are still some outstanding questions that we might well defer initially: we might say that SharedWorker and Worklets are unsupported, and we’ll probably default to some highly secure settings for the cross-origin instantiation and the COOP and CSP integration. And then there’s another question on the HTML side about setting import maps for workers, which comes up with resolution and the idea that there is a rough isomorphism for modules in different agents, which only works if you have the same resolution. One of the things we’re looking at there is import maps: having good defaults for import maps in the worker instantiation, so that this worker instantiation would actually clone the import map of the parent context, to do a best-effort match of resolution across contexts.
+
+GB: So that is the draft HTML PR work; there is no public HTML PR right now. As a Stage 2 specification, we would like to seek Stage 2.7 to be able to put up the HTML PR and move that into a spec and implementation process. In addition, I presented to the WebAssembly CG yesterday, with a variation of these slides, and gave another update on the implications for the WebAssembly integration. Again, the overall feedback was interest, and no negative concerns were raised. The second investigation was module keying.
+
+GB: So I want to just go through the semantics of how module keying works; it’s kind of a key semantic when you support dynamic import of these module sources. How does this module keying work? This is something that we spent a significant amount of discussion time exploring, and something we gave an update on at the last meeting.
+And so, for the semantics that we converged on, here is an example of the module registry on the left. There’s the key and the instance, and note that the source is part of the key: the key consists of the URL and attributes, and also the actual module source aspect, the compiled source text; the instance is then the thing that you look up against those. So what happens when you import a source? If there is not an existing source in the registry, the source carries both the source and its underlying key, which is the URL and attributes. So when you import a source, it gets injected into the registry with that key and source, gets instantiated against that key, and you get back that instance. If I later on import the string key with the matching attributes, I will also get back that same instance corresponding to that same source. What happens if I import a source that is a different compiled source? Say, via whatever means, you transferred it from another agent, and there was a file change in the meantime, so you had different responses on the network: that source came from the other agent, and it has the same URL key and attributes as source C, but it’s a different module source. This is one of the primary requirements that we identified for importing sources: the module keying behavior is that if you import a source, you should always get an instance of the source that you imported. We discussed lots of variations here, and discussed them at the last meeting; this is the semantic that we feel is crucial to maintain for this model to make sense. So when there’s already that URL in the registry with a source, we add another entry into the registry against the new source and create a new instance for that: you get a new instance for the new source. So entries are shared insofar as the URL key and source match, as the two aspects of the module key. And then the other case is what happens when you have an eval-like module. You could think of evaluating a string containing a module expression; in WebAssembly, it would be `WebAssembly.compile` compiling some random bytes. These are not strongly linked to an original source URL key. Such a module has a base URL, but the source was just created by its module constructor: when you construct a module from source, these are eval sources, and they just have a unique ID. It has a source and the unique ID, and when you import it, the URL key aspect is not the full URL key, because there is only a base URL key; in this case it’s actually the unique eval key combined with the source. So if you structured clone these things, they do re-instance, because that key gets regenerated.
+
+GB: To summarize, the primary module key consists of the URL key and attributes, or the unique eval ID for unrooted modules. When we extend this model to module declarations and expressions, their key is parent-relative: it’s basically the parent and an offset, or something like that. In addition, there’s the secondary key, which is the module source; it contains the exact immutable source contents. We need to be able to do comparisons of module sources, so we define a new concrete method, “module sources equal”, which is able to do comparison of module sources between module records. We distinguish sources that are rooted to a key, the source phase records, from those that are not: the eval-ish things with the unique eval ID.
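+
+An illustrative model of the lookup just described (a sketch only, not spec text; the helper names are invented for exposition):
+
+```js
+// A registry entry pairs a key with its canonical instance. The primary
+// key is (URL, attributes) for rooted sources, or a unique eval ID for
+// eval-ish sources; the secondary key is the exact compiled source.
+const sourcesEqual = (a, b) => a === b; // stand-in for source equality
+const sameAttributes = (a, b) => JSON.stringify(a) === JSON.stringify(b);
+
+function lookup(registry, { url, evalId, attributes = {}, source }) {
+  return registry.find((entry) =>
+    (evalId ? entry.evalId === evalId : entry.url === url) &&
+    sameAttributes(entry.attributes ?? {}, attributes) &&
+    // Same URL but a different source does not match; it gets a new entry.
+    sourcesEqual(entry.source, source)
+  );
+}
+```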
+
+GB: And as I mentioned, we define equality because you could have the case where we loaded two modules that have different sources with the same underlying URL key, and we need to be able to detect that and add another entry in the registry. If they have the same keys, they coalesce; if they have separate keys, they are separate entries in the registry. So that’s module keying.
+
+GB: The next investigation is what happens when you move module sources between agents. If you have different types of module sources, and you transfer them between agents and dynamic import them, what kinds of behaviors do we get? This directly follows from the keying semantics described in the previous section. Here I have three types of modules: a module source that’s rooted to its absolute URL key, a local module declaration that’s contained in this parent module, and an eval module that’s created by eval and could also be created by the module constructor. When I postMessage across, I send two copies of every module, so I have two variations of each module, serialized and deserialized twice. Because we do the serialization and deserialization twice, the module source object itself is unique for each structured clone operation. But that’s not the level at which this identity exists; instead, when you import these objects, that’s where the keying identity comes into play. So if I have the source module, it’s rooted with the URL key and source text; I post it twice, and the URL key and source text will match for dynamic imports, so we get the same instance and namespace. Similarly, having done that, this module source is now present in the registry, and just as with the previous keying demonstration, if I import the URL key string, I’m going to get the same source. We maintain identity for module expressions and module declarations based on the parent, equivalently, provided the parent is itself rooted. The eval-ish modules, on the other hand: every time you transfer them, that eval key effectively gets refreshed, regenerated with every structured clone. There’s no concept of a global key; it’s all just serialization. So they aren’t equal. These are the proposed semantics, and this is what is written up in the spec and host invariants. Using structured clone directly, you get the same behaviors: the module and the string imports will be the same even though the objects aren’t the same, you have module declaration identity, and the eval-ish modules get a new eval ID created with the serialization and deserialization, so they become unique instances.
+
+GB: The implication for WebAssembly is that it also gets the same behaviors. We don’t have module declarations or module expressions for WASM today; the module proposal that became components does support nested modules, so you have something similar there. But in view of that not existing yet, we can describe this in terms of the source phase for WASM, where `WebAssembly.compile` is eval-ish. You post these things and you get the same behavior: source modules for WebAssembly match the canonical instance, and referring to it with string imports also matches the canonical instance. One of the hard things here is that you can already compile WebAssembly modules and postMessage them today; we have to be compatible with that semantic as well, which we are.
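+
+A sketch of the transfer behavior just described, assuming a main thread and a worker (paths are illustrative):
+
+```js
+// main.js
+import source libSource from "./lib.js";
+const worker = new Worker("./worker.js", { type: "module" });
+worker.postMessage(libSource);
+worker.postMessage(libSource); // a second, distinct clone of the source object
+
+// worker.js
+let first;
+onmessage = async ({ data: source }) => {
+  const ns = await import(source);
+  if (first) {
+    // The two cloned source objects differ, but per the proposed keying
+    // they carry the same (URL, attributes, source) key, so they
+    // coalesce to one instance:
+    console.log(ns === first); // true
+    // The string specifier also resolves to the same canonical instance:
+    console.log((await import("./lib.js")) === ns); // true
+  }
+  first = ns;
+};
+```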
+
+GB: And if you have two different agents that have different sources: say, for example, one agent had a module at a URL key with the source `foo = bar`, and the other agent happened to get a different source for that URL, and you post them both into agent 3. If you import the source from agent 1 first, `foo = bar` will become your instance under that source, and so it will have equality. But the two don’t coalesce as equal, and the second source module gets a different instance. That follows from the core semantic: you get the source you import, and you don’t get a different source. So that’s the core principle for import source: it must provide canonical instances for the source provided. We updated the spec accordingly, allowing an equality operation between sources via the module-equality concrete method, and updating import to run through the HostLoadImportedModule machinery, to allow it to perform registry injection and, when a record exists, to coalesce on equality. An import on a source must return an instance of the same source, as an extension of the existing invariants, such that the same instance for a given source must be returned every time. If you transfer a module source from an iframe to outside of the iframe, so that you have a module source from a different realm, today this will throw an error. It’s a one-line specification, added because we weren’t sure if we should support it. This is purely a technical question, an editorial one rather than an architectural one: it’s something where we can remove the lines or add them, and it seemed more conservative to add them initially. We could always remove them later, making what is an error not an error. So this is something that could be discussed further as well, but it seems better to err on the side of caution without further discussion.
+
+GB: And then the last investigation for Stage 2 was the refactoring of the source record. Today, when we talk about sources and instances, everything is just a module record. So we talk about this model of importing the source, having the registry keyed by the source and matching to the instance, and creating an instance against that, but in reality everything is just the module record. So in the registry you have module records against the URL keys, and when you import a module source, it just points to its module record. So the question here is: should there be this refactoring to split up the source and the instance? Should we be doing that? What happens when you import a source that points to a module record? Well, we don’t inject the instance that you pass in; we inject the source. The instance gets injected in the sense that it effectively already existed in the registry. The module record already represents the registry entry: if you have the module record, you already have the registry entry. If you have the source object, you basically have the constraint that you must not rely on the instance data; that’s the constraint on the source data. So, for example, if we had—sorry, here, this should already be in the registry. If you import a module record that happens to have another instance on it, you’re going to get the canonical registry instance for that source, not the one that happens to be on the module record. So in the current spec design, we do actually specify these kind of ghost instances that are unused, where you’ll still just get that instance 3.
+So every time you structured clone a module source, you create a new module record that has this sort of floating instance, but you still converge on the registry instance. This is the question of spec reality versus spec fiction, and an important part of the discussion. The argument is that we maintain equivalence with the spec fiction, because the import of a source is always the same canonical registry instance at the key and source. The only way to obtain an instance is through canonicalization; only the canonical instances are reachable, and the ghost instances are fully inaccessible. That’s an invariant that we obtain. This covers module records and abstract module records; if we split them up, we split them all down the middle. But because we don’t have multi-instancing today, and there’s only one canonical instance for a source identity, we can maintain the invariants on the current module records to specify the necessary behaviors. Only when we get to multi-instancing, or module instance primitives with compartments, do we need to start separating these things.
+
+GB: The key model is always consistent with the source–instance separation. And the argument that we are making is that right now, today, it would be an increase in spec complexity to make this refactoring. So, yeah, those are our Stage 2 updates. For Stage 2.7, we have reviews from Nicolò and Kris. We also have, from the editors in that review process, some things that came up. Kevin had a –
+
+GB: So KG brought up a good point about a possible refactoring of GetModuleSource. Initially, in the previous proposal for source phase imports, we were only supporting WebAssembly source imports, and we weren’t defining WebAssembly ModuleRecords in ECMA-262, so we used a concrete method to allow hosts to define their ModuleSource. But now, with ESM phase imports, we do this with internal slots, and we could ensure (I am writing a spec note for the host to maintain this) that it’s always the same object. Now that we have the field defined, we could actually go back to the source phase imports spec and upstream this new ModuleSource internal slot as an alternative to the concrete method. That would basically mean eagerly populating the ModuleSource JavaScript objects for all ModuleRecords, even those whose sources are never imported, which we would then expect hosts not to actually do, for performance: they shouldn’t allocate objects that aren’t used, since maybe less than 10% of modules would expose their sources, so we would expect hosts to do it lazily. But it might be simpler spec-wise to just define it as an object field. So that’s something that I didn’t want to change upstream in source phase imports in this presentation, but something to continue with, to determine if it’s a suitable refactoring.
+
+GB: So I want to take a break there, and open up to discussion on the design of the proposal and any questions.
+
+JHD: Yeah. You mentioned something about Stage 2.7 allowing the HTML PR; why is Stage 2.7 a blocker? I put in a PR for `Error.isError` at Stage 2, and I marked it as a draft.
+
+GB: That’s great to hear. Yeah, I think it would help a lot. You know, it isn’t just HTML but also WebAssembly as an integration, and spec and implementation processes do go naturally together.
+I think also, regarding whether reviews on HTML are generally reviews to land and implement a feature, or reviews ahead of implementation: I feel like if we want to see this feature shipping early next year, so that we can start to move forward with module declarations and module expressions late next year, then obtaining Stage 2.7 now instead of in February will allow us to see module expressions and declarations by 2026. And, you know, there are still two stages left after that as well. So I think it’s interesting to hear what the requirements for Stage 2.7 are, in terms of what the standards processes are for Stage 2.7 in this context, and I think that’s a really interesting discussion. Maybe we can move more of that to the last discussion on this item.
+
+DM: Yeah, I am happy to postpone this to another time as well. But similar to what Jordan was saying, I am wondering: there’s an ambiguity about the stage at which we want to resolve cross-specification issues. After the ShadowRealm discussion earlier this week, it sounds like we want this resolved before Stage 3, which means Stage 2.7 is a perfectly fine time to do that. It would be nice, as a committee, to make that part of our process, so that we can remove this ambiguity in the future.
+
+SYG: I prefer that proposals where a majority of the proposal depends on another spec, like HTML, advance with the PRs there at the equivalent stage of maturity, in lockstep. I don’t like the pattern of “we want to advance to a more mature stage to convince the other body to, like, look at it for real”. I think it’s fine to tell the other body that their interest should be independently derived, and that will feed back into 2.7 or 3. I don’t see why any particular standards body needs to move ahead of the other one. If HTML doesn’t have interest, that should directly feed back into stage advancement considerations here.
+
+GB: Just to follow up on that: I have written an HTML spec. Out of respect for the HTML authors, I did not post it up, because I didn’t personally feel like it’s—like, HTML does not have a stage process like TC39 does.
+
+SYG: I think it’s fine for you to say that there are no more concerns on the TC39 side, aside from HTML folks being okay with this. And if HTML says there are no concerns from their side, aside from TC39 being okay with this, then we are both fine to advance.
+
+GB: That’s not the situation we are in here.
+
+SYG: I see. Okay. That sounds fine here.
+
+GB: So HTML did mention in the WHATNOT meeting that there could be a concept of an explicit signal of intent from HTML, and that there could be some process around this in the WHATWG meta issue. That could be something that TC39 could explore for future proposals in this space. We did not obtain that official intent, because it’s never been done before, but it’s worth mentioning in this context.
+
+SYG: Yeah. For the future, that would be a clear signal.
+
+NRO: Yeah. Well, it’s already been said, but the problem is that asking web folks to review an integration proposal at Stage 2 usually does not work, because proposals can significantly change at Stage 2, so they usually prefer to wait. I was not aware of this official-signal option in some form. But we should really work out something like that for some of our proposals.
+
+MM: Okay. So I have a minor question on the slides as presented so far. But firstly, an orientation question: this discussion we are having right now, this is an intermediate step; is that correct?
+We are still going to have a response to deal with—
+
+GB: I will follow up with a compartments deep dive, and then a process discussion before the advancement.
+
+MM: Great. I will postpone all of my major issues until then.
+
+GB: Great. KKL, could I ask the same of you as well?
+
+MM: Okay. Just the minor issue, which, you know, if you want to postpone also till then, that might be appropriate –
+
+GB: If it’s about the designs as presented so far, this is the time for that design discussion.
+
+MM: Okay. So you talked about coalescing, and that also was a phrase that Kris Kowal used in our private discussion last evening. And in both cases, it confuses me. Maybe it’s just a terminology issue. But coalescing, to me, sounds like there are two things that already exist separately and are then made to be the same. And from everything I understood, both last night and today, we are not talking about coalescing, if that’s what coalescing means; we are talking about using key information to look up an entry in a table and find an existing value, rather than creating a new value. There were never two separate values that are then retroactively made into one.
+
+GB: Okay. I would update this slide to demonstrate coalescing, but maybe I can actually just make some edits here. So in this case, you have got two separate agents that have a ModuleSource at the same URL, but with different underlying contents. When I transfer the first module, I will get an instance of that source. When I transfer the second one, I will get an instance of its source, and in this case, there is no coalescing. If we instead had the same SourceText, so if both contained the SourceText `foo = bar`, then the SourceText equality is, as you say, part of the key, so you would get the same instance, and this is what we mean by coalescing. Even though they are completely separate things, and they were separately serialized and deserialized, their identity coalesces. Strictly speaking, it is just a key lookup. But I have been using the term coalescing for the source coalescing, because the source is a secondary key, not a primary key. So it’s a secondary-key coalescing.
+
+MM: What do you mean when you say secondary key? I think maybe I still don’t understand that. The pair together is the key.
+
+GB: Yes. From a lookup perspective, you would look up the string key, and then you would check whether the canonical source matches your canonical source ID. It’s like a primary and a secondary key.
+
+MM: Is it –
+
+GB: Maybe it’s a terminology thing. You could say the lookup is for that compound key.
+
+MM: Okay. As long as it’s consistent with saying the lookup is for that key, the issue of how you break up the overall key lookup seems like an implementation concern, not a semantic one.
+
+GB: That might be a case of spec fiction versus spec reality. The spec model is that you are effectively looking up the compound key. Because one part of the key is defined in HTML and the other part is defined in ECMA-262, it does end up being a two-part process.
+
+MM: I see. Okay. That was clarifying. I think I can postpone everything else I am concerned about.
+
+GB: Let’s follow up after the compartments discussion. Were there any other questions on the queue?
+
+KKL: I just wanted to throw in that I propose the word “coalesce” might be the source of the confusion. I think, yeah.
+A way to describe this is that in transferring a ModuleSource from one agent to the other, the identities of the ModuleSource objects diverge, and when you import them, the identities of the corresponding ModuleInstances converge. Is that a good way to describe it?
+
+MM: Not to me. It makes me even more confused.
+
+KKL: Okay. Maybe—better luck next time, then. Pray continue.
+
+GB: I am sure there will be more on that topic in the compartments discussion, so we can have a more in-depth discussion shortly.
+
+GB: All right. So thank you, Kris, for the review, and thank you for getting it in swiftly; I appreciate you having taken the time yesterday. In that review, what came up was that there are a lot of compartments interactions here that have not been fleshed out by this proposal, and so what I am going to attempt here is a rough working-through of what those compartments interactions might look like, for the sake of the compartments folks, so that they can feel comfortable with the proposal.
+
+GB: Please do interject if you want to clarify, or if I am going off track from how compartments work or are intended to work. For folks not actively interested in compartments, this will be far too much detail, so my apologies in advance.
+
+GB: Consider the compartments model today, before the ability to import ModuleSources: compartments moved to a model of module hooks and module instantiation. And that is compatible with source phase imports, insofar as a source can be instantiated. Then you have import hooks (they call it the import hook), and you can instantiate instances that have hooks, and through the hooks model be able to virtualize the module system. In this example, if I want b.js to resolve to a specific instance, I can implement that hook, and dynamic import is used as the compartment executor to execute the virtualization. And so in this example, we have got a static import to the local b.js and a dynamic import of it. Because module resolution is reified per ModuleInstance, the resolve hook only runs once: it runs once for the static import and the dynamic import, and you get the same thing. This kind of idempotence property of modules, that the import of the same specifier should return the same thing, is maintained through the design.
+
+GB: It’s worth noting that we –
+
+MM: I’m sorry, before you advance, can you go back to that slide and just stay on it for a second, so I can observe something.
+
+KKL: Mark, the parent instance argument is irrelevant. It doesn’t exist in the proposal, but also isn’t germane.
+
+GB: My apologies if I have substituted my imagination of compartments for the actual compartments proposal. I hope what I have written adapts to the compartment model.
+
+KKL: I think so.
+
+MM: Okay. Okay. I am fine. Go ahead.
+
+GB: So what is proposed is a local idempotence, not a global idempotence. Because the URL key is not defined in ECMA-262, we have no way of creating key equality, and so it’s possible to break global key idempotence easily: you could return a different instance for a “././b.js” versus a “./b.js”, and you would actually get a different instance. And so you can violate global key idempotence as far as the virtualization folks are concerned. It’s worth noting this is an edge case of the model, and it is quite similar to the one that dynamic import of source imports also exposes.
+
+GB: So what happens when we introduce the ability to import a ModuleSource?
+We talk about looking up in the registry the canonical instance for this source key. But sources can exist across compartments; you can pass sources around. So how do we define the canonical instance? The only way to do this is to introduce a compartment key that is associated with the instance doing the import.
+
+GB: So here is the concept of multi-compartment registries, where you have two separate registries, each with canonical instances. The instance has a home reference on it: C1 on the first compartment registry and C2 on the second compartment registry. When you import a source, in the spec fiction here, you put that source inside of the registry, you create a canonical instance in that registry for that source, and you get the canonical instance of that source in that registry. In the other compartment, you will get that compartment’s version of the canonical instance. Sources have canonical instances per compartment.
+
+GB: To illustrate that, what we would have to do is create a ModuleInstance with some kind of compartment key. So in this case, we are going back to our compartment constructor that defines the hooks, and passing that key into the ModuleInstance constructor. When we do that, the instances are associated with the compartments and able to maintain this relation: when you call this importB function, which will import a source (that is, its own source), it gets back the same canonical instance against that compartment. And so we are able to create this spec relation by design, that the import of a source for a key is the canonical instance of that key, which will be the same as the one that you would import normally through the module system. So we maintain the new invariants introduced by the source phase imports through the compartment key. There are some questions about how exactly canonical instances should be defined for compartments, and this is something that could be explored more in the compartment design process, but to widen the field here: canonical instances could be set once per registry. So it could be part of the constructor: does the constructor immediately set the canonical instance, via some kind of canonical-true option? That would mean: okay, this is going to be a canonical instance in the registry, versus a non-canonical, sort of separate instance that exists outside of the normal canonicalization process. Or is it an operation directly on the compartment, where you can create a source–instance relationship, and if you do it twice for the same source, it throws because it’s already been done?
+
+MM: I’m sorry. I don’t understand non-canonical. The model is that the things we’re talking about have a key that’s looked up by the multi-column equality that you talked about, and the per-compartment registry, if you will, has a single value for a key, which says that there is a per-compartment canonical instance for that key. First of all, does that correspond to what you call canonical, and second of all, what is the use case for non-canonical? Why is the concept there?
+
+GB: When you construct a module instance, you can have multiple instances for the same source, which are non-canonical. The canonical one says: when I do an import of this source, this is the one I usually want. But you can create other module instances against the same source that have different resolution behaviors within the same compartment.
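+
+A rough sketch of the per-compartment canonicalization being described (the compartment API here is hypothetical and still under design; `importHook`, `resolveHook`, and a ModuleSource `source` are assumed to be defined):
+
+```js
+const c1 = new Compartment({ importHook, resolveHook });
+const c2 = new Compartment({ importHook, resolveHook });
+
+// Sources can travel between compartments; instances cannot.
+const ns1a = await c1.import(source);
+const ns1b = await c1.import(source); // same compartment: same canonical instance
+const ns2 = await c2.import(source); // other compartment: its own canonical instance
+
+console.log(ns1a === ns1b); // true
+console.log(ns1a === ns2); // false
+```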
+
+GB: If we want to allow multi-instancing within the same compartment, we need to distinguish canonical versus non-canonical, and the distinguisher is whether it’s in the registry under the key. There could be a compartments model that doesn’t allow multi-instancing, if we deprecate the module instance constructor.
+
+MM: I see. I see. It’s the coexistence of the ModuleInstance constructor and compartments that creates the question. Yes. Okay, good. I understand. Thank you.
+
+GB: Great. Or, alternatively, a special canonicalization hook: if you bring up a source in a compartment that the compartment has never seen before, you could have a canonicalization hook that runs against the source. Based on the invariant, it should return an instance that is an instance of that source, or throw. The other alternative is to have it automatic, which is what you would expect: when you do the first load of an instance for a source, it creates the canonical instance automatically. So, yeah, there is a little bit of design space there. Another point worth noting is that the spec reality, where we have this ModuleRecord that is both the source and an instance, already solves the canonicalization, because the ghost instance can just be adopted if the compartment matches, and if not, we create a new instance. So you need a compartment field on the ModuleRecord that you would carefully check when doing this adoption.
+
+GB: So what does that look like in reality? Say we created a compartment, and we just—I am calling it the ghost record design, per the last bullet point, where you do it automatically. When you import the instance, it puts that instance in, because it was the first instance seen for the source, and it’s now canonical under dynamic import. So if you now have this importExternal function, which takes external sources, and it takes a source B—sorry, this is source A, apologies. So it takes an external source, and we pass in the source, and the canonical instance is going to be—sorry, that was a source B, my apologies. So if we import the source B, it adopts the source and creates a new canonical instance for it. If we later do an import of the string, its canonical instance will be that B instance. If we put in a source A, it’s the same instance for A. We have single canonical instancing here, so there are effectively no separate instances in this model. Furthermore, the import hook would not necessarily need to be called, because when you import the B, it’s potentially already at the key for that ModuleSource. So there are some questions about resolve and import, and that sort of aligns with how much to think about the local versus global invariants; there are still some questions there. But overall, the model seems to support these canonicalization features.
+
+MM: Can I repeat that back in my own words? If you’re importing from a source, then the full specifier plus the SourceText itself is the key. But then if, in that context, you import from the full specifier, from a string, there is no SourceText yet to compare. So the two design choices would be: you go out to the network and fetch the source in order to have the SourceText to complete the lookup of the key, which is unpleasant; or the other choice, and this is where your primary-key thing, I think, becomes a relevant part of the definition, is to say, okay,
+I have already got the primary key, the full specifier, so rather than go to the network, I am going to assume that the SourceText I would get is the one that I have already got, and proceed under that assumption. Is that a correct restatement of what you implied?
+
+GB: That’s the model. The details are the question, and the hook design is the question. The way this is presented here, I don’t think it’s clear that the import hook would never necessarily be called, because you want to normalize the specifier, and you could have alternative normalization. So effectively, I would say, if there were a way to pass a normalized URL as the full URL, and you say that’s the thing, then that’s this model. Because we don’t have a model for URL resolution, for key resolution, the non-source part of the key, this statement is not necessarily true in this framing. Sorry, I should correct this slide; it was late. But yeah, the model you described is correct, and there is definitely some design space there.
+
+MM: Okay. Good. I think I understand.
+
+GB: Canonical instances map one-to-one with their compartment, so the instance is associated with the compartment, and the compartment keys the instance. If you import it in its own compartment, it will load the dependencies and give you the instance fully loaded. If you were to go inside another compartment and import an instance that belonged to a different compartment, you are not going to start populating the importing compartment’s registry. Obviously, in this design, you could throw and say this is not allowed, that you should only import things in your own compartment. If you do want cross-compartment loading, that could be an option, where it will drive the other compartment’s loading to completion. The point being that only sources are shared between compartments; instances don’t transfer between compartments. They stay in their home compartment, associated with their own compartment.
+
+GB: To try to summarize that: we do require a compartment identity for this source key model, the canonical instancing model. There are questions of spec fiction and reality that need to be carefully considered, but both worlds are very much intact, and the combined ModuleInstance-and-source records actually help us to write the spec text in most cases, apart from having the ghost instances on cloned sources that are never accessible when there is no multi-instancing. As far as the invariants allow the separation, keeping the spec as simple as it needs to be, until it needs to be more complex, is better, so that we don’t try to refactor before we have all the design constraints in place. I have tried my best to explore the interactions as much as we could, but there’s some design work to go. Overall, there’s clearly only less work for compartments with the ModuleSource defined and all the transfer and import semantics worked through.
+
+GB: So, yeah. I am happy to have a discussion on compartments at this point, if you would like to. Kris Kowal?
+
+NRO: Yeah. A question about the table that you showed, with your compartment key. We don’t have compartments today, but we do have realms, and they share similarities, in that they are separate contexts that can share objects, when it comes to web frames. Even though we don’t have compartments today, do we need to add the realm as the third entry of the composite key?
+I guess also, compared to workers, with compartments it’s a more granular division.
+
+GB: I have also imagined that, since we need a compartment field on the ModuleInstance, that field would point to its realm, and so you could maintain that anyway. But I haven’t thought deeply about that.
+
+MM: I agree.
+
+NRO: Okay. I think the answer to my question is yes, because the map is already per realm. But I am not 100% sure about it.
+
+KKL: Yeah. For one, thank you for framing the conversation in terms of compartments; it’s been helpful for those of us who have invested in them. And for those who have not invested in compartments, apologies; I want to draw you in anyway. If we go back to the slide that illustrates the ModuleInstance constructor, with the compartment property and its handler: Guy is using “compartment” as a placeholder word, but this is more fundamental than compartment, and is an abstraction that lives beneath the layer of compartments, one that I think is well motivated for other reasons. Specifically, Nicolò has pointed out that in the shared structs proposal there would be an intersection between shared structs and multi-instantiation of modules and compartments and such, such that if you had multiple instances of the same source that contained shared structs, there is a relationship between the instance and which set of prototypes of those shared structs you are going to get access to. And this exists within multiple realms of the same agent already; all that compartments are adding is another level of indirection between the execution context and the associated realm, effectively. So the registry would move from being keyed on the realm to being keyed on what I am going to call, for the purposes of this conversation, the cohort. That is to say, within a cohort, you are going to get a single registry of ModuleInstances and also a registry of shared struct prototypes, and these can be the same concept at this level. And again, apologies: there are a bunch of complications that Guy proposes that I do not think will survive to the final design. I think that, in the end, the implication of the proposal that Guy is advancing, the ESM phase imports, landing ahead of module harmony is, for one, that it seems likely to me there will be a simplification: the module constructor with its hooks, as you see in the ModuleInstance constructor here, will probably need to simplify down to just being an options bag on the ModuleSource constructor in a future proposal, because the model that this proposal establishes is one where there is only a reified ModuleSource that directly addresses immutable source, and the attachment of hooks and semantics for the current realm is its association with a particular registry. And yeah, I think this simplifies in time. I wanted to make sure my fellow delegates are aware that that is an implication of this proposal advancing ahead of the fullness of module harmony.
+
+NRO: So earlier, Mark restated to Guy that we have two choices: go to the network to check if the source is the same, or not. When we need to define the canonical instance, we actually don’t have the choice between the two options: it’s already the case that dynamic import will not go to the network. So that behavior is already settled.
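+
+A conceptual sketch of the composite keying NRO and KKL describe above (the names here are invented for exposition, not proposed API):
+
+```js
+// The effective key for a canonical instance is roughly:
+//   (cohort, primary key, source)
+// where the "cohort" is the realm today, and would become the more
+// granular compartment in a compartments world.
+const effectiveKey = {
+  cohort: currentRealmOrCompartment, // hypothetical handle
+  url: "https://example.com/lib.js", // primary key: URL plus attributes
+  attributes: {},
+  source: libSource, // secondary key: exact source contents
+};
+```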
+ +GB: So if you had a ModuleInstance constructor, I guess the open question there is whether that instance has been injected into the registry to block further fetching, or whether the instance exists outside of the registry, in a sense. + +KKL: To follow up on that, I believe the implication is that the only way to have an entry in the registry is to dynamically import the thing. + +GB: That’s very much the model that we’re working to. + +MM: So, several things. First, some further questions about key equality. When you talked about the eval-ed module expression, something surprised me in what you are saying: on transmission of the ModuleSource, it loses its identity. Basically, a new identity is regenerated on each deserialization. I can understand why that might arise as a constraint of serializing data, if you don’t want to imagine that you can create unique unforgeable identities in data. But other than that, it seems to conflict with the goal, as I understood it from our private discussion last night, that the ModuleSource has a transmissible identity, where the key lookup equality is preserved across transmission. So if the same originating eval is transmitted to the same destination multiple times through multiple paths, once it has arrived, it’s equal to itself. Is that still desired, and is it the constraints of deserialization that caused you to give up on that? + +GB: You could in theory define a cross-agent unique keying and have some kind of relation like that. I think there are a lot of benefits to making sure we don’t introduce new side tables. And so the most straightforward behavior was to just have structuredClone, or serialization and deserialization, re-key evaluated sources. I would be interested to hear if there are other use cases for maintaining the key. It’s not something I have heard of as a desirable property. It was more a case of implementing the most reasonable design, as opposed to trying to introduce a new type of side table for a new use case. + +MM: I mean, the idea that it is a key locally, such that importing the same evaluated thing gives you back the same instance locally multiple times—it seems strange to have it produce the same instance for multiple imports by key equality, but then, if you transmit it through multiple locations and paths and import what you receive from each path, you have different instances. That seems strange. It seems like a weird, incoherent intermediate case. If you want to regenerate the key on every transmission, you should just have it not be canonicalized in the first place, so that every time you import it, even locally, you get a unique instance. + +GB: So there are benefits locally, because when using a ModuleSource constructor in the same agent, or same compartment, there are benefits to being able to treat that as a cross-compartment key. It’s only in the transfer between agents that the key is lost, because it’s a local key, not a cross-agent key. And early on we ruled out the idea of having key synchronization between agents, to avoid introducing a bunch of complexity. So to put that back on the cards would have to be motivated carefully. It’s not out of scope. It’s something that could be considered.
But it’s not something that has been strongly motivated today, and there are definitely some high bars for chasing a whole new type of synchronization. + +MM: To restate my words to see if we are in sync here: in the abstract it would be desirable to say that a ModuleSource has a transmissible identity that preserves module equality, but doing that cross-agent has complexity costs that are just not worth paying, so as an expedient matter, we are not going to maintain key equality across agents for the evaluated ModuleSources. + +GB: Yeah. And to be clear, with evaluated ModuleSources on the web, at least, you would have to have an eval CSP policy enabled; in general, the rooted sources, as we call them, are the much more recommended path, where for security the hosts control the sources. + +MM: Good. I understand that. I want to make an observation about the spec complexity. I am satisfied that, as far as I can tell, you have succeeded at specifying an observable semantics that is consistent with the spec refactoring that we are postponing. That was an issue that came up hard last night, and I am satisfied that you did that, and very glad that you did that. The statement that the spec would be more complicated on the other side of the refactoring, I don’t believe, and that’s based on a previous exercise that I did exploring what the refactoring would look like. But I do believe, supporting the same end conclusion, that the effort to do the refactoring is a complicated effort. Not that the landing point on the other side of the refactoring is more complicated, but the effort to do the refactoring is quite a lot of effort. Postponing it as long as we are maintaining observable equivalence is fine; I won’t object to that. I wanted to register that I don’t believe the resulting refactored spec would actually be more complicated. I think it’s actually simpler. + +NRO: I agree with everything that MM said. The refactoring makes everything much easier, I think, to read. But it’s a lot of work. + +MM: So since you are asking for 2.7, I will state my position there: I very much want to see this go forward to 2.7. You have successfully dealt with all of the things that were red flags to me yesterday. So congratulations. I am on the edge of approving it for 2.7. But I think I don’t want to do that today, simply because of the size of the surface area of new issues to think about, and my uncertainty and fear with regard to whether I am missing something. If I had had more time to think about the new issues raised by the changes since what I understood last night, my level of fear might be reduced to the point that I would approve today. But I just think we need to postpone, and as we discussed privately last night, we are going to continue to discuss this in the TG3 meeting, which meets weekly, between now and the next plenary, and I expect to be fine with 2.7 as a result of those discussions. + +GB: I am just going to quickly run through the last two slides and then do the formal request for Stage 2.7, as opposed to taking this as an immediate blocker, if you would be okay with that. I will jump to the very end. + +MM: Sure. + +GB: So what we’re looking to achieve in the next steps is: as soon as the import attributes spec PR lands, we will land the source phase PR, which this specification is based on. Source phase imports are now shipping in V8, soon to be implemented in Node.js and Deno, after which it could seek Stage 4.
To keep the module harmony trend going, the goal is to have this proposal closely followed so we can unlock module expressions and module declarations next year. If we can achieve 2.7, the downstream HTML and Wasm specification updates can move forward, and we would come back for a Stage 3 request before landing the HTML PR. So the HTML PR would not land before we seek Stage 3, and we would not regress the Wasm integration either, without first getting to Stage 3 at TC39 and having everything presented at both groups. + +GB: To give a very brief demonstration of the spec: it’s a very small piece of spec text on dynamic import and a couple of invariants on HostLoadImportedModule. So it’s not a large surface area of change to the spec. But being able to achieve Stage 2.7 would allow us to move forward with further investment in the proposal. I would therefore like to formally request Stage 2.7. + +MM: So thank you for that, and those clarifications. I am still going to object, but if you get consensus for 2.7 right now, then could we agree to a process where I am reserving my approval but could approve before the next plenary? In which case, if we get conditional approval now, at the point where I am comfortable approving, then you can announce 2.7. Is that a conditional stage advancement that we could agree to? + +CDA: That seems a little bit awkward. + +MM: Okay. + +CDA: Yeah. If you have blocking concerns now— + +MM: It’s simply my degree of uncertainty, and that 2.7 is a green light to implementers to proceed to implement. And long experience on the committee says that once there are entrenched investments by implementers implementing this stuff, if there’s a mistake from my point of view that needs to be corrected, especially if the people who have invested in implementations don’t particularly care about the consequences of that mistake, the friction in getting the mistake corrected is much higher once they have been given the green light to implement and they have invested in implementations. So the time to correct those things would be before 2.7. + +CDA: Okay. There’s a couple of comments on the queue. NRO? + +NRO: Yeah. Just that if Mark, thinking more about this, decides there is some tweak needed, even if it’s just some integration, it should probably be re-presented rather than using the conditional advancement. Like, I am fine with the conditional thing; what I am saying is that the condition should be: this is fine only if it ends up with Mark saying, okay, everything is fine. If Mark requests tweaks to the whole picture, it should probably be brought back for clarity, and then I assume you would get quick approval next time. But it should be presented with the tweaks. + +MM: That makes sense to me. + +DE: I agree with what Nicolo said: if there are any changes, bring it back to committee for review. We have done lots and lots of these conditional advancements in the past, based on someone needing a bit more time for review, including with Mark in particular, but also other reviewers. So I think it makes sense to do that here. We definitely need to work out all of the observable semantics before Stage 2.7; if we are not sure, we need to be, and this conditional is a way to do that. At the same time, I just want to make a slight correction about whether this is a signal to implement. The reason we separated Stage 2.7 from Stage 3 is because we want there to be tests present to, you know, save the implementers’ time and everything.
It’s optional to implement after Stage 3, and implementation sometimes happens before 2.7 for prototypes. But I wouldn’t consider Stage 2.7 to be the implementing signal. That’s it. So I support conditional consensus on 2.7, conditional on this being the final form of the proposal and Mark asynchronously signing off on it. + +GB: I would be happy to engage in meetings on a conditional progression, under the understanding described by both Nicolo and Dan. + +CDA: Okay. Do we have support for 2.7? + +NRO: +1. If I can add, I was in the same boat as Mark. It took me a while to understand that the spec text matches the implementation model. + +CDA: Okay. Other voices of support for 2.7? I think Dan was a +1, if I understand correctly. + +CDA: Aside from Mark’s concerns and the pending review, do we have any voices of objection to advancing this to Stage 2.7 at this time? Any dissenting opinions are welcome as well, even if they are non-blocking. All right. + +JHD: I would just ask that there be a specific issue where MM can comment when he has given his approval, so that we all have a place to follow and be notified when the condition is met. + +CDA: GB, would you create an issue in the proposal repo for the conditional 2.7 advancement? A home for those concerns and the follow-up approval from MM. That would be great. + +CDA: Okay. You have 2.7, conditional. + +### Speaker's Summary of Key Points + +(earlier mid-summary): + +GB: So to summarize, we provided a ModuleSource intrinsic. The spec text is complete and has all necessary reviews. There’s a possibility of editorial refactoring. We have investigated all of the Stage 2 concerns: cross-specification work, defining keying based on the source record concepts, and identifying necessary refactoring for the compartments specification. The semantics have been presented at both the WHATNOT meeting and the Wasm CG without any concerns raised. + +GB: We presented the proposal semantics, including an update on the Stage 2 questions: cross-specification work, ModuleSource keying and equality, the behavior of dynamic import across different agents, and also an investigation of compartment interactions and the refactoring implications for future ModuleSource records and compartments. + +### Conclusion + +We obtained Stage 2.7, conditional on approval from Mark after further interrogation of the semantics through meetings at TG3, which will happen before the next plenary. diff --git a/meetings/2024-12/december-05.md b/meetings/2024-12/december-05.md new file mode 100644 index 00000000..80f08d66 --- /dev/null +++ b/meetings/2024-12/december-05.md @@ -0,0 +1,450 @@ +# 105th TC39 Meeting | 6th December 2024 + +----- + +**Attendees:** + +| Name | Abbreviation | Organization | +|------------------|--------------|--------------------| +| Waldemar Horwat | WH | Invited Expert | +| Jesse Alama | JMN | Igalia | +| Istvan Sebestyen | IS | Ecma | +| Gus Caplan | GCL | Deno Land | +| Dmitry Makhnev | DJM | JetBrains | +| Andreu Botella | ABO | Igalia | +| Keith Miller | KM | Apple | +| Eemeli Aro | EAO | Mozilla | +| Richard Gibson | RGN | Agoric | +| Ron Buckton | RBN | Microsoft | +| Jirka Marsik | JMK | Oracle | +| Jack Works | JWK | Sujitech | +| Samina Husain | SHN | Ecma International | +| Daniel Minor | DLM | Mozilla | + +## Vision for numeric types in ECMAScript + +Presenter: Shane F. Carr (SFC) + +- [slides](https://docs.google.com/presentation/d/1Uzrf-IwPrljF2BhCbCWuwQxlgGSm_bcd3FRbPO3Yrio/edit#slide=id.p) + +SFC: Hello everyone.
You can see my slides. So, a little preface for this presentation: we’ve been going back and forth for a little while now regarding different number-related proposals, and it concerns me that we haven’t taken a holistic view of how numbers work in ECMAScript and how we want them to work moving forward. We’ve been narrowing in on “let’s solve this little problem here and solve this little problem there”. So my goal for this presentation is to have a discussion about how we want numbers to work in ECMAScript in general, and that can give us a framework so that when we work on the other proposals we can see how they fit in with the big picture. That is the goal of this presentation. + +SFC: So here is what I have on the agenda. First I want to talk about what we currently have. Then I want to talk about problems that I’ve heard delegates wish to solve; in the process of making this presentation, I spoke with a number of other delegates, and I synthesized these into five unique problem spaces. The third item is possible ways to solve the problems, and the last one is opinions of delegates—not just Shane’s opinions, but opinions of several delegates. + +SFC: Starting with background on what we currently have. We have these two numeric types: Number and BigInt. Number has been around for a long time; it is approximately IEEE 64-bit floating point, and it does funny things with NaN and Infinities. I have this little line saying the domain is real numbers, to distinguish it from BigInt, where the domain is integers. One thing that’s different about BigInt is unlimited significant digits, whereas Number has only what fits in IEEE 64-bit floating point; but BigInt covers only the domain of integers. Let’s talk a bit about Numbers. + +SFC: So hopefully people are familiar with this. What is 0.1? 0.1 in memory is represented like this, as a 64-bit IEEE floating point number. The bits of 0.1 are broken down into the sign, exponent, and mantissa, and the exact, full-precision value is actually 0.1 followed by a bunch of zeros; after you get past 15 significant digits you get other digits, and it always ends with a 625 at the end, because it’s a base-2 number. So is 0.1 really 0.1? I think this is a question that has confused me, and confused a lot of other people when I talk to them about it. + +SFC: So is 0.1 actually 0.1? Really interesting question. Because IEEE floating point numbers are discrete points on the number line, right? And every particular value of an IEEE floating point number can be represented in one of two ways: it has a binary representation in memory on one hand, but also the shortest round-trip decimal. Every engine ships an algorithm for computing what the shortest round-trip decimal is. There is a unique representation of 0.1 as an IEEE floating point number. So if you have those bits I showed on the previous page, that is 0.1. So yes, it is. But it’s also not. It depends on how you interpret it. If you interpret it as decimal, it’s 0.1. If you’re interpreting it as binary, it’s the other thing. That’s the important distinction to draw. When you do arithmetic, you always do it in binary space: arithmetic uses the binary representation of the number. This is why you get things like this.
0.1 plus 0.2 is not equal to 0.3; it’s equal to the binary floating point number that is one tick above 0.3, which I have here on the screen (0.30000000000000004). That’s how binary floats work, and that’s why they do what they do, right? + +SFC: So with that little bit of background, I’m going to go into problems. But I’m going to open up the queue first to see if there’s anything on it. Doesn’t look like it, so I will keep going and talk about the problem space. What I did is synthesize things down to five core problems that I see, in terms of things that the language doesn’t currently do—issues that we would like to be able to solve. Problem 1 is arithmetic on decimal values. I synthesized this from the readme file of the decimal proposal, to try to summarize what I see the use case of that proposal being. When you’re doing financial calculations, like calculating sales tax, for example, you want that to be done in decimal space, not in floating point space. There are specific rules that have to apply, and those rules are based on arithmetic as you learned it in second grade, which is in decimal space, not in floating point space. That’s something we don’t currently have the ability to do in the language: there is no built-in mechanism to do 0.1 plus 0.2 equals 0.3. That’s one feature that’s missing: arithmetic on decimal values. The second feature missing is representing the precision of numbers. There’s a thread that I wrote on the decimal proposal repo explaining this idea. Depending on how the number is written, it may be spoken differently. + +SFC: Therefore it depends on how you internationalize it. You say “1 star” because that’s singular, but “1.0 stars”—the zero at the end triggers the plural form, even in English. It’s interesting that this shows up in English, which, when it comes to grammatical plural cases, has fewer rules than languages such as Polish, Arabic, and Russian. The fact that it shows up even in English means this is a very common, widespread problem. So why do we care about representing precision? First, for Intl. When we format a number, we want to know what we’re formatting, and to decouple as much as possible the internationalization step from the representation step. I have a long post on GitHub if you want to dive more into this topic. + +SFC: Two is, we want to interop with systems that retain precision. Among the IEEE decimal systems, most retain the zeros. I have done the analysis on GitHub and looked at which languages, Java and others, retain the zeros. To fully round-trip, we need this capability. The third is finance and scientific computing. Some people have posted on the issue noting that trailing zeros are important in exactly the financial calculations that the decimal proposal is aiming to solve. I make a note here that the IEEE reckoning of precision is primarily focused on the financial use case; scientific precision could have different ways of being represented. And then four is possibly HTML input elements. So that’s problem space two: representing the precision of numbers. There are a lot of use cases for this. It’s not something that we should leave out.
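Both of these behaviors can be checked in current engines. A minimal illustration (expected results in comments; the `Intl.PluralRules` digit options stand in for precision that today must be passed as formatting options rather than carried by the number itself):

```js
// 0.1 is stored as the nearest IEEE binary64 value, not exactly one tenth:
(0.1).toPrecision(55);
// "0.1000000000000000055511151231257827021181583404541015625"

// Arithmetic happens in binary space, hence:
0.1 + 0.2 === 0.3; // false; the sum is 0.30000000000000004

// "1 star" vs. "1.0 stars": the same point on the number line selects a
// different plural category depending on its displayed precision.
new Intl.PluralRules("en").select(1); // "one"   -> "1 star"
new Intl.PluralRules("en", { minimumFractionDigits: 1 }).select(1);
// "other" -> "1.0 stars"
```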
+ +SFC: The third problem is representing more significant digits. The Number type is limited to 15.95 decimal digits on average; that means 15 decimal digits are safe to assume. 15.95 is the average, which is enough for a lot of cases, but not enough for every case. For example, large financial transactions, things on the order of bitcoins, could exceed that limit. Interoperability with decimal128 is also an issue here, because if you have a system like Python or Java that uses decimal128, it may have more than 15 significant digits, and you may want to interoperate with it. And the third is big data and scientific computation. From time to time when I’m training my machine learning models, I run into this issue where I have two weights that are very close to each other, I try to take the difference, and all of a sudden I’m down to three significant digits, which is not always helpful. There are definitely use cases in that area. So that’s problem space 3. + +SFC: Problem space 4 is unergonomic behavior. I could have put a few more examples on this slide, but we should have a numbers framework that just works. We want to make sure that programmers can avoid footguns like 0.1 + 0.2, in order to have something that works for users and doesn’t invite the mistakes that you can easily make today. + +SFC: Problem 5 is associating a dimension with a number. For example, we want to be able to take not only the point on the number line, but also the unit being represented, for example dollars or meters. Why do we need this? Because in Intl.MessageFormat, `Intl.PluralRules`, and so forth, this is something that we want to have as part of the data model; it also feeds into the unit conversion in the Measure proposal, and it avoids a certain class of programming errors. After my talk, EAO will go into this more, to justify this problem in case people are not convinced it is a problem that we need to solve in the language. EAO, in the next time slot, has an excellent slide deck on the motivation behind problem number 5. + +SFC: I see JHD has questions. Before I get to those, I will go ahead to the next section of the slides. I think they might be answered there. + +SFC: A non-issue, and I want to emphasize this because I think it has been a point of confusion, is being able to represent decimal values. As I showed earlier in the deck, as long as you take your IEEE binary floating point number and say “I’m going to interpret this as a decimal”, you can represent decimals exactly. Like, 0.1 does triple-equal 0.1: if both are created the same way and normalized the correct way, they will equal each other. That is actually a correct representation of 0.1. So representing decimal values is something that we can do in the language; it’s not necessarily type-safe, which goes into problem 4, and maybe not ergonomic, but it can be done. The problems that we often see arise when we operate on numbers. We don’t have decimal arithmetic, but we are able to represent decimal values if you interpret the number in the correct way. + +SFC: I’m going to go over some solutions now. The solutions are not in any particular order; I put them in this order so as to most easily explain what the different aspects of these different types of solutions are.
When I say solutions, I mean ways all the different problems we’re trying to solve can fit together in one cohesive package for developers. + +SFC: Solution 1 is the Measure proposal, which BAN presented at the last plenary and EAO will describe more today. It’s a number plus a precision and a dimension. The number is currently a JavaScript Number, a point on the number line; it could also possibly support current and future numeric types like BigInt. Precision is the number of significant digits, and dimension is the unit. So this solves the precision problem and the dimension problem. It’s possible that decimal math could be included via prototype functions. It’s possible that you could support more digits via string decimals: if the number field is abstract, then we could add additional functionality to say, if the number is a string, go ahead and do decimal math, and use string as the type in which you encode the arbitrary-precision decimal value—without actually exposing it directly; it would be inside this wrapper. Measure could be an all-in-one solution that represents all these things. Dimension could be null: if you just want to represent a decimal value without any unit attached to it, set dimension to null. That’s fine. Otherwise you have this one package that has all these features and solves all the problems—except not necessarily ergonomics, because it doesn’t give a direct way to do 0.1 + 0.2 as a primitive. + +SFC: The next type of solution is decimal128 with precision. IEEE decimal128 is an encoding over 128 bits that is able to represent numbers with quanta and precision (quantum and cohort). JMN talked about this in previous plenary meetings. If we add such a type to ECMAScript, we could add a type that is fully conformant with IEEE. Measure no longer needs precision, because decimal carries precision; we solve the precision problem. One concern that I heard when discussing this with folks is precision propagation. IEEE gives a very specific algorithm for, if you have two numbers and you multiply them together, how you calculate the output and how many trailing zeros the output has. That algorithm is sometimes surprising in how it behaves. I’m told it’s based on a certain set of rules for how you do financial calculations, but it’s not necessarily a generally applicable algorithm. Another concern is the equality operators. If you have a decimal value of 2.5m and another of 2.50m, you want them to be equal, because they represent the same point on the number line, but the representation in memory is different, because they have different precisions. Do you include the precision as part of the equality operation or not? There’s been some debate about that, and it causes concerns especially when we look at what the behavior would be with primitive values, because the constraints there are much tighter. If decimal is an object, we can have two equality functions, equals and totalEquals; that’s what Python does. When it’s a primitive, we don’t have that luxury. + +SFC: Solution 3 is decimal128 without precision.
This basically means that, within the decimal128 space, we only include the numbers that don’t have trailing zeros, and the ones that do have trailing zeros we just don’t expose from JavaScript. If you have a decimal128 value that has trailing zeros, that is not something you’re able to represent as a decimal128 in ECMAScript. The main benefit I’ve heard for this is that it’s potentially better for a future primitive decimal, because it makes the equality operators behave the way certain delegates expect them to behave, which is nice. A concern I have is that the unused bits are wasteful, because IEEE gives us a framework to represent precision in the same bits in which the decimal is represented. Overall, if you take every bit pattern that could possibly represent a decimal128, 10% of them have trailing zeros; for numbers with fewer than 20 significant digits, a common use case, over 90% of them can be represented with their trailing zeros. We lose the ability to represent those values if we have this limitation. Storing precision separately is possible, but it doesn’t work as well with arithmetic operations and so forth. The other concern is that this is not interoperable with decimal128 systems where precision is part of the data model, as in other languages; we lose the ability to have interop. This is the concern that I raised in Tokyo when this was presented. + +SFC: Solution 4 is DecimalMeasure. This is a new one I’m throwing out there to put it in the field of possible approaches. The idea is that we take the idea of a Measure, but instead of wrapping a Number with a precision, it wraps a decimal with precision, and associates that with a dimension. This could have decimal semantics, and a future primitive decimal can still be its own type. It could be composed: could the DecimalMeasure type be composed with a fully normalized primitive decimal? There’s no reason it can’t, because these are two different enough types that they could co-exist in the same universe. And there is an alternative, i18n-focused DecimalMeasure: one way to think about Measure is that it’s just an input type for Intl operations; the other is as a general-purpose useful type with other operations on it. DecimalMeasure could take either shape. + +SFC: Solution 5 is Number.prototype.decimalAdd. I want to talk more about this one; I posted about it in the decimal repository. Number is able to represent a decimal value, but you can’t operate on it as a decimal; that’s the main footgun. decimalAdd could be a prototype function defined to say: if you have 0.1 and 0.2, you add them up as if they were decimals and get a decimal on the other side, as a Number. I hope that makes sense to people. And this can be a function on the prototype. There are a couple of ways it could be exposed to developers. It could be exposed with a new operator: since these are already primitives, we can spec out an operator. Another is JSSugar or TypeScript: TypeScript can introduce a type called DecimalNumber or something like that, and in TypeScript land, if you use a plus operator on a DecimalNumber, it gets compiled to JavaScript as a.decimalAdd(b). This is a nice way for JS0 and JSSugar to work together: you have the abstract layer and then the built-in layer. It’s a minimal change you have to make on the built-in layer.
It really gives TypeScript and JSSugar the ability to do something on the user-facing layer of the API by exposing this primitive operation called decimalAdd. I’ll keep going. + +SFC: Solution 6. There we go. This is one I brought up; I haven’t gotten a very specific, clear signal from any engines yet. I sent some inquiries and haven’t got an answer on whether it’s feasible, but it’s an interesting idea. We have the existing BigInt type. What we could do in principle—again, I don’t know if this is feasible or not—is add a field to it for the scale, and the scale could represent a decimal value. Existing BigInts would work exactly the same way they work now: if you construct them, they’re fine; if you compare them, they’re fine; everything works as expected. However, you would be able to construct a BigInt with a scale, and if you do that, what you get is a decimal BigInt. There are some questions here about what you do with the slash operator: if you have two BigInts and divide them, that would have to maintain existing behavior, so we would probably have to add another operator that does a decimal divide, for example. Another concern is that we would have to evaluate the risk of changing BigInt’s domain. For example, if there’s a program that assumes it has an integer, and maybe indexes into an array, and then you pass in a BigInt that’s not an integer anymore—could that be a problem? We would have to evaluate that risk. And of course feasibility. It’s a solution I want to throw out there; I haven’t seen anyone give a definitive answer that, no, this is not feasible. I think it’s an interesting avenue that could be explored. The benefit is that it gives us a primitive right out of the gate, because the primitive is already there. So that is solution 6. + +SFC: Now I will go through some opinion slides, and after these we’ll open up the discussion. I’m glad I booked an hour for this, because I think we might need it. So, my opinion. I tried to make the slides as neutral as possible; some of my biases may have slipped in a little bit. My opinion is that we should leverage IEEE to represent precision, because IEEE gives us a way to do it that is very well defined and matches how other languages solve the problem. I think we should leave the door open for a primitive decimal, but I don’t think we should design around a primitive decimal today; it’s something we should leave the door open for in the future. I think we should design a good object type for dealing with these numbers, because that’s what developers will have today, and for probably the next decade or so; even in a world with a primitive decimal, developers are still going to be using objects. If we introduce a type that makes it harder to add an object decimal, that’s a problem. So I think we should focus on building a good object interface for decimals. My third point is that DecimalMeasure seems like it could be a decent solution that solves most of the problems in one package and leaves the door open, so I wanted to float that out there as a possible approach. The main pushback I heard is that it scope-creeps the Measure proposal and merges the problems so that they get solved one way rather than another. + +SFC: I asked NRO for an opinion, and this is what he said.
He pointed to Temporal; I’m also a co-champion for Temporal. We designed seven different types with different data models: there’s a PlainTime, a ZonedDateTime, an Instant, right? There’s a universe of types, and when you’re inside one of those little types, no matter what you do with it, it will always be well defined. I think it’s cool we did that with Temporal, and maybe there’s an opportunity to do that with numbers. NRO, I don’t know if you wanted to add anything to that. + +NRO: I think you represented it somewhat well. What I like about Temporal is that you don’t have to deal with all the sub-slices of the whole model; you don’t have to worry about things you don’t need. You can have a PlainDate and not worry about the time zone. Also, if somewhere we expect a ZonedDateTime, it’s easy to check that we got one, and not something else; you don’t risk using the wrong thing, because we have a good runtime type system there. So if we’re going to have different types of numbers, like many more variations, I would hope we go in some direction like that, where, for example, I don’t have to worry about a dimension if I don’t care about that, and likewise for a number with precision versus one without, and I don’t accidentally use the wrong operations, binary versus decimal. + +SFC: Then I will go ahead and move on to JHD’s opinion. And again, once we get through the slides, we can go to the discussion. After talking with JHD, we established that a primitive decimal is a really good long-term solution, because it solves the ergonomic problems and some of what NRO was talking about with the type system: knowing what you get in and what you get out. But a solution that solves only a subset of the problems, without a clear path toward the long-term solution, leaves us in a worse position. Imagine we ship a solution today that does a little bit but not all of the stuff; then, in a world where we can add a decimal primitive, it’s now harder, because we have this new type that we have to interop with. If it weren’t there, we could add the primitive cleanly; we could be where we are now, add a clean decimal primitive, and everyone is happy. It muddies the water. JHD, I don’t know if you want to add anything. + +JHD: This is a good summary. I also spoke a bit during JMN’s presentation in the previous plenary about my wider vision. I have some more specific comments but can wait for the queue. + +SFC: Cool, thank you. So then I have EAO: not all these problems need to be solved in the standard library. The i18n concerns could be solved with a thin Measure protocol, you know, with these precision, dimension, and string-decimal fields. Do we need a type that solves all the problems? Maybe we just need to solve the one concrete use case that we really have today, which is how we interop with, for example, MessageFormat, and design the protocol; we don’t need to muddy the waters with decimal, and can leave that open to solve in the future. We don’t have to think about solving all those problems now. We do need to solve the Measure problems now, because to interop with native types, with primordial types, there needs to be a protocol to read the measure from. Maybe we should focus on that problem space. EAO, I don’t know if you had anything to add. + +EAO: I have half an hour to continue on this topic later. Nothing more at this time. + +SFC: Okay. Thank you.
So I threw in this extra slide yesterday, just from thinking a little bit more about what NRO and others had said. There’s a little bit of a composition here: there are three things that could be layered on top of each other. You have the normalized decimal128, the full decimal128 that has the cohorts in it, and then you have your Measure, which also has a dimension in it. So, thinking about how the types compose, this could be one framework that we could use. I have no more comment on this other than just showing this slide; it’s just a brainstorm. Thank you so much for hanging with me through the presentation. I think we have half an hour to continue with the queue. There’s quite a queue to discuss; I’m happy that people are interested in this subject. So with that, CDA, it looks like it’s back to you. + +JHD: So, on like the second or third slide: I’m not aware of any system where 1 star and 1.0 stars would mean different things. Every star system I know of that’s not talking about stellar phenomena is either in increments of 1 star at a time or half a star at a time. Anything more granular than that gets hairy as a visual representation. Can you elaborate on when those are different? + +SFC: So I think you’re talking about the Problem 2 slide. I posted a lengthy essay on GitHub, and I think you read it before. Basically, my evidence that 1 and 1.0 are different things is the fact that they produce different pluralizations. Even if they represent the same point on the number line, they need to be handled differently in software, because one has no precision and the other has some precision. The fact that they need to be treated differently in software means we need a way to represent that. + +JHD: Thank you. That was good. And then my next queue item: when you’re saying precision, I feel like that word is used to describe two things. One of them, I think on the Problem 3 slide, is supporting enough decimal places to do math: if you have a 20-decimal-digit precision number, you need to be able to do math with it. But the second bucket is from science class and such, where you actually care about the underlying precision of the numbers you’re using and how combining them affects it. I don’t know how to differentiate those two, but I think it’s important to try to figure out which one of those two, or both, we’re talking about when we talk about precision. Personally, I find the first bucket, which is just supporting very fractional numbers, very important; that is something that needs ergonomics and accuracy and perhaps deserves primitive support. But I think the second bucket is something that is important and perhaps could be satisfied by a userland or API-only solution. + +SFC: Cool. Just to clarify what I mean in this presentation: I used the word precision for trailing zeros, and significant digits to refer to the number of digits, which is sometimes called precision in other contexts. I tried to be consistent in this presentation about which words represent which things. WH, looks like you’re next. + +WH: I have some questions about the bit pattern concerns on the slides. Why do you care about bit patterns of numbers?
+ +SFC: Why do I care about bit patterns? I can say why I care about bit patterns. In the all-in-one Measure type, we have an interesting issue: we have a number, which is 64 bits in one chunk of memory, or, in a future with a normalized primitive decimal, 128 bits. All of a sudden we have this precision field and dimension field. Dimension is probably a pointer or an enum, more likely a pointer to a string value or something like that. And then we have this extra precision value, which—what is it? It’s a big bucket of things: it could be a number of significant digits, it could be a number of fraction digits, it could be error bars, for example. On the one hand that’s cool, we have the flexibility; on the other hand, it’s a big muddy, murky space. IEEE, with the bits, gives us a way to represent that compactly. We can eliminate the extra fields from the Measure type and pack it all into the 128 bits of the decimal type. Engines don’t have to worry about supporting this extra field, we don’t have to worry about figuring out what the extra field does, and we leverage the existing machinery that IEEE has already given us. Does that answer your question? + +WH: I don’t understand the concerns about wasted bit patterns. Using Decimal128 to just represent points on the number line representable in Decimal128 requires 128 bits, so there are no wasted bits in the representation. If you count the number of possible values, there are 340 undecillion possible 128-bit patterns, out of which there are 221 undecillion possible points on the number line. You can represent those in 128 bits. You cannot represent those in 127 bits if you want a fixed-width number type. As far as wasted bit patterns go, the bigger source of waste is actually the base-1000 representation Decimal128 uses. There are Decimal128 values that have thousands of possible bit patterns all representing the same number. That’s due to its using the base-1000 representation, where each group of three digits uses 10 bits. So it seems a bit in the weeds to worry about Decimal128 bit pattern efficiency. I’m not sure why that should have any effect on our proposals. + +WH: The other thing I’d like to note is that on a later slide you discuss the BigDecimal proposal, calling it BigInt. That has issues which have been well-discussed and which are not on the slide. When reviving proposals like that, it would be good to replicate the main concerns about them on the slide. + +SFC: For the second point: I did a little bit of looking around, but I didn’t find that. If this has been discussed, I would definitely like to read more about it. + +WH: We spent many hours on this. The primary concern is runaway precision with multiplication. + +SFC: Cool. I would like to read more about that. And regarding your first question about wasted bit patterns: another thing that I didn’t put in this deck, which is maybe worth mentioning, is that if we’re going to have 128 bits and we’re not going to be representing precision, we could actually get a little bit more out of it if we did binary128, which IEEE also defines. If the whole plan is to not represent precision, we could use binary128. Using decimal128 is not as efficient; I will not be doing machine learning with decimal128. I might use it for things where I really care about precision, like financial calculations, but I won’t do big data with decimal128.
Binary128 is another option: if that’s really the thing we care about, it’s the more efficient choice anyway. + +WH: For machine learning you want the least possible width because it’s faster. + +SFC: You want the least possible width that gives you correct results. And 64 bits is usually enough for that. + +WH: We could debate what “correct” means. Anyway, we’re going off into the weeds. Let’s move on. + +SFC: NRO is next, with a comment about this one. + +NRO: It’s more about JSSugar than numbers. When we talk about JSSugar, we always dream about what tools could do but are not actually able to do. I see RBN on the queue and won’t speak for TypeScript, but for every tool except maybe TypeScript, any kind of type-directed compilation that affects runtime behavior is a non-starter, and I suspect the same is true for TypeScript. + +RBN: I concur with NRO on this. TypeScript’s position is to not do type-directed emit unless we are able to statically determine that syntax can only be used a certain way; otherwise we would not be able to transpile it. Something like `~+`, where every `~+` is always transpiled to the thing on the left, dot, decimalAdd of the thing on the right, or something like that: yes, that’s feasible. That’s something we can always do regardless of what the input value is. If it’s something like transpiling plus, we can only do that if we transpile plus for everything, and that would slow down everything. We would not be transpiling plus. That is not something that we would be able to do. + +SFC: Going on from that point, then: even if you don’t transpile plus, is there still the possibility of writing a lint rule that catches when you use plus on the decimal type and maybe meant to use `~+`? + +RBN: That’s essentially feasible, though it’s not going to catch everything. If we know the type is the decimal type, that is something that you could be warned about. + +SFC: Okay. Thanks for that comment. Looks like EAO is next on the queue. + +EAO: Just continuing on this same slide; hopefully a quick question. Given that we have the `Math.sumPrecise` proposal currently at Stage 3, I’m wondering, doesn’t that actually provide a solution for the use cases that something like decimalAdd or `~+` would address, so that the concerns here would be the ergonomic improvements needed beyond what `Math.sumPrecise` already does? + +SFC: I don’t know if KG is on the call and could make a comment about that. I think WH is on the queue. + +WH: `Math.sumPrecise` gives you precise binary addition. `Math.sumPrecise` of a set of numbers will always be equal to the mathematical sum of the numbers rounded to the nearest representable IEEE double value. When adding two numbers, this is always the same thing that the built-in `+` operator does. When adding 0.1 and 0.2, `Math.sumPrecise` will by definition likewise produce 0.30000000000000004, because that’s the nearest representable IEEE double. + +SFC: Just to echo that: I tried out the `Math.sumPrecise` polyfill and it had that behavior. So unfortunately that proposal doesn’t solve this problem. It has to be another proposal. + +KG: I was on mute. You can’t solve that problem as long as you’re using Numbers, because the Number 0.2 is not the decimal number 0.2. It’s the floating point number, something more complicated than that. + +SFC: Looks like MM is on the queue. + +MM: Yes. So let me start by asking you a rhetorical question. If I ask you to write down two-thirds to four significant digits of precision, what would you write down?
+ +SFC: Two-thirds to four significant digits of precision? This is a little mental exercise? + +MM: Yeah. + +SFC: Well, I would have to know what rounding mode we’re discussing. If we’re assuming half-even rounding, that would mean the last digit would round to a seven. + +MM: How many sixes would you write down before you wrote down the seven? + +SFC: That would be 0.6667. That’s my mental model. + +MM: Okay. Good. Thank you. So the question was rhetorical. The larger point I’m going to make is that there are many different notions of precision, and I find that the one bundled into IEEE decimal128 is not any of them in a coherent manner. In particular, the notion of precision that you’re emphasizing when you talk about “1.0 stars” is a display notion of precision that is usually static. It is usually not a degree of precision that is data-dependent: it applies to all the data flowing through a given call site, or all the data flowing through a given parameterized system; it is parameterized more statically than individual units of data. I will note that, in the example I just posed, that is not what IEEE will render for two-thirds, no matter what the non-normalization is, because it’s not an issue of trailing zeros; it’s a question of the overall total digits of rendering. If you’re in a context where what you want to see is numbers rendered to four digits of precision, and there are many such static contexts, rendering two-thirds as all possible sixes followed by the trailing seven is what you get directly out of IEEE, and not what you want when you’re trying to use precision to control a display. The other notion of precision that I think is coherent is something to capture the notion of error bars. There are many different ways to do this, many different theories of that: there are statistical error bars, where you’re trying to propagate one standard deviation of error under some statistical independence assumptions, and there’s propagating a lower bound and an upper bound, trying to propagate worst-case error bars. You’ve agreed that the scientific notion of precision, which is intended to take error bars into account, is certainly not what IEEE is doing. I don’t see any theory of what IEEE is doing that actually matches any use case well. So I’ll let that be my first question, and then I’ll put myself back on the queue. + +SFC: I can respond to that a little bit. First of all, as I mentioned, this also came up in JHD’s point: the word precision has multiple meanings in different contexts, which is a little bit unfortunate. In this presentation, when I say the word precision, I’m referring to precision as needed in the context of `Intl.NumberFormat`, talking about it in terms of the number of trailing zeros. That’s different than significant digits, which represents precision in terms of how many digits of a number you are able to represent. So, trailing zeros versus the total number of digits that are able to be represented. + +MM: So in the Intl display format, if you’ve got two-thirds, and the display format is suggesting four digits of precision, how would the Intl number rendering render the two-thirds value?
+ +SFC: So currently `Intl.NumberFormat` has the ability to encode rounding options in the options bag, and that’s a utility— + +MM: I’m not that concerned about whether the last digit is six or seven. I’m concerned about how many sixes are displayed before the last digit. + +SFC: So it depends: `Intl.NumberFormat` allows you to configure whether you want to round to a number of fraction digits or a number of significant digits. If you choose four significant digits, that’s what I said earlier, which is 6667. + +MM: So does `Intl.NumberFormat` actually have any need for the display format that comes bundled with the IEEE definition of decimal128? + +SFC: Yeah, okay. I can definitely answer that question; I have a bit of a thread about this on GitHub. This idea of being able to fully decouple the display from the quantity being displayed helps us fix bugs in how we, for example, interoperate between PluralRules and NumberFormat; it allows us to more correctly express numbers to `Intl.NumberFormat`, and it allows us to potentially interoperate better with HTML input elements. As we’ve been working on these Intl APIs, the more we make them focused on how to internationalize the number, how to take the data and put it in a form that can be displayed, and the more we can decouple those two things, the more problems it tends to solve. That’s the idea of why having precision in the data model, as opposed to just in formatting options, is a desirable outcome. It would obviously remain available in formatting options, because it currently is, but it would be nice to be able to put it in the data model. + +MM: I’m sorry. I didn’t understand how you got from the first part of what you just said to the second part. + +SFC: Maybe NRO can give an example. He’s on the queue. + +NRO: I can give an example here. In Intl currently, when you want to, for example, display “1.0 stars”, you have two different Intl functions: one that gets the Number 1 and converts it to the string “1.0”, and another function that gets the Number 1 and gives you back the string “stars”. And you need to make sure to configure these two functions the same way, to tell both functions that the number will have one digit after the dot, so that they are coherent and they don’t give you the string “1” and the string “stars”, or the string “star” and the string “1.0”. Right now, given that these settings are not saved together with the number, you need to make sure to pass coherent settings to all the functions, while having this encoded in the number itself means you don’t risk accidentally getting the various functions out of sync. + +MM: So if the actual underlying number was 1.1111, and you’re rendering it in a context where you wanted to render it to one digit of precision, it would be rendered as “1”, and when it’s rendered as “1” it would still be singular. And rendering it as “1” is not a rendering that IEEE provides you, because the IEEE degree of freedom is only trailing zeros; it’s not the overall precision of display. So I just don’t find dynamically tracking trailing zeros, as the degree of freedom carried dynamically in the data, to be coherent. It doesn’t match any use case that I can imagine. + +NRO: Yes, I agree with you here. What is important for Intl, as presented, is to have the number together with a number of trailing zeros.
But it’s not really necessary for Intl to track this number of zeros across operations. You would usually want to just set the precision after you’re done with your computation. + +MM: But when do you care about the number of trailing zeros, as opposed to just the number of significant digits? + +SFC: I mean, I think the number of significant digits could be one way of representing the number. In many cases, that is the thing that Intl would need, but that can include trailing zeros. If you say, well, I want to render this number 1 with two significant digits, that’s something that can be encoded in the data model, and IEEE gives us a mechanism for encoding it in the data model. To finish my point: I think you’re discussing the first concern on this slide, which is that the way IEEE deals with precision across operations is kind of unexpected in certain situations. That’s not necessarily the problem that Intl needs solved. Intl just needs it in the data model; Intl doesn’t care how it’s propagated. + +MM: Intl doesn’t need trailing zeros. Intl needs the total number of digits, whether the digits omitted are zeros or not. So if I was in a context meant to show something to three significant digits, and the actual number was one, I would expect “1.00” to be displayed. The trailing zeros come from the display format at the point of display. It’s static; it’s not carried with the data. I still have not heard a use case where what’s dynamically carried with the data is only the number of trailing zeros rather than the number of digits to show. + +SFC: Yeah, I understand your point. But I want to make sure we get through the queue, and we’re pretty close to time. NRO, if I can jump ahead to you: can you make your last little comment? + +NRO: Yeah. I would also like to hear from JMN, but I was trying to encourage other people to give their opinions here. We have heard from a few people today, and these same people were already discussing all this a few weeks ago in other meetings. It would be great if the rest of the committee also expressed their opinions or feelings. + +SFC: And yeah, JMN, you said in the queue that you like 3, 2, and 6, in that order. Is there anything else you wanted to add to that, or elaborate on why? + +JMN: Yeah. I think 3 is the state of affairs today. 2 is what we had, I think, one or two iterations before that. 6 is interesting because it is a kind of path to a primitive today, but as WH said, there are some big concerns about that, with values getting extremely big very quickly. But maybe just a general point: why would I prefer these three things? It’s because, to my mind, they clearly separate the Measure idea from the decimal proposal, which I understand to be something focused on numbers. We can debate whether that’s mathematical values or things with some precision on them or not, but it’s still, at least as far as I understand it, somewhat separable from the Measure idea, which is a nice, I think, independently-motivated proposal. So that’s why I would list those things in that order. This is fantastic; thank you for organizing the presentation. + +### Speaker's Summary of Key Points + +SFC: The goal is to take a holistic approach to how we want numbers, precision, measures, and dimensions to interoperate, to give ECMAScript developers a cohesive, well-designed architecture. I went over several of the different problem spaces, as well as some of the different possible solutions.
We had some good discussion regarding, you know, what should be represented in the type system, some good discussion involving, you know, what is precision and the different ways to represent precision. And I think the—you know, next action items are for the sort of number-related champions to dive, to continue to sort of iterate on this and come up with a, you know, architecture that solves all of the problems in a clean and future-proof way. + +SFC: Does that sound about right, NRO, JMN, et cetera? + +NRO: Yeah. + +CDA: Okay. Thank you, SFC. + +## Measure Stage 1 update + +Presenter: Eemeli Aro (EAO) + +- [proposal](https://github.com/tc39-transfer/proposal-measure) +- [slides](https://docs.google.com/presentation/d/17ypyikW1q8RFf5AnnYKpe5dsdrHTb0BnSzZGaq0mm-I/edit?usp=sharing) + +EAO: This was supposed to be BAN presenting, but as he’s on medical leave I’ve stepped in. I needed to put the presentation together yesterday, so apologize for rough edges and so on. + +EAO: This is something like a continuation of the previous discussion, but looking at the—maybe not how to define a number part of this. Measure as a proposal is providing a way to separate the “what” and the “how” when we are formatting numbers. This statement is carrying a lot of weight. So in the “what” here we have, for example, a number and units; of meters, kilograms, or any other things that are being measured, US dollars could be one. And then separately, “how” are we formatting these things. I will get to why that’s an issue we would maybe want to address in the next slide. + +EAO: The Measure proposal is also talking about supporting mixed unit formatting, such as rather than formatting “3.5 feet”, providing a way of formatting that value as “3 feet, 6 inches”. And then, the third sort of basket of problems, shall we say, that we are looking to solve is providing unit conversion capability in ECMA262. + +EAO: To some extent, all of these are coming from desires and needs identified in other discussions and proposals, such as the Smart Units proposal, Decimal to some extent, and Intl.MessageFormat. Measure is one possible way of looking at the space of problems we have here that we would like to solve. + +EAO: A lot of what is going to continue from here is based around the proposed solution of adding Measure as a new primordial object and specifically, one that would be accepted by `Intl.NumberFormat` as a formattable value. + +EAO: That part is, in fact, the—the key of what makes this something that, I think, we ought to be defining in the spec. And that’s coming from the way that we do number formatting. Along with the other formatting operations in Intl, we have a two-phase process here. First, we have a constructor. And in the constructor, we set a bag of options that are defining how the constructor instance ought to be formatting. And then later on, once or multiple times, the formatted value is given in a format() method on this instance that we’ve created. + +EAO: So what this means is that as it’s currently set up, if we want to format currencies, for example, we need to create a separate `Intl.NumberFormat` instance for every currency that we would like to format in, even if the other aspects of how are we formatting currencies, or values with units, or values with precision, would otherwise stay the same. And this ends up mixing what we are formatting with the options of how are we looking to format this. 
And specifically, as alluded by SFC in the previous presentation, this becomes a problem if we consider for instance the `Intl.MessageFormat` proposal, where we have in the MessageFormat 2 specification, almost a requirement to support something like a currency or a unit as a concrete thing that can be formatted. The sample code here is showing how this could likely look, if `Intl.MesageFormat` advances in the spec. We have the pattern of a message, which includes a placeholder cost formatted as a currency, and then we have something like a Measure that we can pass in, as the cost, and that Measure, then, carries with it the currency or unit could work there as well, for, you know, when doing unit formatting. That could give us a value that can be passed through the message and formatted in a way that ensure that a translator does not “translate” the value, and localize it, which could change entirely the meaning of what is being formatted here. This is largely the problem we are looking to solve. + +EAO: The strawman proposal in a little bit more detail, allows for operations here, we can create a new measure. For example, we are starting from 180 centimetres. Then we are converting a unit here defined as foot-and-inch. And then this is what we allow to be passed to a NumberFormat instance that gives us output that says “5 ft, 11 in”, in this case. I am omitting some discussion about how exactly precision works. That is something we can consider, I think, separately. There’s a lot that I would not spend time on that topic because it’s a big topic that could swallow up the discussion completely. + +EAO: One further example of what we may consider to be in scope for Measure is this conversion to a locale where we could be defining, for example, a usage for the value. So here, in this example, we’re starting from the same starting point of having a measure of 180 centimetres, and then converting that to en-US, American person-height usage. And then, getting my height as a new measure instance. And this, then, effectively becoming foot-and-inch, which can, then, be formatted as previously, and we end up with “5 ft, 11 in”. + +EAO: As might be obvious here, a lot of this is a proposal that is to a large extent coming from an internationalization and ECMA-402 interest, why does this exist effectively? Because we do have an interest in 402 looking forward, in particular, for NumberFormatting for enabling something like “usage” to be accounted for, because it becomes very convenient to be able to format values and localize them in this way. + +EAO: But at the same time, we are very concerned about the same sort of issues that, for example, the Stable Formatting proposal considers, where if we were to introduce any capability of having an input like 180 centimetres and having output coming out of that is “5 ft, 11 in”, we end up in a situation where JavaScript developers will absolutely figure out a way of getting a “5 ft, 11 in”, even if that is only available through a complicated sequence of formatting to parts and parsing the output from there. So we are looking to ensure, in part, that this sort of capability is provided without needing to do convoluted work and abusing Intl, in order to get at the final result. + +EAO: At the last meeting, BAN represented some of the aspects of this, as well, of how we would allow for a—the `myHeight` instance here, for example, to be able to output the “5 ft, 11 in” values that would be also used for the formatting, for instance. 
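+
+A rough sketch of the strawman as presented; the `Measure` constructor shape, method names, and unit identifiers below are illustrative, not settled API:
+
+```js
+// Strawman only: separate the "what" (value + unit) from the "how".
+const height = new Measure(180, "centimeter");
+
+// Mixed-unit conversion, then formatting:
+const ftIn = height.convert("foot-and-inch");
+new Intl.NumberFormat("en-US").format(ftIn); // "5 ft, 11 in"
+
+// Or locale- and usage-driven conversion, as in the second example:
+const myHeight = height.toLocale("en-US", { usage: "person-height" });
+new Intl.NumberFormat("en-US").format(myHeight); // "5 ft, 11 in"
+```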
+
+EAO: It’s maybe relevant also here to note that there’s a whole bunch of things that this proposal is not about. It’s not about unit formatting, because this is already a thing that we can do with `Intl.NumberFormat`. It’s already supported for an explicit list of units that we say must be supported, and you can’t go beyond that.
+
+EAO: And furthermore, it’s not even about localized unit formatting, because that is already a thing. This is formatting in Finnish the feet unit; note in particular that this is already handling some amount of pluralization—“1 jalka”, “3,5 jalkaa”—where the units are accounting for the value being formatted there. And then this is also not about formatting numbers with an arbitrary count of digits, because we have that too, as the input given to NumberFormat gets converted internally to an Intl mathematical value that, if I remember right, has effectively arbitrary precision. Furthermore, even though we talk about currencies, we are not talking about or even considering allowing for currency conversion to happen within Measure. And we’re not talking, within at least the scope of the Measure proposal, of considering Measure as a primitive or otherwise allowing for operator overloading with it.
+
+EAO: But then we do have some things—this is the part of the proposal where I would be interested in input and comments from TG1. One aspect is that this proposal can be done with a very, very minimal amount of added data payload, because we already have these units, and we don’t necessarily need to go beyond them. But we could. There are a bunch of units that it might be interesting to have formatting supported for, or to have conversions supported for, but these would then carry additional data requirements. Should we or should we not do that? Whether that would be interesting to do, or whether there is a hard line, would be very interesting to hear.
+
+EAO: Then, also, there are the conversions that account for the locale- and value-specific usage preferences. That’s the second example I showed. It would be very interesting to hear whether this should be considered as a part of the initial proposal, or as a possible later addition. And these are conversions like I mentioned earlier, about converting a height to a person-height, for other locales. And it’s important to note that the conversion also needs to account for the value of the number that we’re formatting. For example, if I remember right, the CLDR data commonly used for this says that if a person’s age is less than 2 ½ years, then you end up including months in the output, but over 2 ½ years, it’s only years that are being output. So the usage depends on the value and the locale.
+
+EAO: And the data for this is very small. Like, compressed, if you look at the CLDR data, we are talking maybe 2, 3, 4 kilobytes for this sort of capability. This is not a lot that is being asked for, potentially.
+
+EAO: Also under consideration is whether Measure should support addition, multiplication, division, and other operators on the value. Given that we already consider and do want to support conversion to some extent, should we allow for operations that potentially would even transform the base unit of what is being worked on?
+
+EAO: So a lot of this is driven by this one big question, which I would appreciate input on: should we really care about anything beyond specifically formatting and conversion? Those are the requirements that this proposal at a minimum needs. But whether we should go beyond them is something that could be done, but it doesn’t need to be done. And knowing whether Measure ought to go beyond is going to drive quite a bit of the considerations for how we structure it, and for whether we allow for something like a usage parameter or not, and how it interacts with the other parts. So this is where I would be very interested to hear if there’s anything in the queue, or other comments or criticisms to address here.
+
+CDA: WH?
+
+WH: So … the answer to the question you have posed all depends on handling of precision, which you didn’t cover in the presentation. Because I think that’s the long pole in the tent here. Treatment of precision becomes important for doing arithmetic. And treatment of precision also becomes important when doing conversions. So do you want to do the precision-handling work in one place, or do you want to do it in two places and have them potentially get out of sync?
+
+EAO: I would say the precision question depends on this question that’s on the slide currently. Because if we only care about formatting and conversion, we can consider precision only from those points of view. However, if we also want to support, for example, operations on the value explicitly as a part of Measure, then precision, as you mentioned, needs to be accounted for more widely. This is why I am asking this question, because it needs to be answered first, before we get into the depths of how we handle precision.
+
+WH: You skipped over the precision part of the presentation. I can’t give you the answer until you present that.
+
+EAO: What I mean is that we do not have a ready answer for how exactly precision ought to work, because we can define it in multiple ways, and I think this in particular is a fundamental question that ought to be answered first, before we figure out: okay, given these are the use cases and needs that we are trying to address, what do we do about precision here?
+
+WH: Well, that’s the opposite of what my point is. We need to understand what’s involved in handling of precision here. And it’s hard to answer this question without a good understanding of the precision aspects of conversion.
+
+EAO: Okay.
+
+WH: What I am asking for is either a presentation or some kind of discussion of what the considerations are for dealing with precision. And that would be helpful to decide whether we should care only about conversion and reinvent the wheel for doing arithmetic, or whether it’s better to consider them both at the same time.
+
+EAO: That does seem like a topic for consideration later.
+
+MM: Yeah. My question is related, I suppose: given that Measure includes some notion of precision, even without pinning down what it is—but given that, you know, the current IEEE floating point numbers and the current BigInts don’t carry a distinct notion of precision, they just identify a point on the number line, and given that the number field of a Measure would also be able to carry regular IEEE floating point numbers and BigInts and add some notion of precision in this Measure wrapper—SFC had raised the idea of somehow combining the trailing zeros that are being dynamically carried by a decimal number, using that in the Measure context as the precision of the Measure. And that confuses me on two grounds—so this question is sort of across both presentations taken together, so I consider it a question for both of you. This confuses me because, on one hand, Measure would already need to carry its own precision in order to deal with floating point numbers and BigInts, so whatever theory of precision it carries would seem to be one it could also apply to decimals. And for the theory of precision that you might think to carry in Measure: is there any use case for which the theory of precision you would consider would be one that’s only tracking trailing zeros, as opposed to tracking trailing digits?
+
+EAO: So I would say that if we consider precision as a utility primarily for the formatting of a Measure, for instance, and also for directing what might happen during conversion, then it becomes sufficient, for instance, for the precision to be retained within a Measure instance as an integer number of fraction digits of the value that is then being formatted. And we could theoretically, with this sort of approach, even require precision to be included as a parameter when conversion is happening, so that we are completely externalizing what happens to precision when converting, say, from centimeters to inches, or doing other operations like this. Does this possibly answer your question?
+
+MM: I think so. Let me restate and see if you agree with my restatement: that there is no anticipated use case for which the notion of precision that Measure would carry dynamically would be trailing zeros; the closest is trailing digits. Two-thirds rendered with three trailing digits is 0.6667 or something. And, therefore, there is no theory of precision that Measure would want for which, if the number is a decimal, it could just delegate that notion of precision to the dynamic precision information that decimal numbers carry.
+
+EAO: Probably yes. Because we will absolutely need to support Numbers, and Numbers do not carry their own precision, so the precision will need to be somewhere—or the Number will need to be converted into a Decimal, and converting the Number into a Decimal only for it later to be converted into an Intl mathematical value seems a bit too convoluted.
+
+MM: There are two grounds: one is, as you said, that the precision has to be in the Measure because it applies to Numbers and BigInts. And the second ground, which the second part of my question focused on: none of the theories of precision that one would think to build into Measure is something that keeps track of only trailing zeros, rather than trailing digits.
+
+EAO: I would agree with that.
+
+MM: Okay. Thank you.
+
+CDA: SFC?
+
+SFC: Yeah. I think I have the next two items on the queue. First, about the precision: trailing zeros versus trailing digits. I don’t necessarily understand why those two concepts are distinct, because, for example, let’s say you have 2.500, which is also 2.5 with 4 significant digits. Those are two different—the only difference is how you represent it in the data model. But the data model is able to represent both as the same concept. Right? The concept of this number 2.500. Both are able to do it.
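+
+For illustration: with today’s options bag, both framings SFC mentions pick out the same rendered quantity:
+
+```js
+// "2.500" as 2.5 plus trailing zeros (three fraction digits)...
+new Intl.NumberFormat("en", { minimumFractionDigits: 3 }).format(2.5);
+// -> "2.500"
+
+// ...or as 2.5 shown with 4 significant digits.
+new Intl.NumberFormat("en", { minimumSignificantDigits: 4 }).format(2.5);
+// -> "2.500"
+```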
+
+MM: Yeah. And so I agree with that. And I agree that you can get there by saying either two trailing zeros or three significant digits. There are several different ways to do it. But none of them—trailing digits versus total significant digits—none of the coherent choices you could make, nothing you would lift into Measure or use as a substitute for the precision carried by Measure, would be number of trailing zeros, rather than trailing digits or total digits.
+
+SFC: I still don’t understand, because number of trailing zeros is also a coherent model, as is the number of fraction digits or the number of significant digits.
+
+MM: Give me a use case for which number of trailing zeros, as opposed to number of trailing digits, is useful.
+
+SFC: They represent the same thing in the model.
+
+MM: I didn’t understand that.
+
+CDA: I want to interject because we only have a couple of minutes left.
+
+MM: I think we can probably further investigate this offline.
+
+SFC: My initial reaction, MM: as far as I can tell, as I said, the thing we want to represent is 2.500—and at the end of the day, being able to represent the quantity is what we care about.
+
+MM: In the context in which you want to represent 2.500 but the underlying number is two-thirds, you want to represent all the 6s you can.
+
+SFC: I think I see. I mean, we wouldn’t represent two-thirds, because two-thirds is neither a decimal nor a binary floating point value.
+
+MM: I think that misses the point.
+
+CDA: We do need to move on. SFC, do you want to briefly, very briefly, touch on your last topic?
+
+SFC: I think a lot of the questions that EAO is asking have to do with the scope question that was the topic of my discussion. So I feel like we should continue to have these discussions and decide what the scope is going to be; that will drive a lot of these decisions and answer a lot of the questions from EAO’s presentation.
+
+CDA: All right. EAO, would you like to dictate key points/summary for the notes?
+
+### Speaker's Summary of Key Points
+
+EAO: The rationale and use cases for the Measure proposal were presented, along with a strawman solution. Some of the extent of the scope of the proposal was also presented, along with some of the other open questions about the extent of said scope. No clear opinions were expressed by the committee on the questions presented, but a further discussion on the representation and handling of precision, in particular, was requested.
+
+## Continuation: Error Stacks Structure for Stage 2
+
+Presenter: Jordan Harband (JHD)
+
+- [proposal](https://github.com/tc39/proposal-error-stacks)
+
+JHD: Okay. All right. So I don’t remember where we were at the end. I think DLM’s comment was the last one.
+
+JHD: So just—my understanding of the pushback from Mozilla, in particular MAG and DLM, I believe, is that this seems like too much—too big, not well motivated as a big proposal—and maybe we could split it up. I think that in general, that is a good principle to apply—like, a good way to interrogate proposals. This proposal contains three separable pieces, I guess. One is the normative optional accessor, which, like, we could ship that and say great. That accessor is great. It produces a host-defined string. Cool. The problem that solves is the one that isn’t actually very convincing anymore—which is, great, we have specified it. But, like, I guess it prevents someone from having their own property. It’s not no value, but it’s not a lot of value for a whole proposal. That’s almost more like a needs-consensus PR.
+
+JHD: And then the next piece would be the `System.getStackString` method, wherever it lives. And the benefit there is that—with the combination of those, the first one and that one—now the stack string can be retrieved in a way that is compatible with the desires of hardened JavaScript. There’s a brand check included in the method. That could be done even in an environment where the stack accessor is not available. Then it can be denied in a way that is compatible with the needs of hardened JavaScript, so there is some value to be had there as well. But the—typically, the desires of hardened JavaScript have been enough to motivate design changes, but, like, I also haven’t seen a lot of enthusiasm from the committee as a whole for building things just for that purpose. I am not trying to say we shouldn’t, but, you know, I am just concerned that perhaps that wouldn’t be seen as enough value to be a proposal. And then the third piece is the bulk of this proposal, which is the `getStack` static method, which gives you the structured data. This is the one that developers want. Nobody wants to work with a string. And that’s where I think the majority of the value comes from. But that isn’t very useful unless it is tied together with the contents of the string, so that you can be confident they represent each other in some way. So I don’t think that the structured data can happen in the absence of at least specifying the contents—the structure of the string—in the way that this proposal does for the accessor. I suppose we could omit `getStackString`, but, like, I don’t think that’s going to be—if you are already building the structured metadata, and you are already ensuring that it complies with the structure and schema, and shipping the accessor, I would be surprised if someone thought it was a lot of extra work to add the static method that’s basically doing the same thing the accessor is doing. I can separate it, but that feels like a bunch of overhead and process that won’t add any value and won’t result in a different outcome, assuming all three eventually make it.
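+
+A sketch of the three pieces, using the names from the proposal README (`System.getStack` / `System.getStackString`); where these live, and the frame schema itself, are exactly the open questions:
+
+```js
+const err = new Error("boom");
+
+// 1. Accessor on Error.prototype producing a host-defined string.
+err.stack; // "Error: boom\n    at f (file.js:1:1) ..." (host-defined)
+
+// 2. Static method with a brand check; retrievable (or deniable)
+//    even where the accessor has been removed.
+System.getStackString(err); // the same host-defined string
+
+// 3. Structured data instead of a string: the schema would be
+//    specified, the contents of each frame would stay host-defined.
+System.getStack(err); // e.g. { frames: [ ... ] }
+```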
+
+JHD: So I would love to hear some more evaluation about the value of splitting them up, and where the difficulty lies around, like, implementing this and so on. So let’s go to the queue.
+
+DLM: Yeah. My topic is not addressing what you asked about—I don’t know if you want to follow up on that later. Basically, after the conversation the other day, I went back to the meeting notes from the last time this was brought to committee, in 2019, and that helped me clarify my thinking a little bit about my concerns. So basically, at that time, Adam and Domenic expressed concerns about exposing, like, the structure in order to get access to frames, without standardizing the contents of the frames. I believe that would start exposing a bunch of things that are kind of non-interoperable between the different engines. And the other thing that really stuck out was that on the SpiderMonkey team in 2019, we had already tried to align our stack traces with V8 and found it wasn’t possible. We were breaking internal code and extensions, and breaking code on the web. So to tie those together: unless we can standardize not just a schema but the actual contents, this is going to introduce more interoperability trouble and cause more problems than it’s going to solve. The concerns raised the last time this came to committee are still valid. I share them, and, like, I don’t think there has really been any change since then. I am not hearing any evidence that, you know, anything around those concerns has changed in the intervening time.
+
+JHD: I don’t think it’s necessarily clear that it’s a valuable or desirable goal to make the stack trace contents actually be the same across browsers. Like, it seems nice in theory, but I don’t know if it makes much of a difference. Anything working with stacks is already doing some stuff to work around the differences across browsers. So I—I don’t understand—like, I am not convinced—one of the concerns you stated, which was stated back then as well, is that it would expose information that would entrench, like, interoperability differences or create compatibility problems down the road. The people already doing this stuff—Sentry and so on—have already built that, and they are working with it already. So making their job easier by encoding some of this stuff in the standard doesn’t strike me as something that makes compatibility problems worse. It would prevent engines from deviating further in some ways and not in others, which seems like it reduces compatibility problems.
+
+DLM: So I think, you know, what this will do is actually make it easier for people to start inspecting stack frames. This is actually going to increase the usage of this kind of code, which means we expose these differences to a broader audience. Like, a few specialized people are doing this and working around it; that doesn’t convince me it’s a good idea to expose this to everyone on the web.
+
+JHD: Okay. So I understand your position better. Thank you.
+
+DLM: Thank you. And I sympathize; I understand why people would want this. It’s not like I think it’s a bad idea in itself. It’s just that I am completely unconvinced that, without standardizing the contents, exposing this more easily is going to make the world better for anyone.
+
+SYG: I agree with Mozilla’s concerns here. To put it another way, on how we think this does not help the interop story: we have one point of non-interop today, the whole of the stack machinery—you have to wholesale do browser sniffing and decide what to do. It’s unlikely we can unship that. It’s beyond unlikely; we can’t just unship that. If we standardize a new thing, what happens is, there are two concerns. One is a footgun concern: it looks like it’s interoperable, but it’s not—the contents are not; we got into that last time. You have to do the browser sniffing and deal with the contents. And the net increase now: another point of non-interop. We expose the stacks in the existing non-standard stack machinery that we will have, and now there’s going to be a new thing that we will also have to maintain forever that is not interoperable and unlikely to ever be. A net increase in the non-interop surface—I am not interested in that.
+
+JHD: Just to clarify: your concerns here, and DLM’s, are those primarily about the structured form? Like, if I did the three pieces I discussed—the first two don’t deal with the structure—do those same concerns apply to the first two? Number 1 was the normative optional accessor, which is what you already have in theory; number 2 is the static method that gets you the string; and 3 is the structure. The concerns we just talked about, from you and DLM, are about the structure part and not about the other two?
+
+SYG: That’s right—my concern was about the structure part. But I don’t see the value in the first two.
+
+JHD: Got it. Okay. So those concerns don’t apply; it’s that you don’t see the value. Just clarifying. Thank you.
+
+MM: Yeah. So given what SYG just said, I am going to combine this with the other thing that I put on the queue, because they both address the degree of interop concerns—the first being more ambitious, and the second less ambitious. The first one, to be more ambitious: a possible compromise that’s still below trying to fully specify the stack, which I don’t think we will ever get the engines to agree on, especially since one of the engines does something like tail call optimization and the others don’t—I can’t imagine that’s going to be surmountable in terms of what stack traces are produced. The ambitious compromise would be that any stack frame might be omitted, but any stack frame that is present reflects reality. So that, once again, an empty stack would still be conformant, but a stack that simply claims that there’s some function on the call stack that has nothing to do with any valid interpretation of the actual call stack could be considered non-conformant. So that would be very ambitious. I am not hopeful we can get agreement on that. I am offering it in response to the idea that the structured stack trace is only something that might be agreeable if we go beyond –
+
+SYG: I’m sorry. Could you repeat the last—like, 45 seconds? There was an earthquake and I zoned out.
+
+MM: Sure… glad you’re still there. There’s been concern that just standardizing the schema without standardizing the contents would be not very useful. I think it would still be useful. But I am offering the ambitious compromise as one of the two compromises I am suggesting today. The ambitious compromise is that we go beyond just the schema to say that any frame might be omitted, but any frame that is represented must be truthful, must be accurate. So, for example, you can’t produce a structured stack trace that claims that there’s a function on the call stack that by no semantic interpretation of the call stack is actually on the call stack. So that would, I think, be something more than schema that would be useful, and potentially in the realm of getting engines to agree on. But let me just stipulate that I find it unlikely that we would actually get engines to agree even on that, because of lots of internal ways they might be optimizing code or stacks or whatever. And that’s the part that covers everything you might have missed. Now, new material: the less ambitious compromise I am going to suggest is Jordan’s number 1. I agree with Jordan’s statement of the value of each of his three breakdowns, except that I want to say that just number 1, by itself, would be hugely useful to us. Number 1 by itself is just the normative optional accessor—and it doesn’t even need to be normative optional, since it would be conformant for it to return the empty string. If you want to censor it, we provide a substitute accessor that returns the empty string, which is conformant without resting on the normative option. The thing about standardizing the accessor as the source of the stack property is that it would address what is currently a very painful, very difficult situation for us. Mozilla—SpiderMonkey—already conforms to the accessor: where the stack property is located, it’s an inherited accessor. And Moddable’s XS conforms to it as well. Our shims basically try, as much as possible, to turn JavaScript platforms into one in which the stack property is that accessor. The two pain points for us: in JSC—Safari—there’s a stack data property on error instances that is produced on error instances before we can intervene. We don’t have any hook to intervene, and, therefore, we have no hook to be able to censor information about the call stack—the revelation of, you know, the spooky action at a distance of seeing what should be encapsulated information in the call stack, from code above that. We do not have a way to censor that on JSC. And the much more telling mistake that V8 made—a mistake from our point of view; we had a long discussion about this on GitHub threads, public and private, with SYG—the end result of those is that V8 recently, without realizing the damage it would cause, added an own accessor property to error instances, where all of the own accessors have the same get—sorry: that’s probably the same…
+
+SYG: Yeah. It’s a tsunami from the earthquake.
+
+MM: Sorry about that. And I am very impressed you’re still there. There is in V8 now an own accessor property on error instances where the getters and the setters are the same functions, and, therefore, the per-error-instance information they must be accessing is hidden internal state. So it would have been—and this was agreed to on the thread—it would have been, and would still be, easy for V8 to change that to be an inherited accessor; it’s simply the case that right now there’s no basis for motivating V8 to make the change.
+
+MM: If it was an inherited accessor across all engines, then it would give us one way—without virtualizing it—to censor the visibility of the stack; and then the issue of virtualizing it, in the absence of the other parts of this proposal, would still, perhaps, involve a lot of sniffing and platform-specific stuff. The major need is the censoring. Because right now on V8, they have created not just an unpluggable communications channel for data: the accessor properties allow for the communication of object references through the hidden internal state, because the setter is honoured and it does not require the argument to be a string. So that’s a capability leak that we cannot plug, because of this set of decisions that V8 made. And it would be easy for V8 to change to this common behavior if we could agree to that. So if part 1 of this is something the committee could agree to, I would be very happy to separate it out and try to push that through to consensus, and let the remainder remain in a distinct proposal.
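+
+A minimal sketch of the censoring MM describes, assuming the stack is a configurable inherited accessor on `Error.prototype` (as in SpiderMonkey today); per-instance own accessors, as in V8, are exactly what defeat this:
+
+```js
+// Lockdown-style censoring: replace the inherited accessor so neither
+// stack contents nor an object-carrying setter channel is exposed.
+Object.defineProperty(Error.prototype, "stack", {
+  get() { return ""; }, // still returns a string, so still conformant
+  set(_value) {},       // swallow writes rather than leak a channel
+  configurable: true,
+});
+
+new Error("x").stack; // "" wherever the inherited accessor is used
+```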
+
+CDA: Noting we have less than 10 minutes for this topic.
+
+DLM: I have two quick replies to what MM said. First of all, I wanted to clarify our position about a schema without specifying the contents: we are not saying it’s not useful. We are saying it’s harmful, because we’re concerned about interop problems. On the other one: we would be happy to see some specification of the accessor, because this is causing web compat problems for us.
+
+SYG: So to MM—it sounds like you would specifically like V8 to change our existing non-standard API, which we have discussed. I would like to point out that this is not a direct outcome of standardizing a new thing. Like, if you standardize this thing, this stack getter, a very likely outcome is that we have that *and* our thing. It’s not that you standardize a thing that kind of sort of overlaps with a non-standard thing and we unship ours. These are independent outcomes.
+
+MM: My understanding from the threads on GitHub that you and I engaged in, both public and private, is that if there was an accessor property on `Error.prototype` that was inherited by error instances, there would be no reason for new Error instances that were created and thrown by the engine to carry own stack accessor properties that simply have the same getter and setter on them, because the ones they would inherit would access the same internal state.
+
+SYG: That’s correct. But the outcome—like, to get to that place, the investigation needed is, like, what is the risk of doing that? It’s not just standard versus non-standard. It is independent of whether it is a standard thing.
+
+MM: Certainly, any change to, you know, a browser-specific API in order to conform with a cross-browser agreement is a danger to that browser and the users of that browser. And, yes, I will acknowledge that; and, yes, for this to make it to Stage 3 would certainly require, you know, buying in to at least do the experiment and see if there’s any interop risk. Note, by the way, the security problem that we’re concerned with: what we need here only has to do with the pre-endowment of the stack own accessor on platform-generated errors. It has nothing to do with whether capturing a stack trace stamps stack own properties onto errors and non-errors, because we can censor `captureStackTrace`. It’s only the pre-endowed accessor.
+
+JHD: To clarify: in general, correct—standardizing a thing cannot force an engine to unship a non-standard thing. And the rubric is based on many things, but on breakage, and not simply on the fact of being standard or not. In this specific case, it’s likely that if we shipped it on `Error.prototype`, V8 would do it, but that’s not a guarantee. Is that accurate?
+
+MM: That’s correct, and that kind of investigation is appropriate to happen, you know, at least during Stage 2, if not later. It’s implementer feedback. It’s one that might involve the same kind of counters that you have done for the fixed versus non-extensible question. You know, it’s an investigation to see what the –
+
+SYG: Let me be frank. We haven’t done this investigation because we don’t think it’s high priority. And you don’t get to force that high priority by making it a proposal.
+
+MM: Okay. I understand that. Would there be an objection to this part, sectioned off from the error stacks proposal, proceeding through the early stages of the process, so that we can continue this discussion and possibly cajole V8 into trying the experiment?
+
+SYG: Are you asking whether I object to this part being split off to continue the discussion?
+
+MM: Yes.
+
+SYG: I do not object to it being split off.
+
+JHD: Okay. So, just to summarize what I have heard, so I can update the proposal with the current status: there remain concerns about any form of standardizing the schema that does not account for the contents—whether it standardizes them is not the issue, but it has to account for those issues—and Mozilla and V8, at least, consider that it would be harmful. Even though a lot of other folks think it would be useful, that’s the constraint there. There is intrinsic value, it seems, in shipping the stack accessor by itself, where the only requirement is that it return a string. So what I think—and I will talk it over with MM—but what I am suggesting happen is: I rename the current proposal to be, like, about the structure, and then I make a new proposal that is just for the stack accessor and try to advance that, and figure out what to do with the structure separately. Does that seem like a viable plan for now? Or does anyone have a reason why that’s not a viable plan for now?
+
+JHD: Feel free to reach out outside of plenary. I just wanted to take the opportunity to get it in the notes, if anybody has a reaction.
+
+MM: Obviously, I support that plan, and I would volunteer to be a co-champion on both.
+
+JHD: Okay. Well, then, I will plan to come back at a future meeting to request Stage 1 or beyond for the accessor, and I will update the README of the current proposal to indicate what those concerns are and how we might need to address them, and proceed from there.
+
+## Continuation: import defer updates
+
+Presenter: Nicolò Ribaudo (NRO)
+
+- [proposal](https://github.com/tc39/proposal-defer-import-eval/)
+- [slides](https://docs.google.com/presentation/d/1yFbqn6px5rIwAVjBbXgrYgql1L90tKPTWZq2A5D6f5Q/)
+
+NRO: Okay. Yeah. Hello, everybody. We are continuing the discussion started on Tuesday about import defer. On Tuesday we had different proposed changes, and there’s one we didn’t conclude on specifically. Just as a recap: what the proposal currently does is that, well, there are some evaluation triggers when touching the modules. Whenever you perform a get operation on the module—except for symbols—it will trigger evaluation. This means that operations like `'foo' in namespace` do not trigger evaluation, because they don’t go through the Get internal method of the deferred namespace object. Operations like `Object.keys` trigger evaluation, specifically because `Object.keys` calls Get when there is some key. And operations like `Object.getOwnPropertyNames`—well, I guess `Object.getOwnPropertyNames` does not trigger evaluation, because it doesn’t trigger Get. There are other ways to get at objects; there are a bunch of internal object methods.
+
+NRO: The proposed change is to align all of these things and to make all of them always trigger evaluation, so that the rule would become: when you try to get some information about the exports of the module, you are triggering evaluation. There are some arguments in favour of and against the change. The argument in favour is that this change would simplify what tools have to implement, making it possible for tools to implement the semantics of the proposal. And the reason I am expressing it this way is because, outside of native browser environments, a lot of the time ESM gets transpiled or bundled before running in the browser. If one day we have the module declarations proposal, bundlers would emit—would use ESM as implemented by the browser. The argument against this change is that it removes some abilities we are giving to JavaScript users right now with the proposal: that is, to list the exports of a module without triggering evaluation. This change is entirely driven by the needs of tools, and not by any spec constraint or any constraint coming from JavaScript engines.
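+
+A sketch of the trigger rules being discussed, using the proposal’s syntax:
+
+```js
+import defer * as ns from "./mod.js";
+
+"foo" in ns;                    // current spec text: no evaluation (no Get)
+Object.getOwnPropertyNames(ns); // current spec text: no evaluation (no Get)
+Object.keys(ns);                // evaluates: it reaches Get for each key
+ns.foo;                         // evaluates: reads an export’s value
+
+// Proposed change: all of the above trigger evaluation, because each
+// one asks for information about the module’s exports.
+```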
+
+NRO: And the counterpoint to that argument is that, well, we can still introduce a way to get the list of exports of a module—in a way that tools would probably have needed to implement somewhat differently anyway. It was part of the ESM phase imports proposal, where we have the static import capabilities. It’s now been split out and deferred until we continue with the other virtualization proposals, but it could still come in the future.
+
+NRO: So we ended up with discussions last time, and arguments, and at the time I asked for a temperature check. So if anybody has further thoughts, other than what the four people expressed, you are welcome to get in the queue. Otherwise, I would ask CDA to prepare the poll with this question: how do you feel about this change? Specifically, about changing the evaluation trigger to be whenever you are querying about the exports of a module—so just asking for the list of exports, or checking whether an export exists. My personal preference is to do this change, but let’s have the poll.
+
+CDA: All right. Nothing on—MM supports. Nothing else on the queue. So, for the temperature check: in order to participate, you need to have TCQ open before we bring up the interface. Once it’s up, if you join after, you will not see it. So if you don’t have TCQ open, please open it up. I will give you 10 or 15 seconds, or shout out if you need more time to open it up. Otherwise… All right. We will bring up the temp check.
+
+NRO: Okay. So I think some people are actually missing, because I know at least GB would have voted unconvinced—but considering that, I think these results are giving me a direction. Is GB on the call?
+
+AKI: Point of order: do you have to have the TCQ window active in addition to having it open? Because I think that my tab was in the background and the temperature check never showed up.
+
+CDA: Yeah, it depends on your browser. If your tab was inactive for long enough and the browser does any form of, like, memory optimization, then that would have prevented it from coming up. Do you want to see the results?
+
+CDA: 3 strong positive. 9 positive. 3 following. 1 indifferent. And everything else is zeros.
+
+NRO: Okay. So I would like official consensus for this change. Given that GB is not here, I want to read a message that GB sent to me: “I want to be sure and clear about decisions made; as long as we are clear in making these tradeoffs, the committee can decide to make them, but let’s have a discussion openly.” The previous slide about the tradeoffs was reviewed by GB, so I am just going to assume that GB would have been fine with the conclusion, given the temperature poll, and ask: does anybody object to making this change?
+
+CDA: Nothing on the queue.
+
+NRO: Okay. Thank you. Then, we have consensus.
+
+### Speaker's Summary of Key Points
+
+NRO: The summary for the notes, including the discussion from Tuesday, is that we presented four changes to the proposal. The first one presented, the same one we concluded on today, was about changing when evaluation of the deferred module happens: it now happens not only when we read the value of the exports, but also when we read the list of exports of the module. This change got consensus. The second change was in response to a problem with the dynamic form of import defer and the behavior of promises, where reading `then` would trigger execution. The change was to make sure that deferred module namespaces never have a `then` property, regardless of what the module exports or not, so that awaiting them does not read the contents of the module. That change also got consensus. There was a third change, about changing the value of the `Symbol.toStringTag` property of deferred module namespaces from “Module” to “Deferred Module”, and that change also got consensus. There was a fourth change, adding a symbol-keyed evaluate property to deferred module namespaces, to control whether reading properties from them would trigger execution or not. Given the feedback—it seemed generally supportive of the idea, but not of the shape—and especially given that the stabilize proposal is in a very similar area, I did not ask for consensus on this change. The first three changes are in and the fourth one is not. And I think this is it.
+
+## Adjournment
+
+CDA: With that, that is the end of this meeting! Thanks to everyone, and a big special thanks to everyone who helped with the notes.
+
+AKI: Don’t forget: if you want a hat for your contributions to note-taking, you need to make sure to contact me, so I know to make it.
+
+MM: I need reviewers for Immutable ArrayBuffers, which got to Stage 2. SYG and WM, I think that you had, privately or in a previous structs meeting, expressed interest in being a reviewer?
+
+SYG: I will confirm: I will review.
+
+JHD: I am happy to also review it.
+
+WM: Yes.
+
+MM: Excellent. So I have got three reviewers. Thank you very much.
+
+CDA: Great. We did get reviewers for upsert/map-emplace. DLM?
+
+DLM: That’s correct.
+
+CDA: Okay. I just got paranoid about any other ones we missed. Okay. Great. Thanks, everyone.
diff --git a/meetings/2025-02/february-18.md b/meetings/2025-02/february-18.md
new file mode 100644
index 00000000..ccbb5aeb
--- /dev/null
+++ b/meetings/2025-02/february-18.md
@@ -0,0 +1,1554 @@
+# 106th TC39 Meeting | 18 February 2025
+
+**Attendees:**
+
+| Name | Abbreviation | Organization |
+|------------------|--------------|--------------------|
+| Kevin Gibbons | KG | F5 |
+| Keith Miller | KM | Apple Inc |
+| Oliver Medhurst | OMT | Invited Expert |
+| Dmitry Makhnev | DJM | JetBrains |
+| Gus Caplan | GCL | Deno Land Inc |
+| Daniel Ehrenberg | DE | Bloomberg |
+| Jesse Alama | JMN | Igalia |
+| Michael Saboff | MLS | Apple Inc |
+| Ujjwal Sharma | USA | Igalia |
+| Ashley Claymore | ACE | Bloomberg |
+| Nicolò Ribaudo | NRO | Igalia |
+| Philip Chimento | PFC | Igalia |
+| Michael Ficarra | MF | F5 |
+| Linus Groh | LGH | Bloomberg |
+| Samina Husain | SHN | Ecma |
+| Ron Buckton | RBN | Microsoft |
+| Kris Kowal | KKL | Agoric |
+| Mikhail Barash | MBH | Univ. of Bergen |
+| Daniel Minor | DLM | Mozilla |
+| Aki Rose Braun | AKI | Ecma International |
+| Luis Pardo | LFP | Microsoft |
+| Chip Morningstar | CM | Consensys |
+| Eemeli Aro | EAO | Mozilla |
+| Ben Lickly | BLY | Google |
+| Mathieu Hofman | MAH | Agoric |
+| Sergey Rubanov | SRV | Invited Expert |
+| Chris de Almeida | CDA | IBM |
+| Luca Casonato | LCA | Deno |
+| Istvan Sebestyen | IS | Ecma International |
+| Waldemar Horwat | WH | Invited Expert |
+| Richard Gibson | RGN | Agoric |
+| Shane F Carr | SFC | Google |
+| Erik Marks | REK | Consensys |
+| Justin Grant | JGT | Invited Expert |
+
+## Opening & Welcome
+
+Presenter: Rob Palmer (RPR)
+
+RPR: Thanks everyone for coming to Seattle. We’re ready to begin the 106th meeting of TC39.
+
+## Secretariat comments
+
+Presenter: Samina Husain (SHN)
+
+* [slides](https://github.com/tc39/agendas/blob/main/2025/tc39-2025-005.pdf)
+
+SHN: I was not able to attend your in-person meeting in Tokyo. It’s nice to see everybody again. A lot has happened within Ecma in the last couple of months. So let me just give you a short report.
This is the overview of what we have done within the secretariat and different activities and just going to highlight some things on source map and new TCs we have and just a reminder on the executive committee meeting date and deadlines that you should all be aware of and then some general collaborations that we continue to have. + +SHN: So first congratulations for the first addition of source maps. Yes, there you are NRO. Congratulations. Good work. Really hard work. I mean, it was brilliant that you did this in the year. So we’re looking forward to the next edition. It is published and was approved at the December GA and want to bring that to everybody’s attention if you were not already aware. + +RPR: Round of applause. + +SHN: Also want to highlight TC55, a lot of work has gone into that. I want to thank everybody for that work. The chairs are Luca and Andreu, and Aki is supporting you and I think you will have the official meetings. I know there’s some things that need to be taken care of administratively from Ecma perspective. This is brilliant. Last year 2024 was the year that we had two new TCs and this is one of them. We’re very happy for that. That was approved at the December GA. If there are individuals that want to be involved and organizations that would be great. + +SHN: The other TC that is TC56 is the first that touches on artificial intelligence, I think this is very good for ECMA to come into a new space. The members are involved here not only IBM but ServiceNow has shown interest and Microsoft shown interest and Hitachi shown interest that are current members and university that are active not for profit members and we have a couple of invited experts and looking to have them transition to be membered and mainly Cisco being one of them. I think that is excellent. This also enables us to be more visible in the AI space. The AI Alliance has a number of individuals involved in the meeting. Hopefully this grows not as this particular TC but other TCs in the area of AI which of course as everyone knows is a dynamic and constantly changing topic. So thank you for the committee that worked on this also. It was approved in December. + +SHN: Also from the Ecma new management for 2025, we had the final votes in December and first I want to thank all of the people who did nominate and were nominated for this. We had seven members who are nominated for non-ordinary member positions. We had only four positions. It’s excellent to see the interest. I hope your interest still remains and that you are still active and want to consider to submit your name as we do these elections every year. This is the final slate that was selected. So DE who is sitting right here in the room is our new President of Ecma. And Jochen Friedrich was previous president and vice president and treasurer is Luoming Zhang and important to note that management roles are for ordinary members. We only have six ordinary members. We are looking forward to their support and driving Ecma. + +SHN: From the Executive Committee (ExeCom), this is really exciting and we have new members and many are sitting here. Jochen will chair and we have Theresa O’Connor from Apple and Chris Wilson from Google and Mikhail Barash (University of Bergen) and Patrick Dwyer (ServiceNow) and I don’t know if some are online. It’s great that we have new thoughts and new discussions. Peter Hoddie (Moddable) has been on it before and also important to remember all of the relevance and the past information and leverage on that. 
So it’s great that Peter is on. And Ross Kirsling (Sony) is also on it. Maybe Ross is also online. This is very new and dynamic executive committee. We will have our first meeting coming up in March. So again thank you everybody for your interest. And congratulations to everyone that was appointed. You may applaud since Daniel was here. + +SHN: Just a few general items and the important one, this year ExeCom is earlier usually it is middle or end of April. It is in March. If you are, as I assume, going to have addition 16 and 12, the different standards, I need to have an indication of that. We need to bring that to the attention of the ExeCom that is actually in two weeks. I can make that update immediately. But I just want to get information from the committee that both those two additions are your intentions for the GA in June. You have plenty of time for the opt-out, but I do need indication that you want it. And then maybe the editors can let me know that. The committee can let me know that before this meeting is over. Perhaps if you can by the end of the day, I would appreciate that. + +SHN: We have talked about liaisons before. Ecma has a number of liaisons that we keep active like the JTC 1 and W3C and we have had historical with IETF and in the past there were people of IETF coming to Ecma and TC39 meetings. Over time, there hasn’t been a lot of cross-contribution. I would not want to see the liaison disappear. To build a new liaison is always more complicated. I would like to keep it going. I’m asking if there’s somebody on the committee that could be the point of contact between TC39 and IETF, I would appreciate that. If there is no topic or nobody of interest, I also need to know. I need to figure out how to maintain that from the secretariat. I don’t want to drop this. I suspect as we move forward with new topic and areas of IETF is doing we should have visibility of what is going on to make sure it’s not impacted any of the work that the committee is doing. Please give it thought and approach me with who the individual can be. + +SHN: We have a strong invited experts list together with AKI and the chairs we’re always reviewing them. We had an extremely busy end of last year. I have had an extremely busy last few months of the end of the year were a bit of a blur if I look back with the activities. I have been very flexible. So the invited experts that typically would have ended their term at the end of 2024, I have not stopped and they will continue on and because we appreciate everybody’s contribution and would like to review them at the end of the year. Sometime in the next little while I will be approaching organizations and members that are related to the organization that are invited experts if they are considered to join. Please be aware to get an email from me. It is a touch base just to inquire and see how your organization is looking at Ecma and if they would like to join because ideally that is the way we want to move forward. So continue on with your work and together with AKI we will work with the chairs to make sure that the list remains accurate. + +SHN: With W3C we have done the transition of Winter CG to the generally used term WinterTC and new TC55 that is formed at Ecma. In doing so, I had a number of conversation with W3C leadership and also have met with their CEO Seth Dobbs and they’re keen to see how Ecma can be more engaged with W3C and I know there are already members representative here that are active in different spaces of W3C. 
Those are the four bullets I brought up in a previous meeting. I would like indication on the **horizontal review** is the one that is the lowest hanging fruit. Does it in any way impact us? I would like guidance. It is important. I want feedback on that. I’m looking at Shane right cross from me. If there are other topics that are important, reach out to me and AKI and the relationship with W3C after the moving of TC55 opens up more opportunities for more collaboration. There may be other projects going on in W3C to come to Ecma. There are common members and eventual invited experts. W3C has a broad scope and a certain focus. Ecma has a value it can bring. If there are topics, we have the opportunity to continue the strong relationship here. We have the liaison contact Michael Smith. I and AKI are active with W3C and have a strong relationship and make sure we are active with W3C and TC55. + +DE: Great presentation. A couple comments that I wanted to add. Just to emphasize Winter TC or TC55 the significance of this it’s about standardizing an initially a subset of the web platform that’s supported on web interoperable server environments such as hopefully Node.js and Deno and complementary of TC39 work and scope of things and certain things we thought about in committee that ended up getting web specs for. Hopefully this can help regularize in the ecosystem some of the core concepts. I hope more TC39 members will be interested in joining this group. Development has switched completely to Ecma unlike previous plans where the CG and the TC would live side by side. And thanks LCA and ABO and OMT for the leadership. For the IETF leadership, role, although the IETF have a broad scope and it can be intimidating to be liaison for the whole organization, there’s starter tasks that people could work on. For example, we have certain line types with Ecma that are through registered through IETF such as CycloneDX and small tasks and coordinate to register to the cycle NDX types to point to the new version of the standard. There’s been successful interaction of IETF and TC39 such as making the new version of the datetime format standardized and I hope people consider taking on this extra role. + +SHN: Thank you DE. And give applause for the TC55 team. Thank you. I’ll go quickly through the annex slides and then open for any other questions. The annex slides as you see are up loaded. Also reminding everybody of the invited expert and the role on the TC and reminding everybody on the code of conduct which is very important as already noted by RPR on how we work and how we exchange our interaction and work together and collaborate. As always the summary and conclusion are extremely important. It has been an improvement and I thank you very much for all of that and hope to continue to build on that. AKI does the minutes and then I finalize them. So it’s really helpful for us to have these and for everybody else. + +SHN: There are a number of documents. I listed them. There’s a lot of documents. If you’re interested and you see by the title of the document that you would like to know more about it, you may ask the TC39 chairs to access them for you or ask AKI. These are the things that taken place since the December meeting. There’s quite a number of things. I noted in the earlier slides and let you run through those. If there’s anything you want more information, I’m happy to share them. There’s a huge list. The meeting dates as reminder we have the next one coming up that will be virtual. 
I did statistics last year: you had an average of over 75 participants per meeting, whether hybrid or online, which is excellent. You’re the largest committee and you work actively. And of course every year you have the new editions. So it’s quite dynamic. + +SHN: On our dates, as just said: what we have coming up I have noted in red. The ExeCom dates are the 5th and 6th of March, with the 5th being the main date. The secretariat and your chairs will report at that meeting, so I need to know about your new editions before that time, and you have plenty of time for the opt-out. The GA is in the later days of June, and if you work backwards from it you have time for the opt-out. That’s the slides and information from the secretariat. I want to thank everybody for supporting AKI and what she’s doing for TC39 and the other technical committees that we have. So please keep supporting AKI in her questions and her support to you. + +RPR: Thank you for an excellent report. + +### Speaker's Summary of Key Points + +An update on recent Ecma activities was provided, including the approval of new Technical Committees (TCs), organizational changes, and upcoming deadlines. + +#### Key Points + +* The first edition of Source Maps was successfully published, having been approved at the December 2024 GA. +* Two new TCs were approved in 2024, marking a significant milestone for Ecma: + * TC55: focuses on standardizing a subset of web platform APIs for use in server environments. + * TC56: the first Ecma TC focused on AI. +* Ecma’s management and new Executive Committee for 2025 were announced. Elections were finalized in December 2024, and the first Executive Committee (ExeCom) meeting is scheduled for March 2025. +* Updates regarding edition 16 (ECMA-262) and edition 12 (ECMA-402) need to be submitted before the ExeCom meeting. +* On maintaining and expanding Ecma liaisons: the liaison role between TC39 and IETF needs to be maintained, and volunteers are needed to act as a contact person between the two organizations. +* On invited experts: the list of invited experts has been extended beyond December 2024. Organizations that have invited experts should consider transitioning them into formal Ecma membership. +* Meeting minutes and summaries have significantly improved, thanks to Aki, the chairs, and all the participants. + +## TC39 Chair Election + +RPR: So CDA may have an intro, and AKI can be helping in the room and conduct things when the rest of us step out. + +CDA: As many folks are aware, we really only do an election when we have a change in the roster, and that’s what we’re going to be talking about today. Next slide, please. So this is the full roster of folks: beyond all of our esteemed delegates and invited experts, you can see we have the chairs, facilitators, convenors of task groups, editors, administrator, and secretaries. The only thing that is changing is the chair group. The chairs themselves are unchanged, but we have a couple of facilitators formally stepping down: BT and YSV, whose help as facilitators, and previously as chairs, we very much appreciate. We will be looking to add to the facilitators; DLM and DRR have volunteered to help us out. The delta is just these individuals, and the same Ecma members are still represented, so it’s nice to have continuity there. At this point we are going to step out to let the committee do its thing: that’s myself, RPR, USA, DLM, and DRR. + +_Notes paused during discussion on new chair group_ + +AKI: We’re on the record as having consensus.
+ +SHN: You have consensus. I do, of course, have to ask: do you accept the role? Do you accept to continue? + +RPR: Yes, I accept gladly. + +SHN: It is relevant to ask the question: even though you are voted in, do you all accept your roles? Is that the same for you, Chris? + +CDA: Yes. + +SHN: Thank you. If anybody who has been appointed doesn’t wish to accept the role, they should speak out. Congratulations. + +RPR: Very thankful to BT and YSV for serving for many years as facilitators. + +SHN: The Ecma Secretariat will take the action to recognize the work of both BT and YSV. + +### Conclusion + +* The proposed chairs and facilitators group has been elected by acclamation: + * Chairs: Rob Palmer, Ujjwal Sharma, Chris de Almeida + * Facilitators: Daniel Minor, Daniel Rosenwasser, Justin Ridgewell + +## ECMA-262 Update + +Presenter: Kevin Gibbons (KG) + +* [spec](https://github.com/tc39/ecma262) +* [slides](https://docs.google.com/presentation/d/1jgEaNaq6W7hZSKQILZ1F2sC1jTjRKwwnuqCGgi6iyQc/) + +RPR: Now that we have done the election, we move on to KG with the 262 status update. + +KG: So, the update. There are a few normative changes; the last two haven’t landed yet. RegExp modifiers landed; import attributes and JSON modules did not land yet. Apologies for the delay; they should be landed today or tomorrow. No significant editorial changes, because we have not had as much time as we would like, but there have been a number of minor ones, none of which we list here. It’s approximately the same list of upcoming work; we have started to chip away at bits and pieces here. And then, of course, the most important thing: as mentioned, it’s a new year, and time to prep a new edition of the specification. We intend to freeze the specification, meaning no further normative changes except possibly bug fixes, after the end of the meeting, once we land the things that are going for Stage 4. I believe there is also a normative PR, although we won’t land that one because it requires implementations. Never mind. Any or all of the proposals which are attempting to achieve Stage 4 at this meeting will be landed before we freeze the specification. That will hopefully happen by the end of the meeting, and we will get everything in and all tied up. We will post a link to the candidate specification on the reflector; at that point the IPR opt-out period will begin. Watch for that link, and expect it approximately Thursday. That’s all I have. + +RPR: Just checking for anything on the queue. No questions on the queue. Give it a moment in case anyone wants to say anything. Thank you, KG. + +NRO: I have a question; I didn’t get on the queue in time. The Terms and Definitions section that you are going to remove from the spec: what is the reason? Because I was just going to add one of those to the source map spec. + +KG: So the problem is that it contains something like 3% of the terms defined in the specification. Most terms are defined closer to where they’re used, or in some relevant section, so it’s just a sort of mish-mash of random stuff. To the extent possible, we thought it would make more sense to consistently put terms closer to where they’re used. + +NRO: Okay. + +MF: I will note that Ecma specifications are expected to have a Terms and Definitions section, but that is one of the places where we have chosen to diverge. + +RPR: Thank you. And so, Kevin, you’ll write up a summary in the notes? + +KG: Yes.
+ +### Speaker's Summary of Key Points + +Normative changes since last time: RegExp modifiers, import attributes, JSON modules. The candidate ES2025 spec will be cut at the end of this meeting, to include any proposals which get stage 4 at this meeting. + +## ECMA-402 Update + +Presenter: Ujjwal Sharma (USA) + +* [spec](https://github.com/tc39/ecma402) +* [slides](https://notes.igalia.com/p/q98gbOaS6) + +USA: Good morning, everyone over there in Seattle. I hope you’re having fun. I’m going to similarly talk, briefly, about what’s happening in 402. We have a few normative changes in the works; this is basically everything that we have ongoing at the moment. The first one is a relatively old one: a normative note that was requested by TG1 previously, and we’re still soliciting feedback on it. Next we have new numbering systems by FYT; this has been approved by TG2 and should come to TG1 soon, as part of the upgrade for Unicode 16. Next we have another normative pull request by RGN, which is being discussed in TG1 at the moment, but there’s no agreement yet. And then we have three new ones, so expect to see them come to TG1 soon. That’s all the normative changes we have; the last one, notably, was uncovered by Test262, which is nice. + +USA: For the editorial changes: the first one, rearranging the spec to be more consistent, has already been merged, and we have two more editorial pull requests open at this moment. Apart from that, we also have to merge a meta change by AKI that helps us generate better PDFs, but it’s currently blocked by a change to ecmarkup. + +USA: Similarly to ECMA-262, we plan to freeze the spec soon, including the Stage 4 proposal DurationFormat. We plan to do it at the end of the week and start the IPR opt-out before the next meeting, same as 262. And that’s all. + +RPR: Thank you. Currently there is no one on the queue. Would anyone like to ask questions? All right then, thank you, Ujjwal. + +### Speaker's Summary of Key Points + +Ongoing changes to the ECMA-402 spec were discussed, and USA announced plans to freeze the spec at the end of the week. + +## ECMA-404 Update + +Presenter: Chip Morningstar (CM) + +* [spec](https://ecma-international.org/publications-and-standards/standards/ecma-404/) +* no slides presented + +CM: JSON is kind of like conditioner—it helps keep your data soft and manageable. And ECMA-404 is like the label on the package—the sort of classic, timeless, unchanging verbiage that everybody has come to expect and appreciate. + +RPR: Thank you for that product analogy. I think that’s short and sweet. It doesn’t even need a summary. + +## Test262 Update + +Presenter: Philip Chimento (PFC) + +* [repo](https://github.com/tc39/test262) +* no slides presented + +PFC: I just have a list of points to deliver verbally; I don’t have slides, if that’s all right. Since the last plenary meeting we have a few updates from Test262. We have merged tests for iterator helpers and for deferred imports. We have a number of maintenance updates based on feedback from implementations. We also merged a test suite into the staging folder from SpiderMonkey. This is the first time that we have done something like this. These are tests that previously lived only in the SpiderMonkey codebase, which they ran in their Firefox testing in addition to Test262, but they weren’t specific to SpiderMonkey and could be useful for other implementations. So we merged this whole batch of tests, which are now available for everybody to run.
This is kind of similar to what V8 is doing with their two-way sync, so look for more work of that kind in the future. Then I have some less good news: Igalia’s involvement is less than previously, because the grant that funded our work has finished. Any help from proposal champions in reviewing tests for proposals is very much appreciated, because as a whole the maintainers group has a bit less time for Test262 than we had before. Then I have some exciting news: SFC, who is in the room with us, is working with students at the University of Bergen, Norway, in the upcoming semester, and some will be working with Test262. If you have project ideas for Test262, get in touch with SFC or me or JHD; we would love to hear them. But that’s it for me. I will paste the summary into the notes. + +SFC: Also, MBH is the main contact with the University of Bergen. He’s a great person to speak to if you have ideas for those contributions. + +NRO: Not just champions: the problem I have seen is that the champions write the tests, but then they need someone to review them. If you’re familiar with a proposal for any reason, through test262 or through some browser, then even if you’re not the champion, having more people reviewing would be a great help. + +OMT: Just going to say that some of the SpiderMonkey tests are very heavy and can crash engines, so we disabled them on test262.fyi. + +PFC: I remember hearing that. That is something we should look into: whether it’s changing those tests so they’re not quite so resource-heavy, or having a slow flag that test runners can skip. + +### Summary + +* We have tests for Iterator helpers and/or iterator sequencing +* We have tests for deferred imports +* Various maintenance updates based on implementation feedback +* We have merged a test suite into staging from SpiderMonkey. These are tests that previously lived in the SpiderMonkey codebase, run in addition to test262, but were not SpiderMonkey-specific and so could be useful for other implementations. +* Igalia's involvement is less than previously, because our grant finished +* SFC is working with students from U of Bergen in the upcoming semester and some will be working with test262. If you have ideas for student projects, get in touch with us or SFC or MBH. + +## TG3 Report + +Presenter: Chris de Almeida (CDA) + +* [site](https://ecma-international.org/task-groups/tc39-tg3/) +* no slides presented + +CDA: TG3 continues to meet weekly; we’ve mostly been talking about the security impact of proposals at their various stages. So, yeah, that’s it. Please join us if you are interested in security. + +## TG4 Report + +Presenter: Nicolo Ribaudo (NRO) + +* [site](https://ecma-international.org/task-groups/tc39-tg4/) +* [slides](https://docs.google.com/presentation/d/1-suKLKywflKUDzTqVBxl-dEI2bJSfG5dl205BRtVCK4/) + +NRO: I have slides. On the TG4 side, as SHN said before, we have the first edition of the spec published; thanks to everybody who helped us get this done. We have some planned and submitted changes to the spec, the main one being the conversion from bikeshed to ecmarkup. One of the main proposals we are working on needs to define how to parse some strings within source maps, and that is just easier with ecmarkup’s grammar syntax than with bikeshed. The same goes for some of the existing parsing that we have; for example, the parsing of the mappings could be specified with an actual grammar. Ecmarkup also lets us link to ECMA-262 concepts, which makes things easier, but it makes it harder to link to web concepts.
Also, even though this is not the main motivation, it means we no longer have to figure out how to get bikeshed output to convert nicely to the Ecma PDF format. + +NRO: And the scopes proposal: it’s going well. We keep having monthly meetings about it, and the champions have started writing spec text. If you’re interested in it, Simon (SZD?) and JRL from Google did an analysis of the trade-offs of encoding scope information, in terms of size and accessibility, which you can go check in the repository. And that’s it. Everybody is always welcome to join our meetings; let me know if you need help getting involved. + +### Summary + +* ECMA-426, 1st edition approved by the Ecma GA +* The TG is in the process of converting the specification from bikeshed to ecmarkup +* Work on the scopes proposal is proceeding well + +## TG5 Report + +Presenter: Mikhail Barash (MBH) + +* [site](https://ecma-international.org/task-groups/tc39-tg5/) +* [slides](https://docs.google.com/presentation/d/1jLeg1TuaD1l535LF_gf4dJaF7sz-Z10Gm5cXbmonHnk/) + +MBH: TG5 was chartered about a year ago. Since then we have had nine meetings, almost monthly, with 10 to 15 attendees. We also have TG5 workshops, which are in-person or hybrid meetings; one of the workshops will be this Friday. Examples of topics that we have discussed are here on the slide. The Friday workshop will include a presentation by a research group at the University of California San Diego about their MessageFormat study, and we will try to identify more proposals that could benefit from this kind of study. We also try to look into other directions where academic results can be brought into the work of the committee. Our plans for 2025 are in particular about establishing new collaborations: at IETF there is the Research and Analysis of Standard-Setting Processes Research Group, and at W3C there is the Process Community Group. We want to establish some collaboration with them, and to arrange a break-out room at TPAC 2025 to try to engage more universities in web standards work. Related to this, there will be a workshop on programming language standardization at the European Conference on Object-Oriented Programming this July. That’s it. + +RPR: Excellent. Ashley. + +ACE: Can you please link the slides? + +RPR: Nothing more on the queue. Please summarize that for the notes. Next up, it’s back to Chris with updates from the Code of Conduct committee. + +### Summary + +TG5 has had regular monthly meetings since it was chartered one year ago. In addition, TG5 has arranged three workshops co-located with hybrid meetings, and currently plans another workshop in Spain this May. TG5 intends to establish contact with the IETF [Research and Analysis of Standard-Setting Processes Research Group](https://datatracker.ietf.org/rg/rasprg/about/) and the [W3C Process Community Group](https://www.w3.org/community/w3process/). + +## Updates from the CoC Committee + +Presenter: Chris de Almeida (CDA) + +* [site](https://tc39.es/code-of-conduct/#code-of-conduct-committee) +* no slides + +CDA: Pretty quiet on the code of conduct front; we don’t have any new reports or anything we’ve had to deal with. I think we got a weird, apparently AI-generated report that didn’t really make any sense, so we just ignored it. But other than that, that’s it. As always, anyone interested in joining the Code of Conduct committee can reach out to one of us. Thank you. + +RPR: Thank you for protecting us from the bots. Next up, we have GCL with "Don't call well-known Symbol methods for RegExp on primitive values".
+ +## Don't call well-known Symbol methods for RegExp on primitive values + +Presenter: Gus Caplan (GCL) + +* [spec pr](https://github.com/tc39/ecma262/pull/3009) +* no slides + +GCL: This is a pretty small change. For some background here: Node.js and Deno write a significant amount of their implementation in JavaScript, so one of the things they do is attempt to harden the JavaScript that they use, so that user code cannot break their implementation as it runs. This specific needs-consensus change has to do with—basically, there are six methods on `String.prototype` (match, matchAll, replace, replaceAll, search, and split) that accept a RegExp parameter. We can look at the spec text for this: each method accepts this RegExp parameter, and if it is not undefined or null, it will attempt to look up the corresponding symbol (for example `Symbol.match`) on it and call that. Otherwise, it will create a regular expression and invoke the normal matching function on that. All of these functions do this with their respective symbols. + +GCL: What this change proposes is that when you call these methods with a primitive argument (in practice, a string), we should not read the symbol off of it, because that can interfere with the internals. + +GCL: So that’s the background there. This is from a little while ago, but we did hear that core-js never implemented the current spec behavior; they implemented it the way it is in this pull request, and nobody ever complained. That seems positive. We can go to the queue. + +JHD: I think this is great. There’s no reason any of us could ever want a primitive to be regular-expression-ish, and the vast majority of current and past TC39 members seem to hate this entire protocol anyway, so less usage of it sounds good. If I have any polyfills that need to be updated for this, I’m enthusiastic to do so. + +GCL: All right, seems like nobody else has much to say. I guess I will ask—oh, a plus 1 with no comments from OMT, who says it makes the implementation easier. Did you want to say anything more? + +OMT: No. + +RPR: And Dan Minor, did you want to speak? + +DLM: Sure. We talked about this. It seems fine. I guess there’s a small, small chance of some compat problem, but it doesn’t seem likely. + +SYG: Also seems good. Any thoughts on what to do next, in the small but nonzero chance it is not compatible? + +GCL: If it’s not compatible, we would just not do it, I guess? + +JHD: Alternatively, if it's not compatible because some website is depending on one specific kind of primitive that it’s making regular-expression-like for some crazy reason: if that’s the case, we could also adapt this to allow that one kind of primitive to still be checked, but not the others. + +GCL: Maybe. I think that would sort of defeat the purpose of the change in the first place. + +JHD: Fair enough. + +GCL: But, yeah, I don’t expect this to be web-incompatible, just due to how niche it is. + +KG: I didn’t want to mess up the queue there. I’m in favor of this. I’m pretty sure when we did the disposable protocol we did the same thing: we said that the dispose symbol is not looked up on primitives, only on objects. And I just want to call out that on the rare occasions that we introduce new symbol-based protocols in the future, I think we should follow this precedent and always omit primitives from such protocols. + +RPR: Shall we call for consensus? + +GCL: Yes. Do we have consensus? + +RPR: There are no objections.
Then congratulations, you have consensus. + +GCL: Thank you, everybody. + +SYG: Sorry to interject; I didn’t have time to type this into the queue. Since Test262 tests sometimes fall through the cracks for normative PRs, I want to make double sure that GCL, or whoever else, is signed up to write these tests. + +GCL: Yeah, we will take care of that. + +SYG: Great, thanks. + +### Speaker's Summary of Key Points + +* Node.js/Deno write a large portion of their implementation in JavaScript, and so aim to ensure this implementation is hardened against user code. +* `String#{match,matchAll,replace,replaceAll,search,split}` will no longer look up the protocol symbol when called with primitives, rather than just undefined/null. +* Expected to be web compatible due to core-js never shipping the spec’d behavior + +### Conclusion + +* Consensus +* Deno will write tests + +## Float16Array for stage 4 + +Presenter: Kevin Gibbons (KG) + +* [proposal](https://github.com/tc39/proposal-float16array) +* [spec PR](https://github.com/tc39/ecma262/pull/3532) +* no slides + +KG: This proposal adds `Float16Array`, `Math.f16round`, and the DataView methods for reading and setting float16 values (a usage sketch follows at the end of this item). The proposal has been at Stage 3 for a while. Implementations were, of course, several orders of magnitude more difficult than the specification. The specification is very simple: it basically just copies the existing Float32Array spec and says binary16 instead of binary32 everywhere. The implementations had to do a lot of work for each platform, at least when trying to optimize this, but they have all done that to the extent that they are comfortable shipping at this point. JavaScriptCore (that is, Safari) and SpiderMonkey (that is to say, Firefox) are both shipping already. Chrome, I believe, made the decision to start shipping in March; that is when this will be on by default. There is an open pull request for the specification. There are, of course, tests, which were a prerequisite for Stage 3. + +KG: This is also starting to be adopted by other web specs, which was the intention. The canvas people are starting to work on higher colour-depth canvases that will be backed by, or at least make use of, Float16Arrays, and I know the WebNN spec is also interested in possibly making use of Float16Array; float16 often makes more sense than float32 for neural nets. I believe it meets all of the criteria for Stage 4. + +KG: Especially because this is a proposal that requires more from implementations than most proposals (this isn’t just syntax; you are getting in there and writing assembly), I want to make sure there is no concern from implementations before going forward. But I believe it’s ready for Stage 4. + +DLM: Thank you. The SpiderMonkey team supports this for Stage 4. + +SYG: Sounds good to me. I do confirm that the plan is to turn it on by default in Chrome 135, which (let me bring up the calendar here) should hit stable on the first of April. + +RPR: There’s a comment: LGH implemented this in one of the smaller engines, and is plus one for Stage 4. + +KG: I would like to formally call for consensus. We have had plus ones, and I'll give everyone an opportunity to object. + +RPR: Congratulations. You have Stage 4.
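+
+A minimal usage sketch of the new surface area (illustrative; the printed values show float16 rounding):
+
+```js
+// Float16Array behaves like the other typed arrays, but stores
+// IEEE 754 binary16 values, so stored numbers are rounded.
+const arr = new Float16Array([1.337]);
+console.log(arr[0]); // 1.3369140625 (nearest representable float16)
+
+// Math.f16round rounds a Number to the nearest float16 value,
+// analogous to Math.fround for float32.
+console.log(Math.f16round(1.337)); // 1.3369140625
+
+// DataView gains getFloat16/setFloat16 for raw buffer access.
+const view = new DataView(new ArrayBuffer(2));
+view.setFloat16(0, 1.337);
+console.log(view.getFloat16(0)); // 1.3369140625
+```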
+ +### Speaker's Summary of Key Points + +* Spec is simple, implementations hard +* Implemented and shipping or almost shipping in all three major browsers +* Ongoing web API usages in progress in Canvas and WebNN + +### Conclusion + +Stage 4 + +## Redeclarable global eval vars for stage 4 + +Presenter: Shu-yu Guo (SYG) + +* [proposal](https://github.com/tc39/proposal-redeclarable-global-eval-vars) +* [spec PR](https://github.com/tc39/ecma262/pull/3226) +* no slides + +SYG: Great, thanks. Before I go into it, do people care to hear a recap of what this is about? + +RPR: Won’t hurt. Just a quick, brief one. + +SYG: Very well. This was originally a needs-consensus PR to fix a corner case in dealing with vars at the global scope. The global scope is, to say the least, very strange, because among other things it is an open scope: if you have a script tag that introduces something, and then another script tag that mutates the global scope, the multiple script tags don’t get their own global scopes; they get the same global scope. It’s always open, never closed, unlike a function scope, where the scope doesn’t extend beyond the two braces. Without getting too much into the weeds here, the upshot is that in the current spec there is a special mechanism on the global scope, a slot called `[[VarNames]]`, which specifically tracks global bindings introduced via the `var` keyword. This is a slight pain in the ass for implementations; it basically boils down to an extra bit on the property descriptor for everything on the global object, only on the global object. + +SYG: I proposed we get rid of the special case and treat these `var`s as we treat other non-configurable global properties. If you don’t know this weird corner of JS: `var`s at the global level are not just bindings that you refer to with a bare identifier; they also show up on the global as properties. We have a special case for those properties on the global object that were introduced via `var` and tracked in `[[VarNames]]`. If we get rid of that special case, it eases the implementation burden and, in my opinion, gets rid of a weird corner, but it is normative in that it changes behavior. + +SYG: And I think the main consequence is basically this. The change allows you to write this snippet (sketched below): if you have a `var x` introduced via direct eval at the global scope, it becomes a global property. Currently in the spec, because var names are specially tracked on the global object, if you then try to have a same-named lexical binding at the global scope, it is an error; that is what the `[[VarNames]]` slot was for. I argued that erroring in that way is really not a use case anybody cares about, and that to gain some simplicity we should just allow the shadowing. So this is currently disallowed, but it will become allowed. Nevertheless, don’t do this. I don’t know why you would do this. So just don’t do it. + +SYG: So that is the actual change. And the status is that we have all shipped it, basically. This was the existing behavior in Chrome, and nobody really complained. Safari has implemented this; it was first brought to my attention from a Test262 test, I think by Safari engineers, thank you very much for that. Firefox has shipped it as well, or maybe not yet, but I guess it has shipped by this point, February 4th. And this last item is not checked off, but they do have editorial reviews for the actual PR. So with that, I’ll go to the queue before asking for Stage 4.
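+
+A minimal sketch of the snippet SYG described (illustrative; it assumes the two declarations run in separate scripts, since within a single script the direct eval would instead conflict with the already-instantiated lexical binding):
+
+```js
+// Script 1: a direct eval at global scope introduces `x` as a
+// global var. Because it comes from eval, the resulting property
+// on globalThis is configurable (deletable), but it is also
+// recorded in the global's [[VarNames]] slot.
+eval("var x = 1;");
+
+// Script 2 (a later script tag): previously a SyntaxError, purely
+// because of the [[VarNames]] tracking. With this change, the
+// eval-introduced var is treated as an ordinary global property,
+// so the lexical binding is allowed to shadow it.
+let x = 2;
+console.log(x);            // 2 (the lexical binding wins)
+console.log(globalThis.x); // 1 (the property created by eval remains)
+```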
+ +DLM: We support this as well. + +RPR: Anyone else on the queue? All messages of support? Or objections? I think that’s about it. SYG, you can ask for consensus. + +SYG: Yes. Could I please get Stage 4? + +RPR: KM is plus one. There are no objections, so congratulations, you have Stage 4. + +SYG: All right, thank you. + +### Speaker's Summary of Key Points + +* Recapped the existing spec behavior (global vars conflict with global lexical bindings) and the proposed change (global lexical bindings allowed to shadow) +* All 3 browser engines have shipped the proposed behavior + +### Conclusion + +* Stage 4 + +## RegExp Escaping for stage 4 + +Presenter: Jordan Harband (JHD) + +* [proposal](https://github.com/tc39/proposal-regex-escaping) +* [spec PR](https://github.com/tc39/ecma262/pull/3382) +* no slides + +JHD: `RegExp.escape`. Here is the spec; somewhere in here is the approved spec PR. And we have a bunch of implementations: Firefox has shipped it, Safari has shipped it, and there are two polyfills. I believe it is implemented in Chrome but not yet released; SYG can probably confirm that. + +SYG: Do you want me to confirm right now? + +JHD: Whenever. And then, yeah, it’s met all the various requirements for Stage 4. So I guess, SYG, you wanted to add your context? + +SYG: I think this one, my bad, kind of fell through the cracks. This is implemented and staged now, and should be ready to go in 135 or 136: either April 1st, or April 1st plus four weeks. + +JHD: So although I certainly would have preferred it to have landed in Chrome first, I’m not worried about web compatibility risks here. Given that two of the three browsers have shipped it, I would like to ask for Stage 4. + +RPR: You have support from DLM, a plus one from DE, and no objections. Congratulations, you have Stage 4. + +JHD: Thank you. + +### Speaker's Summary of Key Points + +2 browsers and 2 polyfills have shipped it; the 3rd browser has implemented it and will ship in April. All criteria met. + +### Conclusion + +* stage 4 + +## import defer for Stage 3 + +Presenter: Nicolo Ribaudo (NRO) + +* [proposal](https://github.com/tc39/proposal-defer-import-eval/) +* [slides](https://docs.google.com/presentation/d/1LjsJhdTIP3wgo1odtVa-qbfyGU5M1W9YMm0AtKnJJKk/) + +NRO: There have been no normative changes since last meeting, only a few tweaks following the editorial reviews. We have test coverage, and all Test262 tests have been merged. Thanks to a colleague of mine, we have a WebKit implementation passing the tests, so at least we know that the tests are not wrong. There are failures if you look at the results, but they’re due to known WebKit bugs, not to the tests, and they’re in the process of being fixed. + +NRO: We have implementations in tools: Babel and Prettier already support it, and there is a work-in-progress TypeScript implementation. If anyone wants to help, an Acorn plugin would be welcome; it unlocks syntax support for webpack, Rollup, and a bunch of others. That’s it. Just before consensus, I want to ask the editors if we have their blessing. We talked about how to proceed and got official approval on GitHub from part of the group, but not from all of it. + +KG: Yeah. + +NRO: Thank you. Then, do we have consensus for Stage 3? + +DLM: We support this. We’re quite interested in being able to use this in our internal code, so thank you. + +NRO: Any objections? + +DE: CDA is plus one on the queue. + +CDA: I don’t need to speak, but I support Stage 3. + +NRO: Thank you, Chris. I think we have consensus.
The plan now, the next step, is that I will open the pull request on the 262 repository; I'm just waiting for the import attributes pull request to land first. That’s it. Thank you, everybody. + +CDA: Just noting for the record that DE also supports stage 3. + +### Speaker's Summary of Key Points + +* No normative changes since last meeting, only some editorial tweaks +* All test262 tests have been merged, see https://github.com/tc39/test262/issues/4215 +* WIP WebKit implementation to validate the tests +* Tools implementations in progress; would appreciate help with an acorn plugin for the proposal + +### Conclusion + +* Consensus for Stage 3 + +## Explicit Resource Management Needs Consensus PR + +Presenter: Ron Buckton (RBN) + +* [PR](https://github.com/rbuckton/ecma262/pull/13) +* [proposal](https://github.com/tc39/proposal-explicit-resource-management) +* no slides + +RBN: The only thing that I wanted to discuss today is that there was an issue posted for explicit resource management pointing out that the spec text was missing the definitions for the `constructor` property on `DisposableStack.prototype` and `AsyncDisposableStack.prototype`, and there is a PR for the ECMAScript specification that defines those as they were intended to be defined. I expect this was pro forma, something not intentionally excluded. So I’d just like to ask for consensus on this change. + +RPR: Just pulling up the queue. So, SYG. + +SYG: I support this. I think it’s clearly a spec bug. + +RPR: Thank you. Kevin is also plus one, with end of message. Michael is also plus one. + +RBN: I’ll wait and see if there are any objections. + +RBN: Thank you very much. + +### Speaker's Summary of Key Points + +* PR addresses missing definitions for the `constructor` property on `DisposableStack.prototype` and `AsyncDisposableStack.prototype` + +### Conclusion + +* Consensus reached + +## Temporal normative PR and status update + +Presenter: Philip Chimento (PFC) + +* [proposal](https://github.com/tc39/proposal-temporal) +* [slides](http://ptomato.name/talks/tc39-2025-02/#8) +* [PR](https://github.com/tc39/proposal-temporal/pull/3054) + +PFC: My name is Philip, I work for Igalia, and I'm presenting this work in partnership with Bloomberg. I’m sure that those of you who are returning have seen many Temporal presentations; this one should be quick. The progress update is that we are continuing to get closer and closer to the required two full implementations, and we've been cleaning up the issue tracker. In the meantime, several requests for editorial changes have come in from implementations, which we have incorporated. We’ll continue to analyze code coverage metrics, to make sure that we have complete Test262 coverage of any gaps that we might have missed, and to answer any questions raised. There are a lot of questions coming in now, because the Mozilla Hacks blog published an article with Temporal being switched on by default in Firefox Nightly. There’s a surge in interest in Temporal, and we are getting a lot of questions from people who would like to use the proposal. This is good; it’s fun to see all the questions coming in. Please do go ahead with your implementations, and ship them unflagged when they’re ready. If something is preventing you from doing that, please let us know as soon as possible. + +PFC: The proposal champions meeting is biweekly on Thursdays at 8:00 Pacific time, and it is open; if you want to join, please join. If you want to talk to us but can’t make it at that time, we can find another time to meet. + +PFC: So, I mentioned it’s shipped in Firefox Nightly.
This is quite exciting for us. It means that people are using it in the wild. There is now full documentation for the proposal on MDN; this was a long time coming. I think we started it [the documentation] three or four years ago, but it is now there. There’s a compatibility table that I hope will get updated as implementations near completeness. + +PFC: I do one of these graphs every time; apparently people like them. I do want to be clear upfront that the percentage of test conformance does not mean percent done. But SpiderMonkey is close to 100% conformance; a handful of tests are not passing yet, but that's less than half a percent. Ladybird, previously known as the SerenityOS engine (LibJS), shows quite an improvement since last time and is at 97%, and GraalJS is up there as well. V8, Boa, and JavaScriptCore are lagging a bit. I got word from one of the maintainers of Boa that they have actually increased a couple of percentage points since I made this graph, but I didn’t have time to go through and retest everything, so this is as of a couple of weeks ago. As for JavaScriptCore, I’m happy to say that one of my coworkers from Igalia, Tim Chevalier, is looking to land additional patches for JavaScriptCore to get the percentage up. Keep an eye on this space, and hopefully next time, “number go up”. + +PFC: We have one bug fix to present this time that requires consensus. This change was requested by André (ABL), who is working on the Firefox implementation. The ISO 8601 calendar is a standardized machine calendar, and it remains unchanged arbitrarily far into the future. We don’t support dates that are outside the range of what JavaScript Date supports. However, you can create a `Temporal.PlainMonthDay` from a string that is outside of that range: the year can just be ignored. In the first line of this code example here, you can see you get a PlainMonthDay of January 1st even though the year is out of range. However, for human calendars that are not ISO 8601, this places an unreasonable burden on the implementation, because you have to be able to find out what the date in the human calendar is for the date in the ISO calendar. For example, for the Chinese calendar, which has lunar years, a function call like this would require the implementation to calculate a million lunar years into the future. That is well outside the date range, and the answer would be nonsensical anyway, because lunar calculations are not that exact that far into the future. + +PFC: We propose to continue allowing this for the machine-defined ISO 8601 calendar, but to throw a RangeError in the case of any other, human, calendar. So I would like to ask for consensus on that normative change to the proposal. And I’ll also handle any questions at this time. + +SYG: I wanted to confirm, on the percentage-of-tests-passing slide: is Boa the same as temporal-rs? + +PFC: I would say that temporal-rs is the library that Boa uses. + +SYG: What I mean is: if you added another Y axis for temporal-rs, would it show the same number as Boa? + +PFC: I would say that doesn’t apply; temporal-rs is not JavaScript, so it can’t run Test262 tests. + +SYG: I think you know what I’m getting at. It sounds like temporal-rs is the same as Boa. + +PFC: Yes. Although I don’t know enough about the connection between the two to say that, if V8 were to incorporate temporal-rs, the percentages would move in lockstep. I don’t know enough about the connection between the two.
+ +LCA: I think there’s a significant amount of code that sits in between temporal-rs and the engine, which converts JavaScript values into temporal-rs objects. I think all of the tests related to that would not be captured by this comparison. + +SYG: That makes sense. + +LCA: The underlying operations may be correct, but there’s still a lot of variance in those transforms. + +SYG: Got it, thanks. + +DLM: Sorry, just wanted to express support for the normative PR; not surprising, since we requested it. Thank you. + +MLS: So you'd like to throw a RangeError: what is the algorithm for computing that? Do you take the human-readable calendar and check that you have something in the data that it can resolve to, or how is it computed? + +PFC: I will just put it up on the screen. The change is that we treat the ISO 8601 calendar separately; if you get to this point, it’s a human calendar. Then we check the date that you gave in the string, which is in the ISO calendar, and if it’s within the limits that we accept for any other Temporal object, like PlainDate, it’s fine. If it’s outside of those limits, we throw a RangeError. The limit is 10^8 days before or after the 1970 epoch. + +SFC: I was just wondering if you could reiterate why the normative PR special-cases the ISO 8601 calendar instead of doing the new behavior across the board, including in line one. + +PFC: Because the ISO calendar is fully specified; it will not change. It may be the case that, for example, the Gregorian calendar adds an extra day to account for planetary rotation speed a thousand years in the future, I don’t know. + +SFC: I agree that the first line can be implemented. I'm wondering about the difference, because it seems inconsistent, although it’s not wrong. I think it is the right call to do it consistently for all the non-ISO 8601 calendars, but this is just a case where it’s not clear to me why there’s a difference in behavior. I mean, I agree there can be a difference in behavior, but I’m not sure why it was chosen. Was this to make the changes as minimal as possible? + +PFC: I don’t remember off the top of my head why we decided to make that exception. I assume it was to make the changes as minimal as possible. + +DE: I’m very happy to see multiple implementations, and this proposal being complete in its definition, modulo a bunch of very minor bugs that are being discovered. I’m wondering: is Firefox planning on shipping this beyond Nightly soon? This is a question for Daniel. + +DLM: Yeah, sure. Just to clarify: the previous state was that it was built in Nightly but disabled behind a pref. A couple of days ago I landed the change to flip that, and now it’s enabled in Nightly. If that goes well, we hope to ship it; it might be a few months, but hopefully sooner than that. + +DE: That’s great, thanks. + +CDA: That’s it for the queue. + +PFC: It sounds like there are no objections to consensus on the change, and there are no more questions. Thank you. + +RPR: Thank you, Philip. Do you want to do a summary of what was discussed? + +PFC: I have a proposed summary up here that I will paste into the notes, and I'll add any points that were discussed. + +RPR: Thank you. You’re very well prepared to make sure we have excellent notes. All right, let’s move to your next topic: a status update on ShadowRealm. + +### Speaker's summary of key points + +With Firefox Nightly shipping the proposal and MDN adding documentation for it, there is a surge of interest in Temporal.
+ +Implementations should complete work on the proposal and ship it, and let the champions know ASAP if anything is blocking or complicating that. You are welcome to join the champions meetings. + +A normative change was adopted, to avoid requiring questionable calculations when creating PlainMonthDays in non-ISO calendars outside the supported PlainDate range (PR [#3054](https://github.com/tc39/proposal-temporal/pull/3054)). + +## ShadowRealm Status Update + +Presenter: Philip Chimento (PFC) + +* [proposal](https://github.com/tc39/proposal-shadowrealm) +* [slides](http://ptomato.name/talks/tc39-2025-02/#1) + +PFC: This work was done in partnership with Salesforce. Expecting that the meeting agenda would be full, I kept the recap of what ShadowRealm is very short. So if you want to know more, or want to know more about the use cases, please come talk to me later; or maybe, if there’s time on Thursday and folks would like it, I could prepare a short presentation on that. If you’re interested, ask me. So, the short recap. + +PFC: ShadowRealm is a mechanism by which you can execute JavaScript code within the context of a new global object and a new set of built-ins (a sketch of the API follows below). The goal is integrity: complete control over the execution environment, making sure that nothing else can overwrite your global properties or define things that you don’t expect. There’s a whole taxonomy of security, and that’s why I don’t like to say the goal is security, because that can mean a number of different things. So the goal is integrity. This is what ChatGPT thinks the inside of the ShadowRealm looks like: mysterious and intimidating figures embodying the realm's eerie essence. + +PFC: I talked about ShadowRealm previously, in the December meeting. The big question at the time was: which web APIs are present inside ShadowRealm? I’m happy to say the W3C TAG adopted [a new design principle](https://www.w3.org/TR/design-principles/#expose-everywhere) that new APIs should be exposed everywhere. This includes other environments, not just ShadowRealm, but environments like AudioWorklets and ServiceWorkers and such. Those were previously all enumerated manually with an annotation in WebIDL, and now you can say that an API is so fundamental that it should be exposed everywhere there’s JavaScript. These are APIs like TextEncoder that maybe could have been part of the JavaScript standard library but aren’t. If you are very curious, I have a spreadsheet with the full list of over 1300 global properties on various global scopes: which ones are exposed everywhere, which ones are not, and why. You can follow the link in the slides. + +PFC: Some things I said I would follow up on from the last meeting: KG asked about `crypto.subtle`. Initially I had a pull request to have `crypto.subtle` exposed everywhere, and it looked like it would succeed, but I found out from the Web Crypto maintainers that the way it is specified depends on an event loop. And one of the design principles is that things that depend on the event loop can’t be exposed everywhere, because not all environments have an event loop. So I think we’ll leave it out for now. Hopefully they will be able to redefine it, in case you’re hoping for it, to not depend on the event loop, and at that point it could be exposed everywhere. I had a whole list of web platform tests for the web APIs present inside ShadowRealm; some of those still need reviews. If you are an implementation and interested in looking at those, I would much appreciate that.
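+
+For reference, a minimal sketch of the proposed API shape (illustrative; the module specifier and export name are hypothetical). Only primitives and callables cross the boundary, which is what enforces the integrity described above:
+
+```js
+const realm = new ShadowRealm();
+
+// evaluate() runs source text inside the realm's own global
+// environment; the callable that comes back is wrapped, not shared.
+const add = realm.evaluate(`(a, b) => a + b`);
+console.log(add(2, 3)); // 5
+
+// importValue() loads a module inside the realm and returns one
+// exported binding ("./adder.js" and "add" are hypothetical names).
+realm.importValue("./adder.js", "add").then((wrappedAdd) => {
+  console.log(wrappedAdd(4, 5)); // 9
+});
+
+// The realm's globals are fully isolated from the caller's:
+realm.evaluate(`globalThis.leaked = true`);
+console.log(globalThis.leaked); // undefined
+```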
+ +PFC: What we’re working on now: last time, in December, we had a discussion about getting buy-in from browsers' DOM teams, and how we might be ready to go to Stage 3 with the proposal in TC39, but that it shouldn’t happen without that buy-in. We got some questions about use cases from that quarter, so we would like to shore up how convincing the use cases are. We want to show that, as TC39, we are excited about this, that we are glad it has HTML integration, and that it would be useful for end users of the web. So if you have a use case for ShadowRealm that you don’t mind sharing, please come talk to me in the hallway sometime during this meeting; I would be really interested to hear it. And if you’re okay with it, I will try to write something up that expresses how this benefits your end users. So please come talk to me. That’s it for now. Any questions? + +RPR: There’s nothing on the queue. + +KG: Sorry. For `crypto.subtle`: does the fact that it is not included, on the basis of using the event loop, mean no async APIs inside ShadowRealms? + +PFC: It doesn’t mean that. Most async APIs are defined in a way that they don’t require the event loop. We don’t have the event loop in ECMA-262, but `Promise.resolve` still works. + +KG: `Promise.resolve` is only sort of an async API. Most of this stuff is punted to the host: the spec enqueues a task or whatever and tells the host to get back to it. + +PFC: I would say this is a problem that most async APIs don’t have; they defined it in this way in the Web Crypto spec, and apparently it is observable. They could change that, and then say it doesn’t depend on the event loop. + +KG: I would be surprised to learn it is observable. Async tasks just get completed at various points in the future, and theoretically they can take any length of time. It would be nice to see how it observably depends on the event loop in a way that is distinct from any other API; I would be concerned if it meant we could never have any async APIs. Maybe that would be okay, but it’s a little worrying. If it is just a detail of how the Web Crypto spec is written, then okay, we can try to fix up the Web Crypto spec, although it is largely unmaintained; I don’t know if that is going to happen. It has been getting more attention lately. Maybe we’ll get there. + +PFC: I don’t think other async APIs have the problem. I think it’s a detail of the way the Web Crypto spec is written. + +KG: Okay. + +DE: I’m trying to understand: is there any particular choice of included or excluded APIs that you disagree with? Is it just about crypto, or – + +KG: I think crypto is the main one that I would like to see included, in the sense that it is generically useful, though I appreciate the reasoning for not doing so. We talked about a couple of others at the last meeting. For most of the things, like Web Codecs, for example, which I can imagine wanting to use and which in some cases are purely computational, it makes sense that you might want to say no: this will probably involve hardware that we don’t necessarily want to invoke in a ShadowRealm. I would personally be very permissive about what "purely computational" means: anything that could in principle be implemented in JavaScript or WebAssembly, for example, I would put in there, and that would include all of the crypto APIs and all of the media codecs and everything. But I understand why we’re not doing those, and I don’t want to continue pushing on this.
My hope is that we can get crypto specifically included in the future, because it is extremely generally useful. + +KM: I have some feedback from talking to people about this; I didn't have time to write anything up formally, so I'll give the feedback here. I think the use cases were the thing we got pushback on. I think I mostly talked with the Bun folks, and it didn't seem like they were super big on it; they weren't in great need of it or going to use it. And the question I always got was: why can’t you do this with an iframe, plus some tool that automatically collects all the IDLs and scrubs out all the names that you don’t want from the iframe? The other feedback I got from people is that this is a lot of ongoing work throughout the web platform, in that everybody who is writing any web spec needs to consider it. So it seemed like pretty cross-cutting and ongoing spec/maintenance work, and people really want to see the use cases before they commit to that, basically. + +PFC: Okay. On the use cases, I hope to have a larger presentation soon, like I said. On the first question, about why you can’t use an iframe: if you use a sandboxed iframe, only asynchronous communication is possible, so you cannot emulate the convenient synchronous communication between the main realm and a ShadowRealm that way. If you use a non-sandboxed iframe, you can’t go in and delete every property you don’t want, because `window.top` is unforgeable and you will always have free communication with the main realm. + +?: Thanks for the feedback on the last one there. + +JSL: Just pointing out that on some of the async operations in Web Crypto right now, there’s streaming support being added, with async iterators and operations like `digest.node`, that might make it difficult to eliminate the event loop dependency. We’ll see how that evolves. Something to be aware of. + +PFC: Thanks. + +LCA: I have a response to that. I don’t see at all how this is different from `ReadableStream`, for example: the fact that `ReadableStream` is exposed but `crypto.subtle` is not. + +JSL: It should be fine. But something to be aware of. + +RPR: We’re at the end of the queue. + +PFC: That’s it. + +DE: I just want to underscore what PFC and KM already said: use cases are very important. Implementations have already been made; the only reason they’re not shipping is the lack of use cases. For a long time, the lack of the web integration was the blocker, and now it’s purely the lack of use cases. So anyone in committee who wants to use ShadowRealms: please, please communicate the use cases. + +SYG: I wouldn’t say implementations are already done for the HTML integration part. It is true that implementations are already done for the pure JS part. But for Chrome, suppose the list that PFC has is the final list of APIs that we want in this: I don’t think it’s true that that work is done. + +DE: Apologies, thanks for the correction. The thing that’s blocking the implementation work there is the use cases, right? Previously there was – + +SYG: Exactly right. + +DE: Previously it was the HTML design work, which Philip has done a good job completing. + +SYG: I think as part of asking for that work, the feedback has been—I want to echo what KM was saying: use cases are important.
It has shown to be much more cross-cutting than I thought in terms of the maintenance cost. So the use cases, weighed against the maintenance cost, are the deciding factor here. + +DE: In particular, it’s the maintenance cost of supporting the web APIs on the ShadowRealm global. + +SYG: It’s the cognitive burden that every API, current and future, has to consider ShadowRealm as a new kind of global. Is that what you meant? + +DE: Sure, maybe. + +MAH: I’m a little confused by the request for use cases, because my understanding is that the champions and others have expressed use cases: building libraries for executing code in a virtual environment, among others. Those use cases have been expressed. How is that not sufficient? + +PFC: As I mentioned before, the WHATWG environment particularly likes to see use cases expressed in terms of how they benefit the end user of the web. I think you’re absolutely right that use cases such as running code in a virtualized environment have been expressed. I think we need to step up how we communicate that to these other groups, and express it more in terms of what the benefit is for the end user. + +MAH: The benefit for the end user: which end user? The users of the applications built on those libraries, or the developers using the APIs directly? Are we now asking that an API being added needs to be targeted at the mass audience of developers, or is it okay to have some APIs that are only useful for the few developers who will build the libraries that are ultimately used by other developers? + +PFC: I mean, I’m interpreting it to mean: what can you build for end users of apps that would use ShadowRealm internally, that you couldn’t build without ShadowRealm? I think that’s a reasonable question to ask, and I'll try to answer it as well as possible. + +RPR: I want to point out we’re one to two minutes from lunch, and we should have a hard stop. Are there other people in the queue to get to? + +KM: I think the key thing here, in some ways, is that if it were a one-off thing that we did once, it would probably be an easier pill to swallow. The concern I heard is that everyone designing any spec going forward needs to consider this. So it’s an extra little bit of work for everything going forward, for spec authors who may be somewhat new to the web platform: they’re experts in some other area, and they would need to understand yet another intricacy of the web platform when exposing their APIs. That was the feedback I got; it was not so much about the current APIs as about future work, ongoing forever. + +RPR: Thanks. I think we should—there is spare capacity for a follow-on item if you wish to continue this on Thursday; have a think about that. For the note takers, can we capture the queue? Philip, do you want to give a summary of where we got to? + +| Speaker Queue | +|:--------------------------------------------------------------------------------------------------------------------------------------:| +| Users = web browser users; why ShadowRealms is a bit special (Shu-yu Guo, @google) | +| Consider a topic (on Thursday?) going into details on why the Salesforce/Agoric use cases aren't persuasive (Daniel Ehrenberg, Bloomberg) | + +PFC: Sure.
I guess I presented this status update on the ShadowRealm proposal. We are primarily focused on describing use cases in terms of end users of the web; we would be happy to hear your use cases if you have them, and we’ll come back in a future meeting with another update. + +RPR: Thank you, Philip. All right, that brings us to the break, to lunch. I will note that because we pulled certain things forward, the afternoon schedule has been rearranged, so please do check that out; there are items there. We will resume at 1:00 p.m., and lunch is happening; we have sandwiches over there. Is there anything more? I think we’re good. Likewise, if anyone has any feedback, including physical temperature feedback, please let me or Michael know. Please enjoy your lunch. + +### Speaker's summary of key points + +We presented a status update on the ShadowRealm proposal. We are primarily focused on describing use cases in terms of end users of the web. We'd be happy to hear your use cases if you have them, and we’ll come back in a future meeting with another update. + +We discussed the designation of `crypto.subtle` as not exposed everywhere, whether it could be exposed everywhere in the future, and what it means for use cases to be described in terms of end users. + +## Decorators implementation updates + +Presenter: Kristen Maevyn Hewell Garrett (KHG) + +* [proposal](https://github.com/tc39/proposal-decorators) +* [slides](https://slides.com/pzuraq/decorators-for-stage-3-2022-03-977778) + +KHG: So, yeah, a quick update on the decorators implementations. Everybody’s favorite proposal, back again. Okay, before we get started: basically, I just wanted to give a quick refresher of what decorators are about, and then talk about the status of the implementations and some of the things that have come up. + +KHG: So, a refresher: decorators are functions that have four main capabilities when applied to classes or class elements. The first is replacement: being able to replace the value that is being decorated with one that is similar, of the same general shape; replace a method with a method, a class with a class, an accessor with an accessor. + +KHG: The second capability is initialization: being able to initialize the value, per instance, with a potentially different value. With methods, you can do things like bind methods; with accessors, class fields, or auto-accessors, you can assign the default value, or intercept the default value, and so on. Next is metadata: being able to associate some extra information, for instance type information or serialization information, with the value. And lastly, access: being able to get and set the value out of band. You can do that with private values and with public values, and it can be a way to, for instance, add a serialization layer that can access private values, or test helper methods, or friend methods that can do that in some way. + +KHG: Some common use cases for these are things like validation libraries and dynamic type systems: being able to annotate things and say "this is a string" or "this is a number", and having that actually work at run time, not just at compile time. Also ORMs, declarative data structures like serializers and models and whatnot, reactivity libraries like MobX, and, like I mentioned before, method binding; that’s a very common one.
+
+KHG: Some common use cases for these are things like validation libraries and dynamic type systems: being able to annotate things and say this is a string or this is a number, and having that actually work at run time, not just at compile time. ORMs; declarative data structures like serializers, models and whatnot; reactivity libraries like MobX; method binding, like I mentioned before, which is a very common one; debugging tools, like being able to add a deprecated decorator that will log when a value that is meant to be deprecated is used, or being able to log whenever a function is called, or send an event or whatnot; and dependency injection, if you need to annotate a class to say here are the things I need.
+
+KHG: And then, real quick, because this comes up a lot: why are we starting with classes? Function decorators are also a thing. They’re not part of this proposal, but they’re something people have wanted a lot, and arguably they would be simpler to implement: a smaller spec and all that. And why do we need these at all?
+
+KHG: So first off, when it comes to function decorators, today it is possible to use the decorator pattern without syntax. You can create a function that receives a function and returns a decorated function, and it’s very declarative, it’s easy to understand, it’s performant overall. There’s really not much downside, with the exception that the name here, memoizedFunc, would not get applied to the inner function; if you’re trying to debug it, that gets a little annoying. But that’s the only real issue with function decorators at the moment.
+
+KHG: When it comes to classes, we don’t really have that same capability. For instance, if you wanted to create a memoized method this way, it would create a new method and a closure per instance of the class, and that might not be what you want; you might want to decorate the prototype. To decorate the prototype, you would have to do that either in a static block or imperatively after the class definition, and this is where it can get really complicated. I think one of the main benefits of classes over prototypes was that they’re a lot more predictable. I used to see code, before class syntax, that would do things like conditionally add a method to a prototype. Sometimes maybe that makes some sense, like if you want a debug-only method or a debug version of a method, but in general it was very confusing and hard to read. So decorators really simplify this whole thing and make it a lot easier and more idiomatic overall.
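+
+A sketch of the contrast (`expensive` and `memoize` here are illustrative, not part of any proposal; under the Stage 3 API a decorator receives the original method and a context object, and a returned function replaces the method on the prototype):
+
+```js
+const expensive = (x) => x * x; // stand-in for costly work
+
+// A memoizing wrapper. Note: in this sketch the cache is shared
+// across all instances of a decorated method.
+function memoize(method, context) {
+  const cache = new Map();
+  return function (arg) {
+    if (!cache.has(arg)) cache.set(arg, method.call(this, arg));
+    return cache.get(arg);
+  };
+}
+
+// Functions today: decoration is just a higher-order function call
+// (the only downside is the lost function name).
+const memoizedFunc = memoize(expensive);
+
+// Classes: the same wrapper, applied declaratively to the prototype method.
+class C {
+  @memoize
+  compute(x) { return expensive(x); }
+}
+```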
+
+KHG: So, yeah, community interest also remains really high. It is the second most anticipated feature in the 2024 State of JS survey. Anecdotally, we’ve received tons of feedback that it’s looking really good and people are really enjoying using it. It’s one of the most widely used syntax additions overall, and, yeah, I think it’s very much anticipated.
+
+KHG: And then, implementation status. We have shipped transforms in TypeScript and Babel, and those have been widely adopted by the community, with some exceptions for people who are waiting on metadata or on parameter decorators, because that was something the older legacy TypeScript decorators had as well. Tests have been written for test262. I have not been able to get them merged, because I have been very, very busy with job things, but the tests themselves are comprehensive: they cover every edge case and corner case that we have found so far, at least, and I think all they really need is a rebase and they’re good to go. And then Edge is currently nearing completion with their implementation in V8, SpiderMonkey is around 75% complete, and we have a number of proposals that are awaiting completion of this one to move forward; they’re kind of in a holding pattern, parameter decorators, function decorators, and grouped accessors being some of them.
+
+KHG: And, yeah, what we have heard so far, as we’re approaching completion, is that several implementers have been expressing some hesitation about being the first to ship decorators, so it’s at a little bit of a standstill at the moment, and we wanted to take some time to discuss those concerns at plenary and dig in a little bit. So that’s pretty much where things are at. Yeah.
+
+NRO: So there are multiple implementations of decorators: as KHG mentioned, there’s one in Babel and one from the Edge team. The problem is we don’t really have tests, or at least we’re not running the tests, because they’re not merged yet. So I’m going to see whether we can try to do it for Babel, and please, native implementations, do it too: run the tests from the pull request. I know it’s a huge effort for the test262 maintainers to review large PRs, so in the meantime we can catch potential problems by running the tests and seeing what’s failing in our implementations.
+
+USA: Next on the queue we have DE. Oh, sorry, there’s a reply by PFC.
+
+PFC: As far as the test262 PR goes, I think the only thing blocking it from being reviewed right now is that some of the generated files are missing their corresponding source files. If you have time to add those, then, like what we’ve been doing with other large PRs, we can try splitting it up, and hopefully merge it into the main tree a bit faster.
+
+DE: It was mentioned in the presentation that there’s a complete implementation in V8 out for review, and a partial implementation in SpiderMonkey behind a flag. Can we discuss those more? Could we hear from the Edge team what your implementation status is, where that is?
+
+LFP: There is an implementation that we submitted to Chromium, and it’s currently waiting for review.
+
+SKI: Yes. We have been implementing it, as Luis said, and while we are generally in sync with the upstream V8 team about features we’re implementing, we are currently waiting for review of this work. We want to resolve the issues that Kristen raised in this plenary in an open discussion in TC39, to understand the concerns of all the other engines and other stakeholders for the decorators proposal.
+
+DE: Okay, great. Do we have anybody here from those engines who could speak to those concerns? Shu, are you on the call? DLM?
+
+SYG: I’m here. Sorry, what was the question?
+
+DE: Are you considering reviewing the patches that the Edge folks made for decorators? If not, is there a reason why not?
+
+SYG: It’s currently not prioritized. We also have reluctance to be the first movers to ship decorators here.
+
+DE: Why is it not prioritized?
+
+SYG: Because we would like to not be the first to ship it.
+
+DE: Okay. DLM, do you have any thoughts on this?
+
+DLM: Sure, I can provide a bit of an update. I was working on decorators up until about a year and a half ago. At that time I stopped my work because I had higher priority things to work on, and it just hasn’t become a priority for us again since then. So our implementation is paused for now.
+
+DE: Is there anything that either of you could say about how you determine the priority of these things?
+
+USA: There is a reply by MLS on the queue.
+
+MLS: Yeah, I think we’re in a similar boat. A, we sort of don’t want to be the first to ship this, and B, we don’t view it as a high priority, given the other priorities we have dealing with performance, security, and other features we’re implementing. It’s a large feature to implement, and I would think it will take a good amount of time to do it.
+
+DE: So maybe we can discuss how browsers prioritize features, so we can understand why other things were prioritized and this one wasn’t. Overall, it would be really useful to get input from browsers on how we in TC39 should prioritize our work, so that we’re aligned with what makes sense for browsers’ priorities.
+
+SKI: So, yeah, as KHG shared, decorators is a popular feature among developers. The bug for the implementation of decorators has about 78 votes, and we were wondering if any data on the ground, like surveys or implementation and usage experience, would help. Is there any data that could be collected that would help align decisions, like increasing the priority for implementation? I mean, how do we get out of this deadlock?
+
+DE: Can I request that even if the three browsers don’t have anything to say now, maybe you could come back at a future meeting and give us more clarity on how you determined the prioritization, what data you might find interesting, whether you’d like the proposal to be withdrawn. It’s just very hard to interpret the signals. It would be really helpful and productive for this committee if we could get more clarity from the three browsers.
+
+KHG: Yeah, just to chime in, I haven’t had a lot of time to dedicate to this since I left LinkedIn several years ago, and I’ve been putting in spare hours where I can find them to keep everything updated as much as I can. But that lack of clarity has been really hard to deal with, because it feels kind of arbitrary, and it also feels like a really high bar to say that you must not be the first one to ship a feature; that can just turn into a never-ending stalemate. And it’s not like we’re saying you have to also implement the feature, because the implementation is already there; it’s just shipping it. I’ve put five years of my life into this now, on and off, obviously, and I’d really like to see it get over the line. Yeah.
+
+MLS: In response to DE, I’m not at liberty to talk about how we set our priorities. There are all kinds of things that figure into that: certainly what’s being standardized, performance, security mitigations, things that are coming down our hardware pipeline that we need to do development for. So I can’t tell you what our priority is for certain things; you have n things and you have to draw a cut line some place based on the priorities of the current development cycle.
+
+SFC: Yeah, when this body advances proposals, the ones we advance are largely the ones my team determined are important to our users, our clients at Google, and we also put in the work. My team has been putting a lot of time into the Temporal proposal because that’s important to our users, which are users of internationalization libraries and developers trying to build internationalized apps. And that’s how that happens, at least for those proposals. I can’t speak for other proposals where I’m not familiar with the users and the clients. But I just want to draw attention to that pattern: Intl proposals tend to get implemented pretty quickly, and the reason, at least on my side, is that my team is implementing them. And I’m not the V8 team.
+
+KHG: So, yeah, if it really was just, oh, we haven’t had a chance to review it, or it just hasn’t been prioritized, or we don’t have bandwidth to implement it, that’s totally understandable. We all have our priorities and we’re all trying to get things done. I think it’s more that we have an implementation ready to go, and it’s just not moving forward, because it feels like it’s being gatekept a bit, I guess.
+
+DE: Will we hear further feedback from SpiderMonkey or V8 about your prioritization? Because it would be really great and useful to understand, as the Edge team was saying, whether there is any data we could collect that would be relevant for you, or whether the browsers don’t want this proposal to proceed, or anything more.
+
+[a long period of silence]
+
+DE: Well, I hope that in the future, we can be in touch about this. Historically, when we bring something to Stage 3, the assumption has been that’s because, as a group, we are prioritizing it to some extent. I hope that in the future, people can block Stage 3 if they really see proposals as very low priority to implement. I was expecting that Stage 3 would be a sufficiently positive signal. Increasing clarity here in the future would be really good, with respect to this proposal and with respect to future proposals as they’re proposed for Stage 3.
+
+USA: Kristen, would you like to make any concluding remarks?
+
+KHG: No, I think that’s it.
+
+### Speaker's Summary of Key Points
+
+* Decorators is a well-received and highly anticipated JavaScript feature.
+* Lots of use cases, lots of good feedback overall.
+* Implementations (V8 and SpiderMonkey) are nearing completion.
+* No web engine wants to ship first.
+
+### Conclusion
+
+* The status quo remains: no one currently plans to ship.
+* No browser was willing to explain the reason for their deprioritization.
+
+## Curtailing the power of "Thenables" for Stage 1
+
+Presenter: Matthew Gaudet (MG)
+
+* [proposal](https://github.com/mgaudet/proposal-thennable-curtailment)
+* [slides](https://docs.google.com/presentation/d/1Sny2xC5ZvZPuaDw3TwqOM4mj7W6NZmR-6AMdpskBE-M/edit#slide=id.p)
+
+MG: I want to talk about thenables, and I want to make thenables less powerful. So, what are thenables? A thenable is an object that has a `then` property; objects that have `then` properties are treated specially in promise code. The why of this comes from before my time on TC39, but basically my understanding is that pre-standard promise libraries supported this sort of behavior, and there was a desire to make these things compatible and harmonious, a very noble goal.
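+
+A minimal illustration of the mechanism (names are just for the example):
+
+```js
+const thenable = {
+  then(onFulfilled) {
+    onFulfilled("adopted");
+  },
+};
+
+// Promise resolution sees the callable `then` and calls it,
+// treating the plain object like a promise.
+Promise.resolve(thenable).then((v) => console.log(v)); // logs "adopted"
+```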
+
+MG: Okay, so what’s the problem? This is something that I have seen multiple times now, on multiple teams, and so I want to talk about it. The problem is that it’s very easy for implementers, particularly in web engines (though I suspect this sort of thing can pop up elsewhere), to totally forget that this exists. It’s the kind of behavior that is subtle; if you don’t run into it very often, and it’s not being rubbed in your face, you can forget about it pretty easily. So you can accidentally create cases where user code gets executed where you never expected that to be possible.
+
+MG: The example that I write up here is: we have WebIDL, which is an interface description language for the web, and you can define a dictionary, which is just a bag of data. These things get code-generated into nice C++ structures so we can work with them on the C++ side, and there is a nice, beautiful translation system that translates them into JavaScript objects and back. Cool, everything’s nice and lovely. So you have one of these C++ structures, and the spec says to resolve a promise with it, so you call your C++ version of `Promise.resolve` on this object, and you never think about whether code will actually get executed in script, because why would you? You’re just resolving this C++ thing. The problem is that dictionaries convert to objects with `Object.prototype` as their prototype, so when you do that translation from C++ object to JavaScript object, the lookup goes to the prototype.
+
+MG: Oh look, somebody put a `then` property on `Object.prototype`: accidental user code execution. Something happened that you didn’t expect. And this has actually happened again and again. I didn’t even look that hard to generate this list, and to be honest, I didn’t even bother to look at WebKit; there could be similar WebKit bugs that I didn’t even try looking for. This even includes the spec: the spec CVE from last year was basically this kind of problem.
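+
+Sketched in JavaScript, the hazard looks roughly like this (the plain object stands in for a converted WebIDL dictionary):
+
+```js
+// Attacker-controlled page code:
+Object.prototype.then = function (resolve) {
+  // User code now runs inside engine-internal promise resolution.
+  console.log("unexpected user code");
+  resolve("hijacked");
+};
+
+// Engine/spec code thinks it is resolving an inert bag of data:
+const dict = { width: 100, height: 50 };
+Promise.resolve(dict).then((v) => console.log(v));
+// logs "unexpected user code", then "hijacked": the value was replaced too
+```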
+
+MG: So my Stage 1 ask here is, ultimately: can we do something about this? And I come hoping that the answer is yes. I want to present a little bit of the design space I see for options here, but the actual ask is the Stage 1 ask, which is: do we agree that there’s a problem here, and do we think that there exists a potential solution?
+
+MG: When we were dealing with the spec CVE, one of the proposed plans was that we would fix the problem directly, but also pursue a couple of mitigations. One of the mitigations that came up was: what if we made `Object.prototype` an exotic object that exotically rejects a `then` property, so that defineOwnProperty for `then` on `Object.prototype` silently no-ops. Another option was to make some promise resolution functions not respect thenables; it was not super clear which ones we could do, and I think that would be a little bit challenging to audit. But it does suggest that there is at least some ability for us to address this, and that there might be some appetite in committee to do it.
+
+MG: I did want to come with a third proposal, because I’ve been thinking about this for a while, trying to figure out what a nice answer looks like. The third proposal I would suggest looks something like this: specification-defined prototypes, so things like `Math`, `Error`, `Array`, and `Object.prototype`, get a new internal slot; call it [[InternalProto]]. Objects that have [[InternalProto]] are exactly the same as any other object, but we add a new abstract operation that pays attention to this internal slot. Call it GetNonInternal: it does the prototype chain walk that you would expect for Get, but as soon as it sees that the object it is about to look at has the internal proto flag, it stops and just returns undefined at that point. We then replace the promise resolution machinery that looks for `then` on the prototype and say: use this new abstract operation, GetNonInternal.
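+
+A rough JavaScript sketch of that lookup; `isInternalProto` is a placeholder for the proposed slot check, not spec text:
+
+```js
+// Hypothetical predicate for the proposed [[InternalProto]] slot,
+// shown here as a placeholder over a few spec-defined prototypes.
+const specProtos = new Set([Object.prototype, Array.prototype, Error.prototype]);
+const isInternalProto = (obj) => specProtos.has(obj);
+
+function getNonInternal(obj, key) {
+  while (obj !== null) {
+    if (isInternalProto(obj)) return undefined; // walk stops at spec-defined protos
+    if (Object.hasOwn(obj, key)) return obj[key];
+    obj = Object.getPrototypeOf(obj);
+  }
+  return undefined;
+}
+
+// Promise resolution would then use getNonInternal(resolution, "then"),
+// so a `then` planted on Object.prototype is never found.
+```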
+
+MG: This is nice. It addresses some of these bugs; it fixes some of them and mitigates the challenges. There are some advantages: I think it’s a little more harmonious a design than turning `Object.prototype` into an exotic object. As an engine implementer I like this, because I don’t want `Object.prototype` to be an exotic object. It can also be integrated into WebIDL: we could change the WebIDL spec to say that IDL-defined prototypes and classes get this internal proto flag. And, yeah, it avoids making `Object.prototype` exotic.
+
+MG: Is it perfect? Of course not. This is a mitigation and doesn’t fix the whole class of thenable problems. In fact, in the write-up of the proposal, you’ll see it definitely addresses some of them and definitely does not address others.
+
+MG: Now, I didn’t want to come with zero data, because I did want to know how likely it is that this could be compatible. Unfortunately, I goofed a little when I did this telemetry, so it doesn’t quite answer the question I was hoping to answer. What I have is telemetry added to the thenable paths in Firefox, collecting three bits of data. First: did you ever, on a page, resolve an object by going down the path of calling `then`? The second bit is: did you resolve the `then` from an object’s prototype, any prototype at all; essentially, "is it not an own property" is the only check. And the last bit is: was the `then` property resolved on a standard prototype? Because this was cursory data that I whipped together for roughly this presentation, I used as a surrogate for "what is a standard proto" a big enum of the standard prototypes that we have inside SpiderMonkey; essentially, if the prototype you found the `then` property on is one of those, I call it a standard proto.
+
+MG: This is a flawed metric for two reasons. One, I mentioned the idea of trying to fix this for WebIDL stuff, and this doesn’t count any WebIDL prototypes, so it would be an under-count. The other is that it doesn’t actually answer the question I was hoping to answer, which I didn’t realize until I was making this table: if the only thing we did was mark `Object.prototype` as an internal proto, give it the internal slot, how often would we run into that on the web? I can’t give an answer to that. I would probably add that kind of telemetry if we got to Stage 1.
+
+MG: The numbers: well, what I have been learning from telemetry lately is that the numbers never match my expectations. This is across four days in February. You can see that 2.2% of pages are resolving an actual `then` property. Of that, 2%, so the vast majority, are getting it off of a prototype; this probably makes sense. 0.13% are getting it off of a standard prototype, which, if I’m being very honest, is quite a bit higher than I was hoping for, on the order of an order of magnitude higher. I don’t have answers to what kinds of pages actually do this, or whether there are real use cases this would impact. I have no idea. But I thought I would bring the data I do have to committee.
+
+MG: So this is a problem statement more than anything else. I’m not married to any of my solutions. I just wanted to highlight that this is a problem; we’ve seen it multiple times, across multiple engines, and it seems like something we could do something about in committee. I would love to hear people’s suggestions for other answers, solutions, problems, heck, even suggestions for telemetry to drive this. I’m open to that. But, yes: Stage 1? And I guess, questions.
+
+USA: Before we start with the queue, I would like to remind everyone that it’s a long queue. But, yeah, without further ado, first we have WH.
+
+WH: I just want to make sure I understand the previous slide correctly. Are you saying that one out of 20 `then` lookups find `then` on a standard proto?
+
+MG: No, no, this is a percentage. So this is one out of 1,000 pages –
+
+WH: Yeah, but the total thenable percentage is 2%, and I’m dividing the two percentages.
+
+MG: The denominators on these are all the same: roughly the number of page loads encountered. So on a given day, Firefox loads whatever billion pages, and of that billion pages that get loaded, 2% encounter a thenable, and 0.13% encounter a `then` on a standard proto.
+
+WH: So 1 in 20 pages that resolve any thenable resolve one on a standard proto?
+
+MG: WH, that’s not necessarily correct, because you could have more than one thenable on a page load.
+
+WH: Okay.
+
+MG: It is literally just a single bit of information per page load. It does not have any indication of how often it happens. If you had a page that put a `then` on every single standard proto and resolved every single thing, it would still only show up as a count of one.
+
+WH: Okay, thank you.
+
+JHD: Yeah, so you talked about three options. I’m just clarifying: for the first one, I assume we’d use the AO we added for `Iterator.prototype`, where if you try to set on it, with a target inheriting from `Object.prototype`, it would just create an own property? Because that AO, SetterThatIgnoresPrototypeProperties, ignores prototype properties.
+
+MG: Maybe. I pulled out –
+
+JHD: You just don’t have those details; it just occurred to me during the slides.
+
+MG: It was really just sort of highlighting: these were proposals from when we were doing the spec remediation, and I thought I would bring them as an example of things that could be done. I don’t remember the exact details of how that was supposed to work.
+
+KG: It does have to be exotic, because you don’t want `then` on object proto to start passing; it would have to be exotic. Anyway, my topic was: I do support Stage 1 for this, or exploring this problem area. There’s definitely a lot of space for solutions or partial solutions here. I also wanted to hear your thoughts on the `Object.prototype` solution. Like, you proposed this alternative, which suggests that you thought there was a reason to do something else, and I was wondering –
+
+MG: Generally, from an engine perspective, making objects exotic is a pain, because it means you now have to special-case that object, especially on the property definition and reading paths; making an object exotic has a cost. And `Object.prototype` is a very important object, so making it exotic feels wrong. It could very well be that we absolutely could do it and even make it fast, but it feels like the wrong approach. It also feels a little confusing to people, in a way that promise resolution just ignoring it is slightly less so. I don’t know, it feels inharmonious to me, but that is really a gut feeling.
+
+KG: That’s very valid. On how it will feel to people: my hope is that no one will ever know that we do any of this kind of thing unless they’re already digging around in the guts of stuff, so I’m not super worried about whether something will feel weird as long as you just never run into it. I’m okay with doing arbitrarily weird things as long as they are doable efficiently in engines and no one has to know about them unless they’re trying to do something strange like putting `.then` on `Object.prototype` in the first place.
+
+MG: This is where I really wish I had the split-out telemetry, where I had split `Object.prototype` out from all of the other standard protos. I did not, and I regret it. But you found these numbers surprising, and so this is my only other feeling here: I too hope that nobody runs into this, but these numbers already surprised me, so people are doing weird stuff out there.
+
+KG: Yeah, agreed.
+
+MAH: Yeah. So we are generally interested in the issue of reentrancy with promises, and it wasn’t entirely clear to me from the presentation whether all the issues you have found, the CVEs you have experienced and so on, are due to synchronous reentrancy when resolving thenables, or merely due to the fact that thenables exist and can be adopted. If, as I understand it, the issues are synchronous reentrancy when handling thenables, or custom logic running during the promise resolve algorithm, then I believe we should explore that problem and see if there is a way of having a basically safe promise resolve that is guaranteed not to trigger any user code during that step. This is actually something we brought up a couple of years ago and were interested in trying to solve. I think this problem is not specific to the spec or WebIDL; it is also something that user code may want to protect itself against. So I would like to explore the more general problem of synchronous promise reentrancy triggered by thenables. And thenables are not actually the only trigger: there is also the `constructor` property lookup that happens during promise resolution. It’s wider than the `then` property.
+
+MG: The constructor lookup thing is actually kind of interesting, and I hadn’t really considered it. I’d appreciate it if you’d open an issue on the repo that mentions it, because I will 100% forget by the end of this call. The one thing I would say, and I hope the bottom of the repo already says this, is that I do recognize this potentially fits into the general bucket of invariant maintenance and opting in and out of things that the stabilize proposal has been talking about.
+
+MAH: This is independent of stabilize. This would be an explicit promise resolve, so anybody interested in handling promises while knowing they won’t trigger reentrancy can adopt that operation.
+
+MG: Yeah, I would open an issue; I think that’s a good point. The one thing I did say is that this internal proto thing feels like the kind of magic you could imagine wanting to give users access to via the stabilize proposal: terminating the lookup for this sort of thing. But as I said, I’m very much not committed to any particular solution. I’m more just irked by how many CVEs this has caused, and I would love us to come to a solution. It doesn’t have to fix every problem; if it makes this twice as safe, that would be great. It just makes everybody’s lives a little bit easier if we can try to do that.
+
+MG: The other thing I should mention here, which I didn’t put in my slides: there’s also a possibility that we just decide that TC39 isn’t the right venue for this, and that ultimately this is a problem that could or should be solved by the WebIDL spec, and we could talk about that as well. Taking this out of TC39 is also an option, but for myself, this seems like a problem relevant at least to the people in this room, so I thought I would bring it.
+
+MAH: And I want to reiterate: I am very much interested in the general problem. I would like to generalize it beyond WebIDL and engine implementations, to how you handle promise objects safely, in general, without reentrancy.
+
+KG: So, most of the CVEs that I’ve seen aren’t about reentrancy of promise objects. They’re about things unexpectedly being treated as thenables when they weren’t intended to be thenable at all. It’s not that you made something which you were expecting to await into a non-native promise, and that did something weird. For example, the case that came up recently was the iterator result objects returned from async generators, which have `value` and `done` and inherit from `Object.prototype`: you could unexpectedly make those into thenables by mutating `Object.prototype`, but they weren’t intended to be promises at all. So it wasn’t promises being reimplemented that was the problem; it was things unexpectedly being thenable, which is a slightly different issue. Also, I want to point out, I put this in the Matrix, but in case you don’t see it there: there is a thenables proposal by Justin, who hasn’t been participating much, called faster-promise-adoption, that touches on some of this stuff, and I think for the specific problem of the constructor check, there’s a possible solution in that repository. It doesn’t have any actual overlap with this proposal, but it is in a similar area.
+
+MG: Okay, thanks. I can’t see anything except my slides right now, but I will look when I’m done.
+
+JHD: So I have a couple of things. Real quick, I just wanted to ask about the telemetry. It sounds like you said this is just a single bit of information, but is it possible to have more information, like which standard proto object it was, things like that?
+
+MG: All things are possible; it depends on how much work you want to put in. In this case, I was taking the easy path, which is what we call the use counter path: basically, you name some property, and then when that thing happens, you say, hey, it happened! Adding the "hey, this happened, and it was this thing" is a little more challenging; it’s a lot of code to write to get that to work. What I would do is take this particular bit and split it in two: "it was on a standard proto" and "it was on `Object.prototype`", to give me one more piece of insight. Longer term, if we actually want to pursue an idea where we’re really, really concerned about web compat, I could start plumbing into the more complicated bits of where we see this and what paths are being monkey-patched. I could do it, but it takes time and effort, and this was supposed to be quick; I did it in an hour and a half. It was not intended to be bulletproof, inarguable stuff.
+
+JHD: Okay, thank you. That clarifies it. So my queue item was: it sounds like you said part of your interest in option 3 was that it avoided making `Object.prototype` exotic, but if it has that slot, it’s exotic, so it doesn’t seem like it avoids that. And then, separately, if objects have that slot, there does need to be some sort of way to check that they have it, some form of brand check, whether direct or not. I’m not sure –
+
+MG: From the specification side, maybe. From an implementation standpoint, it becomes a very easy check: I am walking the proto chain; is my object `Object.prototype`? Stop. The implementation does not have a real reified internal slot. It’s a fiction for talking about this. That’s it.
+
+JHD: So obviously the details of this are Stage 2 stuff. I wanted to raise the thinking that if you’re just trying to refer to the current realm’s `Object.prototype`, that’s fine, but if it’s a cross-realm slot thing, then it definitely makes it exotic and needs some sort of brand check. But either way, I agree with everything that’s been said about Stage 1, whether it’s for the problem of promises or even the more general problem of reentrancy and evaluating user code. But I think, regardless of pursuing this, it seems prudent for WebIDL to consider producing null-prototype objects instead of standard objects, because –
+
+MG: I think that ship has sailed too far; that particular ship is gone. I would be shocked if that was web compatible.
+
+JHD: I mean, perhaps only for new objects it produces. Since WebIDL itself is just a spec document, it seems worth trying to stop the bleeding if there’s something subpar in it. I still think we should be pursuing this problem here; in parallel, that’s my suggestion to consider.
+
+MG: Yes.
+
+JHD: I’m done.
+
+LCA: Yeah, on the use counter thing: I think Firefox does not track which pages it actually saw the use counter increment on, but Chrome, for example, does. So if you add a use counter to Chrome, it would give back the list of pages the use counter actually hit. And then you can do more investigation to see what is actually happening by looking at the source code. So maybe it’s –
+
+MG: I would love it if – I will probably not hack use counters into V8 for the purposes of this, but I would love it if somebody else did, especially given that that exists. That seems nuts to me, but yes, that is a challenge I have right now: I can tell you that there are these 0.13% of page loads that do this thing, and I cannot tell you what they are. I have attempted to find some by rummaging around on the Internet with an instrumented browser myself, and I have yet to do so.
+
+MG: I was surprised to discover that YouTube apparently uses an actual thenable in the middle of loading. I don’t know what for, but it does. That’s all I can say right now.
+
+USA: Next we have a reply by DE.
+
+DE: So I’m a little bit skeptical of this assertion that there must be brand checks for anything involving an internal slot. I agree that for a lot of the brands we add, we should have check predicates for them, but we as a committee have not adopted an overall stance on this.
+
+JHD: That’s incorrect. At the beginning of 2015, when I proposed removing `Symbol.toStringTag` from ES6, the committee had consensus that we would not remove it, but that all built-ins would have brand checks, as they did at the time (with an oversight around Error and arguments objects), and moving forward as well: all new built-ins would have brand checks. We have maintained that for all new things we’ve added; that’s also part of the motivation for `Error.isError()`. And as far as I’m aware, there hasn’t been consensus to change that consensus.
+
+DE: Different people have different interpretations of what happened then.
+
+JHD: Certainly.
+
+DE: And I think before asserting that the committee has a policy, it would be good, as YSV proposed a while ago, to propose it for consensus as a design principle; I have an item on the agenda for a different design principle, a particular design goal for the committee. Until then, I think any assertion that something must be some way would be better stated as "I would like it to be like this", because –
+
+JHD: I did not say that the committee has such a policy.
+
+DE: You said it must be this way.
+
+JHD: Yes. Implied is: because I feel it must be this way, and I would object if it were not, as everyone else in this room can today and will continue to be able to whenever they have an objection. I appreciate your note on my wording, and I do agree that having such a design goal document would be helpful.
+
+USA: So we’re at time. With that, MG, would you have time to stick around if we make an extension, or do you want to come back to this later?
+
+MG: How much is left on the queue?
+
+USA: There are seven topics on the queue.
+
+MG: I have some time, like another 15 minutes, but that’s about it.
+
+USA: All right. That gives us aroun –
+
+MG: One second. I shouldn’t speak before I know for certain – yes, I have some time.
+
+LCA: I support Stage 1. I think it’s great that you’re doing this investigation. One concern with option 3: the slot would only be settable from outside of JavaScript, so any polyfilled built-ins would not be able to set it, which would be kind of unfortunate. And then additionally, in Node.js and Deno at least, a lot of the implementation of WebIDL-style specifications happens in JavaScript itself, so it becomes not impossible but very annoying to have to set this flag on objects that aren’t actually created outside of JavaScript. So I’m just somewhat concerned about option 3, unless there’s also a way to set this flag from JavaScript, which is then probably closer to having a `Symbol.thenable` method or something.
+
+MG: I haven’t thought about polyfilling it at all. That’s an excellent point; I encourage you to open an issue on the repo so I don’t forget about it. As I said, I’m not married to any solution. I sort of don’t love the idea of adding another symbol, but I see your point about polyfilling.
+
+USA: Next in the queue we have SYG.
+
+SYG: Given that for import defer we are already special-casing `then`, that says to me that we have some precedent for considering `then` a special evil that might be worth special-casing. While I also hate the reentrancy problem and would love to solve it, I am against making this proposal about solving reentrancy. The bugs that I have seen are surprising not so much because of reentrancy but because thenables let things that aren’t promise-shaped flow into places that expect promises, and that is the source of the bugs. With user code in general, in my opinion, the problem is not reentrancy, but that once you go into user code it invalidates your assumptions about the things you’re loading from slots and the shape of the thing you’re expecting. I would like the goal of this proposal to be preventing that class of bugs more than preventing the general class of reentrancy bugs. I support Stage 1.
+
+SYG: I guess the point is that I don’t really have qualms, if it comes down to it, if we think the most bang for the buck is to special-case `then` in some super weird way and fix that class of bugs; I’m happy with that. We’re already doing it with import defer, where the namespace object acts very weird in that one case as well.
+
+MG: Okay. I agree.
+
+MM: So you’re right, import defer does special-case `then`, but it’s a very, very contained special-casing, in that the purpose of defer is to postpone when the module gets evaluated and then to evaluate it on demand, and the special case for `then` is just about how early the on-demand evaluation happens. It’s not a special-casing that’s going to surprise many people. But you’re certainly correct. I see NRO wants to clarify; please do.
+
+NRO: The special handling of `then` in that proposal is not actually special-casing how promises work; it’s that deferred namespace objects don’t have a `then` property. That’s the only special-casing. In the eventual model, the object doesn’t have a `then` property; it avoids this problem by making sure it can never happen.
+
+MM: Thank you, I had missed those details. I think that makes the case stronger. So, I am very interested in the reentrancy. I take seriously the point KG made that a lot of the CVEs here are not about reentrancy. Nevertheless, as a Stage 1 proposal, I would very much appreciate the goals being stated broadly enough that the reentrancy problem can be addressed, if it can be addressed pleasantly. I believe it can, and if these other approaches all turn out to be infeasible, as I suspect they are, we’d want whatever we do to be compatible with a Stage 1 problem statement that includes the possibility of fixing the reentrancy.
+
+MG: My preference is to keep it narrow, but I don’t really have in my head what the reentrancy problem looks like and what the scope of solutions would be. This is me saying: I don’t know yet. If someone can open a clear issue on the repo, I can think about it. I’m willing to keep pushing on this: if there is a nice harmonious solution that kills two birds with one stone, cool, I’m totally down for that. My preference is to address the concrete CVE-producing problem, and if the reentrancy thing can’t be done in a nice way with this, we should split it into a separate proposal and figure it out there. But in the short term, I’m totally fine with piggybacking for now.
+
+GCN: I’m curious what the scope of this proposal is defined to be. I’m generally in favor of curtailing the power of thenables; the first proposal I made for TC39 was in that vein. What I don’t understand is: is this proposal specifically about the promise resolve operation, or more general? What is being targeted here?
+
+MG: My goal is basically: I would like a `then` on `Object.prototype` not to get invoked by promise resolution, because web authors, or rather engine implementers, forget about that behavior too often, and this leads to bugs where an attacker is able to do something like force a GC to happen inside the `then`, and then when execution returns to the C++ code, objects have disappeared, and then you get problems. That’s really my high-priority scope; that’s what I would like to address. I proposed a slightly more general solution because I think there is a nice harmonious design in it, though I’m worried from the telemetry numbers that it may not be as web compatible as I hoped. There has been some talk about the reentrancy problems with promises, which I can’t really speak to without more time to think about them. But the most concrete thing that I want is: `Object.prototype.then` should not cause some random object to become a thenable and invoke user code.
+
+GCN: Okay. Just as an example of something that I assume is out of scope: when you dynamically import a module and write `export function then` inside the module, that function is called. I assume that is an example of something we won’t attempt to fix in this proposal.
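+
+The case GCN describes looks like this (file names are just for the example):
+
+```js
+// module.js
+export function then(resolve) {
+  // Called during promise adoption, because the module namespace
+  // object has a callable `then` export.
+  resolve("not the namespace!");
+}
+
+// main.js
+const ns = await import("./module.js");
+console.log(ns); // "not the namespace!" rather than the module namespace
+```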
+
+MG: No.
+
+GCN: Under any proposed solution, any direction this could go: basically what I want to understand is that there are a lot more weird examples like that, and which things are in scope and which things are out.
+
+MG: Mostly I would say all of that module resolution machinery is out. But this comes down to: is there a harmonious design that will be efficient to implement, that will address the problem, and ideally address users’ expectations a little bit? I have no idea what people would expect if you exported `then` from a module. Is that expected? Is that something people are running into and being bitten by? I have given it zero thought. But if it is an instance of this more general class of problem, sure. I think we can shave this down a little bit over time as we potentially try to get to Stage 2.
+
+USA: We have a point of order that five minutes, actually four now, remain in the time box. Next we have DE.
+
+DE: I want to request a couple more minutes on the time box, just to get through the rest of the queue. So, I have a couple of other ideas for how this could be resolved. If, in option 3, we care mostly about promises created by WebIDL, then WebIDL could create promises with a non-writable, non-configurable `then` that is the original `then` method, rather than reading it at a later point. That’s one thing to consider. Another, if we want to solve the more general problem of things being unexpectedly thenable, would be to make the lookup of the `then` property a special lookup that doesn’t bother to do a read if the prototype is the original `Object.prototype` of the current realm. I guess this would benefit from the telemetry data you have here being expanded. If, for example, anything with a null prototype were also skipped, that would solve the module thing, but I think it’s too late for that, web-compatibility-wise; people are excited about that pattern for modules in particular and may be using it. Anyway, I’m very happy that you’re investigating this. Subclassing built-ins was kind of a mistake; I’m glad that we’re undoing it where it causes especially big problems.
+
+USA: We have a topic by RGN.
+
+RGN: A follow-up to SYG’s last topic: he said he didn’t want to focus on reentrancy as the problem, and then went on to describe a scenario that to me seems more general than reentrancy. We often use that term as a shorthand, but the reality is it’s really about an interleaving where code runs that other code wasn’t expecting, and can have effects. In that case, the boundary was from implementation code to user code, but the same kind of interleavings affect user code to user code. I’m a little hesitant to carve out the narrow space of “reentrancy” when we are really talking about a class of problems that is not just analogous but in fact broader, because non-reentrant code can still have effects at a distance. That’s exactly the kind of thing that we hope to avoid.
+
+SYG: Sorry, what was the question?
+
+RGN: I guess I’m looking for a clarification of how you think the scenario you described is different from this generalization of reentrancy.
+
+SYG: So, I agreed with MM’s response. My concern is that I think solving the general reentrancy problem is harder, and I have a much less clear idea of what that means and what the timeline there is. On the other hand, we know that the thenables corner is a sharp corner for security bugs. So the value here, for this proposal: if you have to choose between solving the general reentrancy problem and solving it for this thenables corner that we keep getting bitten by, I would like to prioritize the latter, even if we couldn’t solve reentrancy as part of this proposal.
+
+RGN: What if it doesn’t have to make the choice?
+
+SYG: If it doesn’t have to make the choice, that’s great. MM said that if we could find a harmonious way that kills two birds with one stone, he would like that, and I would also like to solve the user-code interleaving problem. My hunch is that the thenable problem in itself is more tractable than the general problem. If it turns out we can solve both, that would be great; that’s a win-win.
+
+RGN: That response is helpful, thanks.
+
+USA: Next we have Chris on the queue, who supports Stage 1 and spending time exploring the problem space. And that’s it. So MG, would you like to ask the committee for consensus on something?
+
+MG: Do we have consensus on Stage 1? Sounds like the answer is yes.
+
+USA: I think so as well. Let’s give folks a few minutes to respond if they have any other comments.
+
+MM: I just want to confirm that we are generalizing the problem, and considering the problem statement to be general enough to cover the reentrancy? I support Stage 1 with that understanding.
+
+MG: Yeah. I’m willing to look into it more, and then we’ll look at the set of solutions we can come up with, see if there’s a middle ground, and go from there.
+
+MM: Okay. And as you suggested, we’re perfectly happy to open issues and continue the discussion there.
+
+SYG: Sorry to interject; I was typing something. MM, I want to double check, since we’ve gone back and forth a few times now: if, as part of the exploration, there does not exist a good solution for both the reentrancy problem and this problem, that doesn’t tank this proposal? I think this problem is worth solving even if, after spending some time, we don’t find a good general solution to reentrancy.
+
+MM: If this proposal’s problem can be solved in a way that is worth the cost, yes. The existing approaches that were mentioned, none of them seem feasible to me in terms of regularity in the language, but this is a Stage 1 exploration, so even if the reentrancy part is not there, I don’t think I would block Stage 1 based on the infeasibility of the concrete approaches. If the problem can’t find a pleasant solution, so be it; that would be fine.
+
+MG: I want to look at the broader case, mostly because I don’t have a good definition of all of these pieces in my head right now; people are using “reentrancy” in ways that I don’t think match how I think of reentrancy, and I need to read some background here. I’m willing to approach it, but as I said, my priority is: let’s make thenables a little less powerful. If it helps with reentrancy, cool, we can put it in this proposal and take a look at it. If that goes badly and we can’t find a harmonious solution, I would like to split it out.
+
+MM: In response to SYG’s question to me, I think there might be a misunderstanding. I’m not saying there’s a general solution for the general problem of reentrancy for the whole language; I would be flabbergasted. Different reentrancy problems call for different solutions. What we’re suggesting is that there’s a more feasible, more constrained approach to promise reentrancy than the ones that were concretely suggested in the proposal, specifically having to do with a safe form of `Promise.resolve`, and fixing `await` to use the safe form. Obviously I will be clarifying that on the issue list. But that’s the only sense in which it’s a more general solution to reentrancy; it’s not solving reentrancy problems in general.
+
+SYG: Got it. Okay, I think that satisfies me. All I was saying is: don’t let perfect be the enemy of good. This is a real problem.
+
+DE: I just want to agree with what SYG was saying: in particular, reducing the likelihood of CVEs is worth a lot, and if that means we end up having more complexity, I think it’s worth that cost. MM was saying he found it unlikely that something would be worth this complexity cost, though actually I wasn’t sure under which conditions. But I would be okay with taking something that’s a bit messy if it reduces the likelihood of CVEs.
+
+KG: Strongly agree with DE. The cost of CVEs is paid by users of the web. The cost of the language being a little more complicated, especially in a weird dark corner that no one looks at, is paid only by the people in this room.
+
+USA: That was all of the queue. Matthew, do you want to respond to that or make any final remarks before we move on?
+
+MG: No. I’ve already overshot my time box extremely badly. I would prefer to stop.
+
+### Speaker's Summary of Key Points
+
+* Broad support for making some kind of change here, even if it’s a bit messy and unprincipled, if it fixes the risk of vulnerabilities.
+* Some interest in more broadly attempting to solve promise reentrancy. Matthew is OK with taking a look at this as part of this proposal, but some on the committee would also prefer not to let “perfect” be the enemy of “good”.
+
+### Conclusion
+
+* Stage 1 achieved
+
+## `Math.clamp` for Stage 1 or 2
+
+Presenter: Oliver Medhurst (OMT)
+
+* [proposal](https://github.com/CanadaHonk/proposal-math-clamp)
+* [slides](https://docs.google.com/presentation/d/14QGuyCHlsSr4ZSCkbFuaFZk8793EAMS1nAkdW_csLhA/edit)
+
+OMT: I would like to propose adding `Math.clamp`, for clamping a value between two numbers, mostly because it appears in many codebases, and not needing the boilerplate would be a better experience. It should also improve performance: instead of having, like the example shows, a `Math.min` and a `Math.max` call, it would be one call, and hopefully that helps with optimization. Other languages all call it clamp, with some arguing over whether the arguments should be min, val, max. There’s an NPM package called clamp, and a lodash implementation, which get over a hundred thousand downloads; these take the value first. Learning from these, the name is essentially standard, and so is the argument order.
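+
+The boilerplate in question, next to the proposed form (argument order as currently proposed; `Math.clamp` is not yet implemented anywhere):
+
+```js
+const value = 42, min = 0, max = 10;
+
+// Today: clamping by composing Math.min and Math.max.
+const clamped = Math.min(Math.max(value, min), max); // 10
+
+// Proposed:
+const clamped2 = Math.clamp(value, min, max); // 10
+```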
+
+OMT: So I propose doing val, min, max for now, and no coercion, which is the direction modern proposals go in; if the limits are not a number, it just rejects them. It doesn’t comply with the suggestions, but it makes sense for these functions.
+
+OMT: So I propose moving to Stage 2, which might be a hot take, but I think it matches the process, because there’s a preferred solution: I think the language should have this as `Math.clamp`. The design may change significantly; that’s all allowed during Stage 2. There’s already spec text and a proposal document and everything. I can share the spec text.
+
+USA: The spec text is visible.
+
+JHD: I definitely support this. I actually already reviewed the spec as well, and I volunteer to be a reviewer if it achieves Stage 2. I think this is great. If the only concern from the room is the argument order, that’s definitely something to be resolved within Stage 2. My personal preference is what it is right now, because I don’t have any familiarity with using it in CSS, and everywhere else I have seen it on a computer in my life it has been in this order. That’s also the way I describe it in English: clamp X between Y and Z. And that’s it.
+
+NRO: So I support this for Stage 1, or even 2, I guess. But for the order of the arguments, I think we should try to match CSS more than what other languages are doing. Nobody will be using those other languages side by side with this, while people will be writing clamp in JavaScript and in CSS for the same application. It’s better to have the two functions on the same platform aligned than not.
+
+??? (unknown): I see the confusion. Would you still support this for Stage 2?
+
+NRO: Yes.
+
+??? (unknown): Thanks.
+
+LCA: We have a bunch of comments on the same topic; we should do them all in this topic. All other programming languages use val, min, max; we should not diverge from that because CSS does something weird.
+
+MF: I support CSS order. I also don’t think we’re going to come to an agreement on that. Back to my point: I think we should explore the prototype method that was suggested in some of the issues, like `Number.prototype.clamp`, where the this value is the target and then you pass a min and a max, which hopefully we can at least agree will be min first and max second, though I’m not sure at this point with how the conversations have gone. I still think Stage 2 is appropriate even with that level of design change still up in the air, so I would not oppose Stage 2 advancement, but I would like to see that prototype method explored a little more during that stage.
+
+SFC: Yeah, mostly just to echo what MF said: OMT finished the slides in two minutes and is asking for Stage 2, and there’s still a lot of design space here. The prototype function is one question, NaN handling is another, and ordering is another. I think it’s a fine thing to do; the motivation is basically, look, there are all these other libraries that do it, therefore we should do it, which is usually fine. It seems rushed to skip to Stage 2. I won’t block Stage 2, but it seems like there’s still quite a bit of design space.
+
+OMT: I agree about the design space. My main argument is that, according to the process document, that’s fine for Stage 2. As far as I know, Stage 1 is about deciding on the problem, and I think everyone in this room agrees that having this makes sense.
+
+WH: This mostly looks good. There are two controversies here. One is what happens if the value is NaN — I think that returning NaN is the right answer, but we should discuss that. The other thing that bothers me more is that `Math.clamp(x, 0, -0)` throws, which seems strange since +0 equals -0.
+
+OMT: I originally wrote the spec text during Tokyo, and I think I spoke with Troy (?) and someone else.
+
+WH: Line 6 of the algorithm on the currently displayed slide.
+
+OMT: I think that was decided to avoid confusion. I’m open to changing it if people think it’s better.
+
+WH: None of the existing implementations would throw in that case. The result should just be, I guess, +0.
+
+OMT: I’m open to doing that.
+
+SFC: I just think less-than operator semantics are probably what we should follow in terms of minus and plus zero. If we want to be stricter, we could.
+
+EAO [on queue]: +1 for Stage 2
+
+SYG: I don’t really have any complaints about Stage 2 here, but I do want to urge caution, I suppose, about the "faster" point. I’m skeptical that in production engines this will be meaningfully faster, probably not until you hit the optimizing tier, and even in the optimizing tier you could do a bunch of stuff today if you see max-of-min at that point. I still think it seems a good thing to have, given that it is a stand-alone operation that’s easier to read intent into than max and min. That’s fine; I just don’t want to oversell the faster bit.
+
+OMT: Yeah, I agree. I just view it as a potential bonus; it’s not a potential downside.
+
+KM: I agree. I think engines probably will see through this and convert it into the same optimal code.
+
+KG: I want to call people’s attention to the NaN issue: what do you do with NaN for these inputs? It’s consistent across other languages that NaN for the value argument just means the result is NaN. There’s not nearly as much consensus for NaN for the min and max arguments. I kind of prefer throwing, because I like rejecting invalid values, but I see the case for just returning NaN in those cases as well, because that better matches what you would be doing otherwise. I just want to call people’s attention to this question, which we can absolutely resolve later.
KG: I want to call people’s attention to the NaN issue: what do you do with NaN in these inputs? It’s uncontentious across other languages that NaN for the value argument just means the result is NaN; there’s not nearly as much consensus for NaN in the min and max arguments. I kind of prefer throwing, because I like rejecting invalid values, but I see the case for just returning NaN in those cases as well, because that better matches what you would be doing otherwise. I just want to call people’s attention to this question; we can absolutely resolve it later.

KM: If you do any validation-type stuff, I think once you throw for NaN and the like, you might lose a lot of your performance, because you do a bunch of checks instead of just allowing the weird behavior. Compared with what people would write most of the time when they care about performance, this is going to be slower than that.

MLS: On my reply: KM covered both my points almost completely. There are seven checks for exceptions, and that’s a lot of work you need to do to make that happen.

DE: This proposal seems very useful for everyday coding. The details we’re talking about are important to iterate on, and Stage 2, I think, makes sense as the time to iterate on them. I have other opinions, but they don’t matter that much.

MM: Just bringing up that the notion of clamp makes as much sense for BigInt. That’s not an objection; I support this going even to Stage 2. I thought I would raise it to get your thoughts.

OMT: I was talking to JHD; there’s an issue in the proposal repo about BigInt. It’s more a question of whether the `Math` functions should do it. I’m definitely open to doing it, with consensus.

JHD: In general, the contention around that is whether some of the `Math` methods should work for BigInts. Some of them obviously can’t. Not everyone is equally convinced that some should support BigInts and some shouldn’t; some think it should be all or nothing.

MM: I have the same misgivings. I support Stage 2.

SFC: I agree it makes sense for BigInts, and it also makes sense for decimal and all the numeric types. Does it make sense for dates and Temporal objects? Does it make sense for anything else that can be compared in the language? It’s an interesting question to ponder. But an important question to think about: let’s say we limit it to Numbers and BigInts. Fine. Now we have to do a brand check in the `Math.clamp` function. If we have prototype methods, we don’t have to do that kind of thing. That seems like a design question that should be answered quite early in the design process.

MLS: For other types, we should probably put it on the prototype of those types instead of putting it on `Math`, because `Math` is for Numbers.

DE: As MLS said, `Math` is for Numbers. We explicitly decided during the design of BigInt not to extend `Math.min` or `Math.max` to BigInts; the idea is that you’re not doing generic programming over different numeric types. Anyway, I don’t mind the idea of doing it as a method on `Number.prototype`, but including it in `Math` would also be quite consistent with the design of the rest of the `Math` namespace. I think it makes sense to start with only Numbers for this; it’s the most useful basic case.

JHD: I want to add that a benefit of putting it on the prototype is that it resolves all of the ordering questions: the order becomes the same as this proposal’s, because the receiver is the first, implicit argument. So that might be an expedient path for the proposal.
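A sketch of the prototype shape being floated (hypothetical, with today's boilerplate standing in for the real semantics):

```javascript
// Hypothetical polyfill of the Number.prototype.clamp idea
Object.defineProperty(Number.prototype, "clamp", {
  value(min, max) {
    // Receiver-first ordering: value.clamp(min, max)
    return Math.min(Math.max(this, min), max);
  },
  writable: true,
  configurable: true,
});

(150).clamp(0, 100); // 100
```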
SFC: This is a slightly bigger one. The slides were very thin on motivation; there was one slide that basically said, look, there are these languages that implemented it, and these NPM modules with their downloads. What is the actual use case? When should I be using clamp? A lot of the time when I’m using clamp, I actually don’t want to clamp: maybe I want to take a number and put it on a distribution or something. If I’m trying to clamp between minus one and positive one, maybe I want a value that is 0.99-something depending on how close to one it is, I don’t know. Clamp is a very easy tool to reach for, and I can see the argument that it’s useful in a general-purpose programming language. But if it’s a mathematical operation on floating-point numbers, is it always the right tool for the job? When we put something into the standard library, we should put in things that are the correct tool for the job. I think that’s the principle we held with Temporal, and that we’re looking at with decimal and other things: we’re trying to nudge developers to do the right thing. Just because the clamp module on NPM is popular does not mean it’s the right thing to do. That’s something I would really like to see in these slides. Technically speaking, by the TC39 process, that’s a Stage 2 concern; an answer to that question we should have before we say we’re committing to adding the feature. I understand there’s…

SFC: These slides are showing me code examples: look at all of these modules that do the thing, here are some code examples. But they’re not really showing me what problem I am trying to solve. That’s a different question. This is good evidence, though.

OMT: I agree that it could encourage bad usage, but at least personally, I’ve written a general-purpose clamp probably a double-digit number of times in the past five years. That is motivation. I agree it would be nice to get some concrete examples of why a clamp is useful.

WH: SFC, if you’re proposing that `Math.clamp` turn out-of-range values into 0.99 when clamping to the interval [-1, 1], depending on how far out of range they were, this will violate one of two math principles. One principle is value monotonicity, and the other is that values between min and max should not change. You can’t have these two be true at the same time and do this kind of smoothing. So I would not support extending this to something more general.

SFC: I didn’t mean to propose that. What I’m saying is that I want to see the use cases. There might be cases where these semantics are the correct ones to apply, and there may be use cases where they are not, and some other semantics are the ones that are actually correct. But because this is an easy function to use, just right in front of you, you might reach for it even when these are not the right semantics. Are these the correct semantics 90% of the time? 70% of the time? Or 20% of the time? That percentage should also factor into whether we add it to the standard library.

DE: This is a very simple proposal. I think that is a good thing about it. The analysis to work out all the cases is important, but it’s also relatively simple. Overthinking it, or prematurely generalizing it, won’t necessarily lead us to better results.
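To make WH's point concrete, here is a comparison with a hyperbolic-tangent "soft clamp" as a stand-in smoothing function (my illustration, not from the discussion):

```javascript
const clamp = (v, min, max) => Math.min(Math.max(v, min), max);
const softClamp = Math.tanh; // smooth map of all reals into (-1, 1)

clamp(0.5, -1, 1);  // 0.5    (in-range values are untouched)
softClamp(0.5);     // ~0.462 (smoothing changes in-range values)

// The conflict WH describes: if f(1) === 1 (in-range unchanged) but an
// out-of-range input maps to 0.99 (f(2) === 0.99 < f(1)), then f is no
// longer monotone. Any smoothing gives up one of the two principles.
```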
SFC: I totally hear what DE said. I also think it is our responsibility to do that legwork. The simplest proposals are good, but simple proposals are not always the correct proposals. A simple proposal for Temporal would have been to just have a `Temporal.Instant` type, but we ended up spending a lot more time to figure out what actually solves the real problem developers have. And looking forward, there are a lot of simple proposals that I would love to just add, but the simple proposal is not always the right solution. Sometimes it is. Later in the agenda we have the stable formatting update, where we’re proposing the simple solution despite some flaws it has, because we think it’s the right solution. This is a question we should answer; it’s our responsibility to answer it. If we’re just publishing a library on NPM, the bar is lower. As a committee, the bar should be quite a bit higher.

JSL [via queue]: + Stage 2… think we can/should have a separate discussion about Math support for BigInt. Definitely needed.

EAO: It’s come up a couple of times here to consider adding clamp on `Number.prototype`. I just want to note that we have nothing like clamp on `Number.prototype`; the methods we do have are almost all toSomething methods producing a string. Starting to add methods on `Number.prototype` would, I think, be a much bigger change than this little proposal. I like this thing; I think it should go to Stage 2 as it is.

SFC: Just to summarize what I said before: I have concerns about Stage 2. It sounds like there’s a lot of support for Stage 2 in the room. I didn’t discuss this with the other Google delegates, so I don’t think I have the authority to block Stage 2; the concerns are the ones I voiced. Stage 2 is okay, but I have concerns about it. Thank you.

[Consensus?]

NRO: If we are not sure about Stage 2, given that we have somewhat significant design space when it comes to prototype versus static method: we’ve been without clamp for many years, and one more meeting to get to 2 won’t in any way –

OMT: I’m happy to go to Stage 1. I think strong enough concerns were raised.

CDA [via queue]: +1 for Stage 1. Indifferent on Stage 2.

KG: Just the same thing. It feels like there’s a fair bit of design space left for this to be going to Stage 2. I’m happy with Stage 1.

OMT: I originally proposed Stage 2 because I didn’t consider—well, this is why we have the committee. Is there consensus for Stage 1?

DLM [via queue]: Support Stage 1, and share concerns about Stage 2.

LCA: I’m not sure why we don’t consider Stage 2 still reasonable for figuring out the exact solution. Reading the process document: in Stage 1, the committee expects to devote time to examining the identified problem space, the full breadth of solutions, and cross-cutting concerns, and the outcome should be a particular solution space; I think that is done. For Stage 2, the committee has chosen a solution space, and the design is a draft that may still change significantly. That is exactly what this is. I think, by the process, we are in agreement for Stage 2. I don’t understand, for the people who are not in favor of Stage 2: could you clarify what makes you think this should be at Stage 1 and not at Stage 2?

USA: We have a couple of items on the queue. Be quick.
JHD: Setting aside spec-text tweaks like the NaN stuff, which would normally happen during Stage 2, the only two possible shapes are whether the heading on the spec text says `Math.clamp` with three arguments, or `Number.prototype.clamp` with two arguments and the value as the this of the method. I don’t think we’ve ever considered the location of a function to be major semantics before. I would agree with LCA that this is ready for Stage 2, even if we have to have this location discussion.

KG: I mean, I guess it just depends on whether you consider the location of the function to be major semantics. For a larger proposal, we probably wouldn’t. But since this proposal is so small, it feels like basically the whole content of the proposal is still to be decided. I don’t feel very strongly about this. When we say we’ve worked out all of the major semantics, I usually take that to include things like: are we adding a new `Number.prototype` method or not? That feels like a big question to me. Again, that’s just vibes; I don’t feel super strongly. If people want to go to Stage 2, I’m fine with that. It just feels like a large thing to leave open going into Stage 2.

LCA: I want to reply. Sure, but the process document specifically says that in Stage 2 you work out minor API details such as API names, and I think this could very well be considered part of the name of the API: is it on `Math` or on the prototype?

KG: It doesn’t feel like a name to me. The placement of an API is a bigger question than its name.

SFC: On prototype versus static function, I can see an argument either way about whether that’s a Stage 1 or a Stage 2 concern. The thing I consider more of a Stage 2 concern is that when we grant Stage 2, we say, quote, “the committee expects the feature to be developed and eventually included in the standard”, end quote. That means we agree we want this in the language. But I heard two threads that make me question whether we’ve agreed on that: one, which MLS raised, is whether this is actually more performant than `Math.min`/`Math.max`, and the other is whether this is the right tool for the job. Those are concerns that should be resolved before we go to Stage 2, as far as I’m concerned; the question of where the function lives could be considered either way. I hope that answers the question.

DE: I’m wondering, does anybody else have further requests, or does anyone who spoke already have further questions, for OMT’s next stage of research? I think we heard a lot of good ideas, but some of the requests were kind of open-ended. If you have any specific research action items that you think should be taken, that would be good.

OMT: I was going to say, I guess the original reason I pushed for Stage 2 is that I hadn’t considered this. I’m more than happy with Stage 1 today. Also, please file issues on the proposal.

USA: It sounds like the committee is overwhelmingly in support of Stage 1, and there’s nothing else on the queue. So you have Stage 1.
### Speaker's Summary of Key Points

* Generally agreed that having a clamp function is good
* Concerns for Stage 2 raised over `Math.clamp` versus `Number.prototype.clamp`
* The order of arguments was mostly agreed to be (val, min, max)
* Further research into specific usage was suggested

### Conclusion

* Consensus on Stage 1
* Decide between `Math` and the prototype before advancing further

## Immutable ArrayBuffer

Presenter: Mark Miller (MM)

* [proposal](https://github.com/tc39/proposal-immutable-arraybuffer)
* [slides](https://github.com/tc39/proposal-immutable-arraybuffer/blob/main/immu-arraybuffer-talks/immu-arrayBuffers-stage2-7-as-presented.key)

MM: As you’ve heard me ask before, I would like permission to record during my presentation, including audio Q&A that happens during the presentation itself; at the end of the presentation, when I break for questions, we will stop the recording. Is that okay with everyone? Does anybody object? Okay, great. Go for it.

MM: Okay. So last time, we got Immutable ArrayBuffer to Stage 2. Thank you, all. We’ve been working hard on it since then, and this meeting I want to try for Stage 2.7. I’ll give you a status update and tell you about what’s happened since we got Stage 2.

MM: To recap: this is the proposed API change as of the Stage 2 request, which has two new members, a transferToImmutable method and an immutable accessor. The transferToImmutable method produces an ArrayBuffer of the immutable ArrayBuffer flavor. The immutable accessor is of course true for an immutable ArrayBuffer and false otherwise. Still recapping: this is, in some sense, the punch line of the proposal, which is that the immutable ArrayBuffer enables freezable TypedArrays. Part of the proposal was a change to the spec text for TypedArrays such that, during the construction of a TypedArray on an ArrayBuffer, if the ArrayBuffer is immutable, then the indexed properties created on the TypedArray are created as non-configurable, non-writable data properties; otherwise they’re created as configurable, writable data properties that cannot be made non-configurable. So with this change, the TypedArray as a whole is still born not frozen, because it’s extensible and you can add properties to it, but it means that you can freeze it. It was the previous inability of the indexed properties to become non-configurable that prevented the freezing of TypedArrays.

MM: Last time, this was the road to Stage 2; we got all of these. To get to Stage 2.7, the most important thing is resolving all the normative issues. There were three normative issues that we resolved and closed. First: should transferToImmutable take an optional newByteLength argument, to parallel the transfer method and the transferToFixedLength method? We decided that it should; that’s a change. Second: should the new accessor be named immutable or mutable? There were interesting arguments each way. We resolved on immutable, which is what the Stage 2 proposal already had, and we did that for easy upgrade, basically for feature testing: that way, if you write code that checks whether a thing is immutable, and you’re on an older version of JavaScript that does not implement the proposal, the answer will be falsy.
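The feature-testing pattern MM describes, as a sketch (the `immutable` accessor is the proposal's; it does not exist in shipped engines):

```javascript
function isImmutableBuffer(buf) {
  // On engines without the proposal, buf.immutable is undefined (falsy),
  // so older code conservatively treats every buffer as mutable.
  return !!buf.immutable;
}
```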
Third: should we add a method sliceToImmutable, by analogy with slice? The strong motivation for this is that, without heroic implementation tricks, if you have an immutable ArrayBuffer and you do a sliceToImmutable on it, it can give you back a new immutable ArrayBuffer with zero copy, given that the original was also immutable. It gives you a window into the original, and that enables you, as we’ll get to with structured clone, to transmit that between agents, zero-copy, but without giving the other agent access to more than the window of your slice.

MM: Okay, there is a fourth normative issue that we put on the table last time, which is order of operations, including when to throw and when to silently do nothing. We purposely did not close this. We wrote the spec text, for concreteness, using our preferred solution, but that is not a strong stance we’re taking. We’re leaving this purposely open because we want to guide it primarily by implementer feedback. If one order of operations allows implementers to do something simple and high-speed, and another order of operations interferes with either existing implementations or optimization opportunities, we want to take all of that into account before resolving this issue.

MM: So with those three issues closed: transferToImmutable has a new optional length argument, parallel to transferToFixedLength; sliceToImmutable looks like slice, except it produces an ArrayBuffer of the immutable flavor; and we did not change the name of the immutable accessor. The corresponding immutable ArrayBuffer flavor is much like what you saw last time, but extended with the new sliceToImmutable method. The two slice methods are both enabled in the immutable-buffer flavor, because they are query-only. All of the mutation methods, all the transfer methods, and the resize methods throw. And of course, the immutable accessor for this flavor says true, and byteLength and maxByteLength are the same.

MM: Now, we also listed a bunch of issues that are non-normative for Stage 2.7; these are issues we want to put on the table and start pursuing now. One of them is applicability to WebGPU buffer mapping. We got feedback from the WebGPU folks, and the answer is no: because of the nature of immutable ArrayBuffers, they do not apply to what the WebGPU folks need. But the limited ArrayBuffer, Jack Works’ related proposal, which I believe is scheduled after this proposal–

USA: Yep.

MM: —is a good time to get into more discussion of that. I mentioned proposed integration with structured cloning; we did a lot better than that. RGN wrote a proposed modification to the HTML spec, specifically the structured-cloning part of the spec, explaining specifically how structured clone would deal with immutable ArrayBuffers. There were some details there, but the overall result is exactly what you would expect.

MM: Zero-copy operations on the web: this was a mixed bag. It breaks down into a lot of subissues, which we will get into in the next two slides.
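Pulling those resolutions together, the proposed API surface looks roughly like this (hypothetical; none of it has shipped):

```javascript
const buf = new ArrayBuffer(1024);
new Uint8Array(buf).fill(7); // populate while still mutable

const frozen = buf.transferToImmutable(512); // detaches buf; optional new length
frozen.immutable;                            // true (accessor name kept)
const win = frozen.sliceToImmutable(0, 16);  // immutable window, zero-copy in principle
// frozen.resize(...) or frozen.transfer(...) would throw
```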
MM: And then, “update shim according to issue resolutions”. I wrote the previous shim, and I updated it to track what the proposal now is. We not only have the shim implementation, we have a bunch of code that makes use of the shim for useful purposes, and it gave us some lessons on what this is like to use; the punch line being that it’s pleasant to use.

MM: All right, zero-copy operations on the web. I am not going to go through these one at a time. I’m putting the slide up right now mostly to give you a chance to scan your eyes over them and notice things you want to ask about when we break for Q&A. However, I’ll mention a few particular things.

MM: On issue 300, we got a response, timestamped one hour ago, just in time: “overall I’m supportive of this, however I’ve got a bunch of open questions about whether it can be made compatible” with what they’re already doing, which is somewhat entrenched. I don’t think this points out any obvious incompatibilities, just uncertainty about whether it could be compatible. So I’ll take this as overall guardedly positive.

MM: And on Wasm issue 1162, it says “we discussed immutable ArrayBuffer at the CG meeting last week. No blocking concerns, and the proposal is orthogonal to Wasm linear memory for now.” So, no particular connection or help to Wasm, but no interference either: no blocking concerns. And the proposal as it stands doesn’t preclude read-only memory for Wasm.

MM: The second page lists the prior proposals with zero-copy concerns on the web, and related issues. On the web-transport issue 131, I initially got a strong negative from Domenic, supported by, I forget who else; two people came out with a strong “unlikely that it would apply”, because web transport is between address spaces, and the strategy right now is to copy buffers when communicating them between address spaces. For small and medium-size buffers, that makes perfect sense. However, when transmitting huge immutable ArrayBuffers, we should keep in mind the possibility—the implementer’s choice, since it makes no observable difference—of transmitting them by memory mapping rather than copying. For a huge enough ArrayBuffer, let’s say something in the multiple gigabytes, copying takes linear time, whereas mapping does not.

MM: And it just so happens that over lunch I was discussing with SFC the CLDR tables, which are a great example of big data tables; there will be many big data tables that are of interest to many programs, many written in JavaScript. These CLDR tables can be multiple gigabytes. Once a data table gets into the gigabytes, transmitting it by mapping, despite all the weird operating-system shenanigans, is clearly more efficient than copying. Whether it’s worth the complexity inside the implementation is another matter that I will let implementers worry about.

MM: And in light of the possibility of taking big data tables and sharing them zero-copy, a follow-on proposal, which I purposely did not include in this proposal but want to trail behind it, is to add a new import type, let’s call it binary: if you import a file as binary, the result of the import is an immutable ArrayBuffer. That would be the one case where you can end up with an immutable ArrayBuffer other than by populating a mutable ArrayBuffer and then doing a transferToImmutable; this one would directly be born as an immutable ArrayBuffer. So it’s basically a binary asset to be loaded by a program, and in a world of multiple import types, I think this is very natural.
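Such an import might look like this, by analogy with today's JSON modules via import attributes (the `binary` type is MM's hypothetical follow-on, not part of any spec):

```javascript
// Existing precedent: JSON modules
import config from "./config.json" with { type: "json" };

// Hypothetical follow-on: a binary asset born as an immutable ArrayBuffer
import table from "./cldr-table.bin" with { type: "binary" };
// table.immutable === true, with no mutable intermediate stage
```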
MM: Awkwardly, because I don’t totally understand the tool support for looking at web standards, I did some screenshots of the diffs of RGN’s modifications to the structured-clone algorithm, showing the excerpts that have to do with immutable ArrayBuffer. This is obviously just adding a case for the immutable ArrayBuffer to this branch of cases here. This over here is the same: there is already a carve-out for sharing ArrayBuffers without detaching, so immutable ArrayBuffers can also be shared without detaching. Obviously SharedArrayBuffers are already non-detachable, and immutable buffers are likewise non-detachable. And then finally, immutable ArrayBuffer is included in the explicitly enumerated taxonomy of kinds of ArrayBuffer. If you have questions about the HTML language here, hopefully RGN is on the line and can answer them.

MM: Okay. Implementer feedback: we would like more of this, but my understanding is that we don’t need more of it to cross the 2.7 threshold. We have a full XS native implementation of the entire spec; it looks good, and it does not suggest any changes. We have our own shim implementation at Agoric, together with practical uses of it. Agoric uses both Node and V8 for running some of our code, and uses XS for other code; on XS, our plan of course is to use their native implementation, and we wrote our usage code so that it works with both.

MM: The shim got updated to follow the changes we made to the spec, but the shim has this crucial line, that it falls short of the proposal in the following ways. Basically, the key thing is that there’s no practical way for a shim to emulate efficiently freezable TypedArrays, because much of our motivation going into this was that there’s no way to create a freezable TypedArray in the language as it is now; therefore, there’s no way to shim it.

MM: Okay, approval steps. Thank you to JHD and KG and SYG for the approvals. As for MF, I just talked to him verbally in the hallway; he says he defers to KG and SYG. And we got an email from WH that says “looks good to me, with just one comment: why does sliceToImmutable diverge from slice when _end_ < _start_?” I think I agree with WH’s opinion here, but this is in the domain of the purposely-left-open issue of order of operations and which things throw. So if WH wants this change made before approval, I’m perfectly happy to do that before this meeting is over. In any case, that’s where we stand now.

MM: Just as a reminder, this is the checklist of what’s still left for Stage 3 and for Stage 4.

MM: So that is the presentation, and now I will take questions, and let us all stop recording.

JSL: Just to get my head around the mental model: is the expectation that the `immutable` property would extend out to the host? If an ArrayBuffer is passed down to native code, like V8, could it be immutable as well?

MM: So the immutability represents a two-way guarantee. It’s a guarantee that the JavaScript code cannot modify it, and it’s a guarantee that the data contents are stable, so that because of that guarantee there’s no need to worry about changes from underneath.
So how an implementation upholds that guarantee, so that it holds across all the participants in the same zero-copy ArrayBuffer, is up to the implementer; but if the implementer fails to uphold that guarantee, then that implementation does not conform to the spec as we’ve written it. And that’s on purpose. The reason I’m going into this in some depth is that there is a lot of discussion on the issues, which I recommend looking at, about the use of mprotect: using memory-management protection to make the pages actually read-only, with nobody having a read-write mapping. That’s optional, as long as the guarantee is adequately upheld by the implementation, but it’s certainly a belt-and-suspenders approach, and probably only to be taken on huge ArrayBuffers, where we can afford the mprotect intervention. We wouldn’t do it on a 4K ArrayBuffer.

KG: I think this is probably worth calling out in the spec, though, just as an editorial note. You can technically read off the fact that hosts aren’t allowed to change immutable buffers from the essential invariants of the internal object methods: when something is defined as non-configurable, non-writable, one of the invariants required of everything is that it can’t actually change value. But this is a kind of hard thing to infer, and very few people are aware of the invariants.

MM: We would be overjoyed to make this more explicit.

KG: Just as a note, though.

MM: So, yes, let me just ask, since we are asking for 2.7.

KG: That can happen later.

MM: Okay, great. But we would be overjoyed to be more explicit about that. Since it can be inferred, I have a procedural question for you: does being more explicit about something that’s already implied by the spec language have to be tagged as a non-normative note, or can we make it normative?

KG: I don’t think that it makes sense to talk about notes being normative.

MM: Okay. Can we state it normatively?

KG: You can state it normatively if you want, but I would probably just put a note that calls attention to the fact that it is already normatively required because of other things.

MM: Okay. Assertions in the spec are normative.

KG: No. Assertions are strictly editorial. They describe properties which already hold. In fact, if you click on Assert, it takes you to the definition, and the definition of Assert says it is describing a property which already holds; if that property does not hold, it’s an editorial error in the specification. It is something that is necessarily true because of other properties or other guarantees that are normatively spelled out.

MM: Okay, I believe you that that’s what the current language says. I’ll just say that I’m shocked, because I was in the discussion, and that’s not what I thought the conclusion was.

KG: What discussion?

MM: We thought that the two things that had to be in agreement were both stated normatively, such that if they disagreed, the spec was in an inconsistent state, and one could not make a normative derivation from the spec until it was fixed.

KG: Yes, when an assert does not hold, that means the spec is incoherent. But it’s not because the assert is a normative requirement; it’s because the assert is said to be describing a property which holds, and if in fact that property does not hold, then, by definition, the spec is incoherent. And usually you need to fix that by making a normative adjustment.
Sometimes you can fix it by changing the property which is asserted; but if the assert doesn’t hold, the spec is incoherent.

MM: I would love to keep talking about this, because it’s not quite what I understood, but let’s not take up our time with it.

CDA: Yeah, just noting we technically have only a couple of minutes left. We can go to the top of the hour, but there are three other items on the queue. SFC?

SFC: Yeah, just to be brief: I love that your slides, MM, went over all the resolved issues and prior proposals, and that you gave a mention to the CLDR case, which you’ll hopefully work towards as you move forward. And I really appreciate how thoroughly you moved over all the issues and the milestones; I feel confident in the quality of the proposal.

MM: Great. Thank you.

WH: So I just wanted to talk about my comment about `slice`. If we were designing `slice` from scratch, I would agree that throwing on _end_ < _start_ would be sensible, but we already have lots of instances of `slice` in the language, and I think it would be better to stay consistent with them. This should be resolved by Stage 2.7, because this has nothing to do with implementation experience.

MM: Okay. So let me first of all just ask all champions in earshot, which I think is all of them: are we all agreed to make the change that WH suggests? He has talked me into it. It is better to be consistent with the mistake than to fix the mistake in one place and not the other.

RGN: I’m convinced.

MM: Okay, great. RGN, since you’re the steward of the actual spec language, could you do that before this TC39 meeting adjourns?

RGN: Certainly.

MM: Great. Thank you.

WH: If you commit to fixing this, you can mark me as approved. I’m not going to be here on the last day of the meeting.

MM: Okay, thank you. I will mark you as approved and be sure to make that change. (Note: both done)

CDA: We have less than 3 minutes left, and still four items on the queue. OMT?

OMT: Yeah, I just wanted to say I haven’t read the whole spec text, but I like the proposal; it would be useful to my implementation.

MM: Great, thank you.

CDA: Sorry, I didn’t notice that was the end of the message, but thank you for the message.

SYG: I said this in my review, and I would like to repeat it for the stream: I consider this a Stage 3 blocker, in that I do not want to advance to Stage 3 until that PR is reviewed and merged. It is fine to merge things that take dependencies on not-yet-standardized JS features; that has happened in the past in HTML, so that is not an issue, and I don’t think there’s much reason for concern there. But I would just like to point out that I’m adding that extra constraint for moving from 2.7 to 3, in addition to the test262 tests.

MM: Yes, understood. The HTML spec being approved on the HTML side is a blocker for Stage 3. I don’t know if you want to call it normative, but it’s a blocker in any case. I do have a question for you: have you looked at the structured-clone spec text that RGN wrote, and do you have any concerns with it specifically?

SYG: Only at a glance, and it seems fine to me. But getting it reviewed and merged into the HTML spec also involves, I think in the issue template that RGN made there, a bunch of checkboxes; so, yeah, it’s good to get them checked.

MM: Great, thank you.

CDA: Ashley just has a reminder that the slide link is missing from the agenda.
So if you have a chance --

MM: I’ll fix that before the TC39 meeting is over. (Note: was fixed a few days after)

CDA: Great, thank you. And then the last one is KG.

KG: Yeah, sorry; I want to walk back the claim I made previously about it already being fully implied that hosts couldn’t modify immutable ArrayBuffers. Technically, that only applies when someone actually observes one of the values. So, in principle, a sufficiently strict reading could allow mutability.

MM: Wow.

KG: Like, between the time that you create it and the time that you observe it. So I think it sounds like we can just have consensus that the intention is that it be immutable, and we can state normatively that it is immutable. And that doesn’t need to hold up 2.7, because it’s fairly straightforward to state. (Note: fixed in spec)

MM: Great. Thank you.

CDA: And we are— JLS has a message, just noting that Web Crypto might need to be updated to account for immutable ArrayBuffers as well, e.g. `crypto.getRandomValues` takes a TypedArray. Not Stage 3 blocking, I would think.

MM: Thank you. Do I have Stage 2.7? First asking: we have plenty of affirmation in the Q&A there. Does anybody object to Stage 2.7? I think I have Stage 2.7. Thank you.

CDA: Okay. I guess that’s a +1 from DE on the queue.

### Speaker's Summary of Key Points

* All prior normative issues dealt with, except order of operations, which is to be driven by implementer feedback.
* Lots of feedback from the HTML side, mostly positive, with no blockers.

### Conclusion

* Got all approvals needed
* Got Stage 2.7
* Much still needed on the HTML side to get Stage 3

## Limited ArrayBuffer

Presenter: Jack Works (JWK)

* [proposal](https://github.com/tc39/proposal-limited-arraybuffer)
* [slides](https://docs.google.com/presentation/d/1u6JsSeInvm6F4OrmCSLubtDvFVdjw1ESeE5-c_YflHE/)

JWK: I am going to talk about the limited ArrayBuffer proposal. Here is the timeline of some related proposals. The oldest one is read-only collections, by MM, which is still at Stage 1. Two years later, I proposed the limited ArrayBuffer proposal; that is the original version, which I’ll talk about later, and I referred to that proposal when designing the API. At the same time, resizable ArrayBuffer came in, and it went very quickly. Another proposal, `ArrayBuffer.transfer()`, was split out from resizable ArrayBuffer. Then in December, MM proposed the immutable ArrayBuffer again. So part of the motivation is now covered by the immutable ArrayBuffer proposal. The original design of the limited ArrayBuffer proposal tried to freeze things in place, but immutable ArrayBuffer and transfer brought us a new API design style: transferToImmutable.

JWK: Here is the original motivation of the limited ArrayBuffer proposal. First, we cannot make an ArrayBuffer read-only, which means the underlying bits can always be changed. Second, you cannot give others a read-only view of an ArrayBuffer, whether the underlying ArrayBuffer is writable or not, while keeping the read-write view internally. And third, you cannot give others a slice of your ArrayBuffer such that the holder of that view cannot expand it to the whole ArrayBuffer. Say, for example, there is a region of WebAssembly program memory, and you want to give a slice of it to other parties so they can change it, but you only want them to be able to change the memory in the given slice. That is not possible today.
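The third gap is easy to demonstrate with today's APIs: any view holder can recover full read-write access through `.buffer`:

```javascript
const memory = new ArrayBuffer(16);
const slice = new Uint8Array(memory, 4, 4); // intend to hand out bytes 4..7 only

// The recipient can escape the window:
const whole = new Uint8Array(slice.buffer); // read-write view of all 16 bytes
whole[0] = 0xff; // mutates memory outside the intended slice
```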
JWK: Since we have immutable ArrayBuffer today, part of the motivation is covered: the first item is replaced by transferToImmutable. For the other two usages, there are some potential use cases; let me introduce them. The first, “give others a read-only view while keeping the read-write view internally”, is the WebGPU case. There, they need to expose some device memory, and they do not want JS programmers to change it; meanwhile, the memory itself might be changed by host code. Therefore we cannot expose it as immutable, because the contents will change.

JWK: I think this case is very suitable for the limited ArrayBuffer proposal, because we can have a read-write ArrayBuffer that is never exposed through a read-write view. There is no way JS code can modify the ArrayBuffer, but the ArrayBuffer itself is not immutable. The mutable handle is kept by the host, in this case WebGPU, and JS code can only receive a read-only view of it. The benefit is that WebGPU does not need to introduce a new kind of exotic ArrayBuffer view that cannot be created in userland.

JWK: Another use case is a limited range. As I mentioned before, in some cases you might want to share a slice of memory, but not all of it, with another party. I wonder whether these two use cases still sound compelling. If so, I will update the motivation to remove the first one, since it’s already covered by the immutable ArrayBuffer proposal, and continue investigating the other two. If both of these use cases are not compelling, I may want to withdraw this proposal.

KG: I think this is still very valuable, especially the read-only view. Several web specs have expressed interest in read-only buffers, so I think it’s still worth doing.

JWK: Thank you.

PFC: Just to check my understanding of what a limited ArrayBuffer is for: is it correct to say it’s a read-only buffer that is mutable by other code –

JWK: This is the third one, the limited range. Wait, sorry, this is the second one. Yes, the limited write.

PFC: We get the same thing in the third case, though, right?

JWK: In the third case, you can give others a read-only or a read-write slice. The two features are unrelated to each other. There are two things we try to limit in this proposal: the first is read/write ability, and the second is the range. You can limit write, or you can limit range, or you can limit both.

JSL [via queue]: Definite +1 to keeping the proposal. Very valuable.

MM [via queue]: Still looks quite useful. +1 to keeping it at Stage 1.

JRL: So, in dealing with TextDecoder and WebStreams and other APIs that receive typed arrays: any time you hand off a TypedArray to another piece of code, if that code doesn’t track the byte offset and length, it’s going to read the full buffer. It is very common in user code to just call `decode(buffer)`, and now it has decoded everything.

JWK: Yes.

JRL: I have had that happen many times, where I pass something to a library that takes a TypedArray, but it does not respect the bounds I placed on it. Having a limited view window, where it cannot access anything outside the window I gave it, would be so much cleaner for a lot of APIs.

JWK: Yes. I have also been hit by this problem.
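A runnable illustration of the failure mode JRL describes:

```javascript
const bytes = new TextEncoder().encode("hello world");
const view = bytes.subarray(0, 5);     // intended window: "hello"

new TextDecoder().decode(view);        // "hello"
// A library that reaches for the underlying buffer ignores the window:
new TextDecoder().decode(view.buffer); // "hello world"
```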
SYG: So, just a word of warning: the implementation cost could be high. I am not sure how you would like to expose the capability of having a limited ArrayBuffer that aliases another ArrayBuffer. There are several layers of implementation for ArrayBuffer / ArrayBufferView, and it sounds like you would have to do that aliasing onto other ArrayBuffers. I think it’s too early for me to really give any criticism of that; that might be the best design here. But that kind of buffer management in engines is kind of scary, and the cost here could be high. We should be mindful of that as we design for the use case.

JWK: Yes. I will try to make it as simple for implementations as possible. The old version said “freeze in place”, which might be very complicated, but now it’s changed to something like transfer, so at least it shares the same machinery as transfer.

SYG: But transfer detaches the source. How do you provide a smaller view that aliases the same buffer without detaching the source?

JWK: I have an API in mind that might look like this:

```javascript
view = new Uint8Array(buffer, { readonly: true })
view.buffer // undefined
```

SYG: I see.

JWK: So you cannot retrieve the whole ArrayBuffer from it to reconstruct a read-write view.

SYG: I see. It doesn’t sound so bad.

MM: Thank you.

CDA: Michael?

MLS: I want to reiterate a little of what SYG is saying. It’s likely easier for an implementation to share on OS page boundaries; arbitrary beginnings and endings would likely require range checking on any access, so it could be more costly. This also applies a little to what MM just presented in his proposal.

JWK: Does that mean that if we try to align things (e.g. align by 4K), they can be shared more easily?

MLS: (???) on page boundaries, 4K or 16K, something like that. And the underlying OS calls also do things on the same kinds of boundaries.

JWK: Thank you. I am not quite sure about the machinery of this.

JWK: It looks like many delegates expressed that we should stay at Stage 1 and continue to explore the solution space. I guess my topic is done. Thank you.

CDA: Thank you. The proposal remains at Stage 1.

### Speaker's Summary of Key Points

* Original use cases: freeze an ArrayBuffer in place, limit write (of a view), limit range (of a view)
* Now: remove the first one (covered by immutable ArrayBuffer). Limit write: use case from WebGPU; limit range: use case from WebAssembly

### Conclusion

* Many delegates expressed support, so the proposal is not being withdrawn.
* SYG expressed concerns about implementation complexity.
* MLS expressed concerns about the implementation complexity of limiting range.

Stays at Stage 1. Continue exploring.

## `Number.isSafeNumeric`

Presenter: ZiJian Liu (LIU)

* [proposal](https://github.com/Lxxyx/proposal-number-is-safe-numeric)
* [slides](https://docs.google.com/presentation/d/1Noxi5L0jnikYce1h7X67FnjMUbkBQAcMDNafkM7bF4A/edit?usp=sharing)

LIU: Okay. Hello, everyone. I am LIU from Alibaba, and this is my first proposal at TC39. The proposal would add a new method, `Number.isSafeNumeric`, which tests whether a string can be safely converted to a JavaScript number. First, the validation part: in web development, validating strings that can be safely converted to JavaScript numbers is a common requirement. Here I am going to list the use cases.

LIU: The first use case is API data. We need to handle both normal strings and special values like null, undefined, and the empty string. And our backend systems use Java.
We need to deal with Java `Long` values and their overflow problem. The second use case is form input validation: we need to handle falsy values, whitespace, and unexpected characters. The third is financial calculations: when we convert a string to a number, we face the problem that the mathematical value can change during the string-to-number conversion. And the last is data processing: we always need to write complex validation logic for validating strings. So I think validating strings directly impacts the stability, data accuracy, and user experience of web apps. But current solutions have significant limitations. Let’s look into the problems.

LIU: The first problem is inconsistent built-in methods. Take `Number`, `parseInt`, and `parseFloat`, and look at the table: when the input is an empty string, or a string containing only whitespace, `Number` outputs 0, while `parseInt` and `parseFloat` return NaN. That’s inconsistent behavior. Leading-decimal-point handling and scientific notation also differ between them. So that is the first problem, the inconsistent behavior of the built-in methods. This increases developer overhead, because the developer always needs to remember which method should be used, and needs to handle each case manually.

LIU: The second problem is the hidden value changes of the built-in methods. Here are two examples. The first is big numbers, bigger than MAX_SAFE_INTEGER: you can see that when we convert the string to a number, the mathematical value changes due to the double format. The second example is floating-point numbers with 19 significant digits: the mathematical value changes and can never be converted back. So this is the other problem, hidden mathematical value changes, and the user doesn’t get any runtime notification. When web developers try to use such a value, they will get a wrong result, which adds to their frustration.

LIU: And the problem persists when you write your own custom validation function. Here is a question from StackOverflow: “How can I check if a string is a valid number?” I took the top-rated answer and checked it against a numeric string; you can see that even for this validated string, the same problem remains: converting the string to a number silently changes the value. So I looked at NPM libraries; I chose validator and is-number, both with a large number of downloads, and both have the same problem of the mathematical value changing when converting string to number. I think this is because they only check that the numeric string satisfies the decimal format; they do not look at the value-safety problem. This is a bad experience, because we may get wrong values or data-consistency issues: for example, the backend sends a numeric string, and when you convert it to a number and back, the value changes. It’s a mismatch.

LIU: So here I would like to provide a new solution, called `Number.isSafeNumeric`. It has these benefits: it ensures the input is a valid numeric string, reducing unexpected behaviors during parsing and subsequent operations; it avoids the string’s mathematical value changing during string-number conversion, which developers may not be aware of; and it reduces developer mental overhead, since developers don’t have to handle every case manually. We just want to provide a simple and reliable way.
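The inconsistencies and silent value changes described here are all reproducible in any engine today:

```javascript
// Inconsistent built-ins
Number("");           // 0
parseInt("");         // NaN
Number("12px");       // NaN
parseInt("12px");     // 12
Number("1e3");        // 1000
parseInt("1e3");      // 1

// Silent value changes
Number("9007199254740993");  // 9007199254740992 (2**53 + 1 is not representable)
Number("0.12345678901234567890").toString(); // "0.12345678901234568"
```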
LIU: The key of the method is the safety definition, which has two parts. The first is the format of the string: by default, it should only contain ASCII digits, with an optional single leading minus sign; it must have digits on both sides of a decimal point; no leading zeros, except for decimal numbers smaller than 1; and no whitespace or other characters are allowed. You can see the examples.

LIU: The second part is value safety. I think the most important piece is that the mathematical value of the string must be within the range of MAX_SAFE_INTEGER, and that the mathematical value represented by the string must remain unchanged through the string → ToNumber → ToString conversion process, just like the code shown below. This means the mathematical value is preserved, and we avoid the problem of hidden value changes.

LIU: After we created this proposal, we received many questions, so I created a FAQ. The first question is: why use strict number-format rules by default, and not support other formats? First, for validating decimal strings, we focus on the fundamental format that is widely used in JavaScript programming. It also ensures consistent parsing across different systems: “1e5” is 100,000 in JavaScript but may be treated as a plain string in other systems, which may produce unexpected behaviors. And it reduces complexity in data processing and validation.

LIU: We also considered adding a second parameter to support more formats and parsing options. We could support scientific notation with a format option, beyond the default option, which only accepts the decimal format. And we could support more flexible parsing with a “loose” option, allowing a leading plus sign, a leading decimal point, and whitespace; that behavior is aligned with JavaScript’s `Number`. When we talked with many people, we found there are already many systems, and much older code, that accept some non-standard decimals, because JavaScript’s `Number` accepts them. Supporting more options would solve those problems in the future.

LIU: Another question: how should subsequent numeric calculations be handled? This proposal is focused on ensuring that a numeric string representation is safe to be converted to a JavaScript number. For high-precision decimal calculations, you can refer to decimal libraries like decimal.js, or the upcoming decimal proposal. How does this relate to decimal? The decimal proposal creates a new type for precise calculations, while this proposal just checks whether a string can be safely converted to a JavaScript number. Questions?
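The value-safety condition amounts to a round-trip check. A rough sketch of the idea (not the proposal's exact algorithm, which is stated in terms of mathematical values):

```javascript
function roundTrips(s) {
  const n = Number(s);
  return Number.isFinite(n) && String(n) === s;
}

roundTrips("0.1");               // true
roundTrips("9007199254740993");  // false: the value changed in conversion
roundTrips("0.10");              // false: re-stringification normalizes to "0.1"
```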
WH: Having read through this proposal, I have strong concerns with this breaking interoperability. This creates the problem of converting a Number to a string that’s parseable by isSafeNumeric, and the way this thing is defined now, that’s impossible: it’s impossible to take a Number and convert it to a string for which isSafeNumeric will return true. Without that, you have no interoperability, and I am not sure what you have accomplished. Also, there are other issues in here, such as the mathematical-value restrictions, which make it so that 0.1 will fail, since the mathematical value of 0.1 is different from the result of converting “0.1” to a Number. Other things fail which shouldn’t. I don’t understand the MAX_SAFE_INTEGER condition; it has nothing to do with whether the conversion is exact or not.

WH: So I would like to define some principles for this. One principle is that there must be some simple way for a developer who has a Number to be able to print it in such a way that isSafeNumeric is true on that string, and parsing it will return the same Number.

LIU: Let me look into the question. Yes, I think a numeric string is considered safe if—let me check the proposal—the string remains unchanged through the string → number → string conversion process. When a JavaScript string is converted to a JavaScript number, the way it is stored in the Number format may change it.

WH: As an example, 0.1 will fail this.

LIU: 0.1 may be stored inexactly in JavaScript, but I think it can be converted back to “0.1” when converting to a string.

WH: Okay. Sorry, I see. You are doing a ToString of a ToNumber. But ToString is not unique, so there are plenty of Numbers for which this will fail.

MF: Yeah, I support everything WH said. As well, I think this proposal is pretty confused and not very well motivated. It was claimed that the mathematical value changes; what that means, I think, is that a string representation of a number is given that is not the exemplar of the range of reals represented by that particular float. But that doesn’t mean it’s a worse number in any way. That’s how floats work: you are referring to a range of numbers. I don’t think this is practically a useful thing we are talking about. I also don’t like the allusion to `Number.MAX_SAFE_INTEGER`: we are not saying that all of the integral floats below it are “safe”; we are saying that it is the upper bound of where you can do a +1. It’s about a single point, rather than all of them. So I don’t think it’s really well motivated, and I am not convinced by what I am seeing.

LIU: What I mean is: if the mathematical value of a numeric string changes when converted with ToNumber, and the number cannot be converted back, I think that is a real use case to guard against.

SFC: Yeah, thank you for the presentation. I will be honest: when I saw this on the agenda, and when I saw the initial repository, I was not sure this was well motivated. But your slides and the evidence you presented, showing how users frequently do this operation wrong, and how highly-voted answers on StackOverflow are also wrong, make the motivation seem more compelling to me. So thank you for the presentation; I appreciate that.

SFC: I agree with what is on the slide right here. Does the number round-trip through the string: is this the correct invariant? This is an invariant that I think 90% of the people in this room understand, but the average JavaScript developer doesn’t.

SFC: The thing about MAX_SAFE_INTEGER is not necessary, but that seems like a discussion that can be had in Stage 2 or something like that.

SFC: I had one possible suggestion: having the function return a boolean seems awkward. I was wondering if we could have a function like parseSafe. There’s already parseFloat; there could be a parseSafe which does this check and returns the number, throwing an exception if it’s not safe.

SFC: Generally, the motivation—you have user-inputted numbers, numbers that come from a bunch of weird sources, and you want to make sure you are not losing data, not losing people’s financial data—it seems like there is something there. Thanks for the presentation.
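SFC's parseSafe idea might look roughly like this (hypothetical; the name and the throwing behavior are his suggestion, and the round-trip test is my stand-in for the proposal's check):

```javascript
function parseSafe(s) {
  const n = Number(s);
  if (!Number.isFinite(n) || String(n) !== s) {
    throw new RangeError(`"${s}" cannot be safely represented as a Number`);
  }
  return n;
}

parseSafe("0.1");               // 0.1
parseSafe("9007199254740993");  // throws RangeError
```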
LIU: Thank you. For the API name, we considered many options. Because users already receive weird strings and want to identify whether a string can be safely parsed, I chose isSafeNumeric. Thanks for your response.

OMT: Yeah, I was going to say, I agree with Shane, but I think it would be nicer for it to return the value if it’s valid. Instead of returning a bool, a parseSafe could return NaN when the input isn’t safe, like the parseFloat function does for unparseable input.

JHD: So this is, I guess, a different question that probably touches on the same thing SFC and OMT just talked about. I was asking: what are the use cases where you need to know whether it’s safe, but you aren’t trying to transform it to a number? If there are some, I would love to know about them. If there are not, then a parse method would be more appropriate. Also, I think I put the queue item on this slide: the way I normally do this, when starting with a string, is to convert to a number and back to a string and see if it’s the same string. If it is, I am good; if it’s not, then I do my error handling, whatever that means. That doesn’t strike me as something difficult to get right if you are starting out with a string, and any number so large that it prints in exponential form will be revealed by this process, and so on. So I guess the first question was: what are the use cases? The second question: setting aside the mathematical-value part, why is that === expression not sufficient?

LIU: For most use cases, we just want to determine whether the string can be represented safely, because we may use it in subsequent operations, like calculations. If a string is deemed unsafe, we think we should not handle it, or should use a high-precision library, depending on the use case.

JHD: Okay. So I guess that makes sense. Are you trying to avoid the cost of the ToNumber? Because that check would still give you that: if the value wasn’t the same, then you know you need to do something different. I don’t know whether that ToNumber is costly. It doesn’t seem like it would be, but I don’t know.

SYG: So I want to +1 what WH and MF are saying. I’m also confused by the motivation around mathematical value; maybe that could be cleared up if the actual property you want is stated in terms of strings. I also have concerns that I don’t know how to build intuition about this set of rules, about whether this is the right set of rules. You have pointed to a user-validation use case, but then you made some opinionated choices, like: you can’t skip the initial zero; it has to be `0.1` and not just `.1`. Why is that the right choice for user validation? If I want to accept `.1`, because I want my users to type `.1`, I am out of luck. Why does this meet the bar for standardization?

LIU: Yes. We chose these strict rules because we think they are easy for users to understand, and they are the standard format for decimals. Strings with a leading decimal point or other formats could be accepted via the second parameter’s format option: one choice is strict by default; another is loose, which is just what JavaScript’s `Number` does. We chose strict by default, because we think that is what developers want.

SYG: I think I need more than an assertion that this is what developers want.

LIU: I think we need more time to investigate this.
LIU: Because by default, when you look at the string, you should be able to think it is right. And trailing zeros are not forbidden; I think trailing zeros are rare but accepted by the rules. + +PFC: Thanks for the presentation. I thought it was very clear. And I would support this proposal going to Stage 1 for exploring the problem space. I do want to say that I am skeptical about this particular definition of numeric safety, especially if you go to slide 10, the one you were on a moment ago. I am skeptical about why 1234.5678, the bottom one on the left, is safe, and 0.123456… on the bottom right is not safe. Because when I think about parsing a string and building a mathematical value different from the number in the string, that’s—that’s the case for both of those. And so I would think both need to be on the same side of either valid or invalid, or we need to define it in some way that doesn’t reference mathematical values. So yeah. I think it would be crucial before Stage 2 to sort out which semantics we want exactly. And I would like to see insight into what use cases people have for this. So, like, if you want to define, like here on the slide, that 1234.5678 is safe, what are people using that number for, if they determine that it is safe, even though the mathematical value of the 64-bit floating point is not equal to that string? So yeah. Before Stage 2, I would be interested in seeing more of what this is used for. + +LIU: Yeah. Thank you. I think we need more feedback about this value-safety definition. Before we submitted this proposal, we did get some questions about it. 1234.5678 should be a safe value. When we consider the way the raw number value is stored in JavaScript, the number changes, so it may be unsafe; but in the developer’s mind, “I just input a normal floating-point number and we convert it back”, so it should be usable. So although maybe we have some precision loss, when you convert it back, I think it should be safe. But this value-safety definition is just the current solution. So I think we need to find a more appropriate solution for this. + +SFC: Yeah. Just to add on to the bottom two rows there, I think about the invariant that is intended here, especially given that it’s about the string. You know, a particular instance of a float64 represents, you know, an infinite set of numbers. But there’s exactly one of them—well, WH said, there’s not exactly one. But there’s one number that, like, is the representative of that equivalence class. And, like, on the left, 1234.5678 is the representative of its equivalence class. The number on the right is not, so it’s not safe. But I agree, it’s worth writing down very, very, very specifically what we’re actually testing for. + +???: It’s the shortest value. + +???: I think that’s an interesting recommendation. + +LIU: Yes. I think JavaScript uses maybe the shortest decimal formatting of numbers. So I think this is the same problem—because we just want to convert a string to a number and convert it back to a string, and that should be safe. But for any better algorithm or any better solution, I think we need more time to investigate. + +WH: In regards to SFC’s point, it would reject `0.10` because it’s not the shortest representation of `0.1`. + +WH: I do want to emphasize the ability to round trip between numbers and safe strings is essential. So I would like to see what techniques you would have for converting a number to a safe numeric string. As this is now, it’s impossible.
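For concreteness, a quick illustration of the equivalence classes and the shortest-representation idea being discussed (plain JavaScript, not proposal API):

```js
// Two distinct decimal strings can denote the very same float64:
Number("0.1") === Number("0.10000000000000001"); // true (same bit pattern)

// Number-to-string conversion emits the shortest representative of that class:
String(Number("0.10000000000000001")); // "0.1"
```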
WH: You can make it possible, but I do wonder about numbers with very small or large exponents. + +LIU: Actually, here is the problem we’re facing. When we try to compare strings, the format changes because of the shortest decimal representation: 1.10 will become 1.1, and 0.10 will become 0.1. They’re not equal. So we defined it in terms of the mathematical value. But currently, there’s no way to get the mathematical value out of the spec; that is why the slides added code. So I think we need to find a better way to compare with the real mathematical value rather than with another string representation. Yes. This maybe needs more time, just to get more feedback on how to get the real mathematical value. + +WH: I was addressing SFC’s point, which was to require shortest representation. + +SFC: Just a small note on my—one issue: if you go with the definition of being the shortest, there are cases where there are two values with equivalent length in the same equivalence class, so we need to think about how we handle those. Do we take the one that is lowest, highest, or the one in the middle? If that’s what we decide to do. I have an example that I can post in an issue somewhere. + +KG: Yeah. To add on to that, not to respond: strictly speaking, the spec allows implementations a choice of what toString does, in probably these same cases, where the last digit is not necessarily defined by the spec and implementations can do whatever they want. Which is not something to reify here. The spec could be made not to give implementations freedom here. I don’t think that there is an actual difference in implementations. I could be wrong. But there is a suggested definition in the spec, which is, I guess, whichever one rounds to even, I think. But it’s something to address before using the ToString definition. + +LIU: This is the last question. Can we promote this proposal to Stage 1? + +CDA: Just noting there were some voices of support for Stage 1 earlier. We have +1 from DE. Some folks are asking for the problem statement. + +KG: In Stage 1, we are agreeing on a problem statement that we are interested in exploring solutions for. It is not clear what the problem statement here is. Can you try to say in a sentence what is the problem we’re trying to solve? + +LIU: We’re trying to solve the problem that what you write is not what you get. Whether you are writing a JavaScript number literal or just converting a string to a number, for big numbers, or small numbers, or numbers with more than 17 significant digits, the mathematical value changes. So you are trying to display something, but in reality the number has changed, so you cannot get the expected result. I think this is the most important part: what you see is what you get. + +MF: I don’t agree with that statement. What you say you want is what you get. You may have, like, additional digits there that you don’t feel are represented, but trust me, the float you get is representing that number that you are writing down. That’s why I feel this proposal is still confused if that’s the problem statement. + +RBN: I wanted to respond to Michael’s comment: your comment is accurate if I am specifically converting a string literal to a number. What I see I expect to get, because I wrote that. But if I am doing input validation, I want to validate that the input that the user writes is what they are actually expecting to get when doing that calculation.
I don’t think that’s accurate when talking about input validation, which is the primary reason to have this feature. + +KG?: Ron, could you make a problem statement then? It seems like you have a good sense of what it’s for. + +RBN: I think I mentioned this: it seems to me that the goal for this is to validate that the input that you provide to the function would produce a number without any loss of precision, and if it cannot produce a number that exactly represents what is written without loss of precision, it would return false. + +KG?: I don’t know what loss of precision means, if we are allowing `0.1` as an input. + +RBN: I can’t speak more to that unfortunately. + +SYG: I am also similarly confused. The use case I heard in passing was that, if you cannot represent a thing as a double float64, and we don’t know what exactly that means, but suppose we did—then you would, like, dynamically choose the representation, a user-land library or something? I don’t understand the end-to-end use case. Suppose you decide the input is exactly representable, taking the most charitable reading we know, which is like it round trips to an exemplar string or something, ignoring very small exponents(?). And you store it and represent it in your runtime as a number, as a float64. You are still opting into the world of floating point arithmetic later. Right? You’re storing the number to do stuff with it. Like, we can’t really—it seems weird that you would just try to verify at the input. There’s no way for us to guarantee that, like, you never lose precision, depending on what you do with it. I am also confused on the use cases. + +LIU: I think because floating-point numbers are stored in double precision, precision loss happens when the significant digits cannot all be stored. But with the shortest decimal format, the JavaScript number will round back to the correct value. So if the input value and the toString value are equal, I think the string can be considered safe. So maybe precision loss still happens, but this is maybe what developers want. + +CDA: We have a couple of minutes left, for this topic and for the day. MLS, did you want to chime in here? + +MLS: It’s not only the problem statement; I would also like to know the use cases. + +WH: My position is similar to MF’s. I don’t understand the problem statement here. + +CDA: So very little time left. So you do have support for Stage 1 from folks who feel like they understand, or have some idea of, what that problem statement is. Noting that we don’t have a formal problem statement, like, stated succinctly. So given that, for the folks who would like to better understand the problem statement, are those concerns blocking at this point? + +WH: The entrance criteria for Stage 1 include having a problem statement we agree on, and we don’t seem to agree on one. + +SYG?: Sometimes we reject a proposal for Stage 1 because we have, like, understood what is being discussed and said that actually we don’t want to add that into the language. And that’s not what is happening here. We are not, like, rejecting the proposal. But I am not comfortable going to Stage 1 with a proposal when I don’t understand what it’s trying to do, since that is the point of Stage 1. If it was just me, I would be happy to do it off-line. But it sounds like there are other people who don’t understand what we are trying to do. No, I don’t think it should go to Stage 1 at this time. + +CDA: Okay.
So the ask here, LIU, if you could, please, not right now in real time, but today or tomorrow, try and develop that succinct problem statement that the committee could consider, and then we can come back and ask for Stage 1 based on what that problem statement is. + +>> That can happen at this meeting if we have extra time. + +JHD: I have a queue item to request that. Once you come up with the problem statement, could you file an issue on the proposal repo and drop it in Matrix, and we can review it before we leave this week? + +LIU: Yes. I can create an issue and post a link in Matrix. + +CDA: Let’s follow up off-line and revisit later in the meeting. Ideally, tomorrow afternoon, if possible. + +### Speaker's Summary of Key Points + +* Still need to consider the safety definition +* Provide more examples for this use case + +### Conclusion + +* Required to provide a 'problem statement' which succinctly describes the problem your proposal is intended to solve diff --git a/meetings/2025-02/february-19.md b/meetings/2025-02/february-19.md new file mode 100644 index 00000000..53b353bd --- /dev/null +++ b/meetings/2025-02/february-19.md @@ -0,0 +1,1329 @@ +# 106th TC39 Meeting | 19 February 2025 + +**Attendees:** + +| Name | Abbreviation | Organization | +|------------------|--------------|--------------------| +| Kevin Gibbons | KG | F5 | +| Keith Miller | KM | Apple Inc | +| Chris de Almeida | CDA | IBM | +| Dmitry Makhnev | DJM | JetBrains | +| Oliver Medhurst | OMT | Invited Expert | +| Waldemar Horwat | WH | Invited Expert | +| Ujjwal Sharma | USA | Igalia | +| Andreu Botella | ABO | Igalia | +| Daniel Ehrenberg | DE | Bloomberg | +| Philip Chimento | PFC | Igalia | +| Luis Pardo | LFP | Microsoft | +| Michael Saboff | MLS | Apple Inc | +| Linus Groh | LGH | Bloomberg | +| Erik Marks | REK | Consensys | +| Shane F Carr | SFC | Google | +| Chip Morningstar | CM | Consensys | +| Daniel Minor | DLM | Mozilla | +| Sergey Rubanov | SRV | Invited Expert | +| Justin Grant | JGT | Invited Expert | +| Ron Buckton | RBN | Microsoft | +| Nicolò Ribaudo | NRO | Igalia | +| Jesse Alama | JMN | Igalia | +| Samina Husain | SHN | Ecma | +| Istvan Sebestyen | IS | Ecma | +| Eemeli Aro | EAO | Mozilla | +| Aki Rose Braun | AKI | Ecma International | +| J. S. Choi | JSC | Invited Expert | + +## A unified vision for measure and decimal + +Presenters: Jesse Alama (JMN) and Eemeli Aro (EAO) + +* proposals: [measure](https://github.com/tc39/proposal-measure/), [decimal](https://github.com/tc39/proposal-decimal/) +* [slides](https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity) + +JMN: Good morning everyone. This is JMN. I am also working with BAN on this; my colleague is working on the measure side of things. Originally the intention was that we would present this together, but BAN is unfortunately on medical leave, so I’m taking the reins temporarily. You may know me from the Decimal proposal, which has been around for a long time now. The intention of this presentation is to give you an update about how we currently think about things with decimal and measure living together. This is not a stage advancement; this is just essentially a Stage 1 update. + +JMN: There was a last-minute addition to give this presentation a bit more concrete detail. EAO will chip in with one or two things at the very end. Are you there? + +EAO: I’m here. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/1] + +JMN: Great. The decimal proposal is all about exact decimal numbers for JavaScript.
The purpose of exact decimal numbers is to eliminate, or at least severely reduce, the kind of rounding errors that are frequently seen with our friends the binary floats, especially when handling human numeric data and especially when calculating with these values. Not just representing these things and converting them to strings, but making sure that when we do calculations with these numbers we get the results we expect. I know that we really love the topic of numbers. Yesterday’s discussion at the end there actually sort of overlapped a little bit with decimal, as you might see here. + +JMN: So just to make things very clear, in the decimal world, we imagine that when we write 1.3, when we construct a decimal value from 1.3, those digits really will be 1.3 instead of an approximation thereof. To illustrate arithmetic and calculation, 0.1 plus 0.2 in this world really would be 0.3. Again, it’s not 'about the same'; they really are exactly the same thing. So that’s the decimal side of things. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/2] + +JMN: The Measure proposal is fairly new. I think the idea has been kind of talked about for a long time and sort of exists in the Intl world. But the measure proposal, presented a couple plenaries ago by BAN, is about the idea of tagging numbers with a unit. So think about just the kind of units that we use in everyday life: grams, liters and so on. The idea is that we can tag these numbers with the precision as well. Here, just to cut to the meat of it, think about, let’s say, 30 centimeters there. The idea is we could convert these measurements or measures to other units and perhaps also specify some kind of additional precision there. So think about 30 centimeters versus 30.00 centimeters and so on. This is another thing to show: additional kinds of calculations, or at least operations, on these kinds of measurements—sorry for the non-imperial friends—using feet and inches is also something that we would like to handle in this kind of proposal. So think about 5.5 feet: that’s actually 5 feet and 6 inches, and we can extract the components of these things. So that’s the idea of Measure in a very simple form. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/3] + +JMN: What is interesting is that although the Decimal proposal is really about numbers per se, and Measure is about something a bit different, they have distinct needs and there are distinct use cases. But they do share an interesting overlap. That’s the purpose of this presentation today: to draw our attention to this overlap, because these proposals are helping us to represent numbers the way that humans often use numbers. Usually when we talk about handling numbers in some kind of human-consumable way, we’re talking about base-ten numbers and the kind of arithmetic and rounding involved with that. Decimal is also sort of about precision as well. And there’s some kind of units there. So these are common things that you see when we talk about numbers and human representations of numbers. And these two proposals, or at least a part of them, overlap in our handling of these two things, or all of these things. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/4] + +JMN: We can think about how Measure can use Decimal. There’s an interesting possibility there.
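The code example being described here might look roughly like this (a sketch only; every name is hypothetical, as neither proposal has settled on an API):

```js
// Hypothetical: a Measure backed by a Decimal value, with an explicit
// unit and a precision given as a power-of-ten exponent (10^-1):
const m = new Measure(new Decimal("1.2"), { unit: "gram", exponent: -1 });

// Hypothetical upgrade path from Decimal to Measure:
const d = new Decimal("30");
const length = d.withUnit("centimeter"); // tag a unit onto a plain number
length.convert("meter");                 // a 0.3-meter measure, perhaps
```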
JMN: Because Measure needs some way of having some kind of underlying mathematical value, some kind of numeric value there. So Intl actually currently uses mathematical values to avoid some floating-point errors. Measure, for instance, could directly use decimals. So look at this code example where we take, say, 1.2 and construct a measurement of 1.2 grams with some kind of precision of 10^-1. Decimal objects could also be upgradeable to Measure objects as well. There’s the conceptual overlap between the two. That comes up in terms of code samples like this. All of this is still very much in discussion. So what I’m proposing here is not anything that’s final. I’m just trying to get your creative juices flowing, thinking about how these two proposals interact and overlap with one another. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/5] + +JMN: What’s interesting here is that there are a few different kinds of data that we could be talking about. And one of the proposals that has been presented here many times, and I know has a lot of fans, is the Temporal proposal. And we propose that Temporal can be a source of inspiration and learning for us. Because we know that in Temporal we have a lot of different concepts that are strictly separated from one another. So in Temporal we have things like PlainTime, PlainDate, PlainDateTime, ZonedDateTime and so on. These are separate. You might say that the API is strongly typed. If there’s any kind of conversion that needs to happen, that needs to be explicit. That has a number of benefits for the developer. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/7] + +JMN: So the question for us here, thinking about Measure and Decimal and the overlap between them, is whether there’s some kind of unified system perhaps that can be identified sitting between these two proposals. So we have different information with different types available to the developer. That’s the challenge for us. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/8] + +JMN: So there are a couple of different topics here. Let’s be explicit about what we’re talking about. We think about base-10 numbers. And two questions. Are we talking about something with a unit, like gram or feet? That’s one question. And another dimension could be that there’s some kind of precision there. Does the number itself tell you, just by reading it off from the digits, how precise this is going to be? We have four possibilities there: + +* So, for instance, in the Decimal proposal, we talked many times about this concept of “normalized” [canonicalized] decimals, where we strip any trailing zeros. So, for instance, 1.20 just is 1.2. So that would be something in which we don’t expose precision and which has no unit, because it is just a number. +* In previous discussions about decimal, we also talked about the full IEEE-754 approach that we have actually floated and discussed many times in plenary. It’s called the full IEEE-754. This is a representation of decimal numbers in which precision really is present on the number. So the number does contain not simply a mathematical value, but also an indication of how precise this value is. Or, in other words, possibly some trailing zeros are present there. +* We also have things like numbers with a unit but with no precision. So some kind of exact measure, you might call it. Something like the speed of light would be an example of that kind of thing.
+* Another kind of example that comes up in everyday life, everyday numbers, would be, say, our weight on a scale or the length of a stick: the number that we read on the scale or a ruler would indicate some kind of precision. So it has a unit. And it also indicates some kind of degree of precision as well. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/9] + +JMN: So if we look at this thing, we might already be looking at four different classes of things, which is already starting to be quite a lot. Actually we can expand the conversation and take a couple steps back and find that there are even more possibilities here. So think about—I mean, you don’t have to go through this entire thing. You can think about binary64, or float64, the numbers that we know and love already in JS, and talk about integers. Those also have analogues in JS with BigInt. The base-10 bottom row of the table has four possibilities, and those are the four we just saw. And if we think about integers, we can ask: do we want to have some kind of BigInt with a unit, or BigInt with some kind of precision, or do we want to have float64 with units or precision? As we take a step back, we see that there are many possibilities here. And the developer might think this is interesting, but it seems like we have a kind of proliferation of possibilities here. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/10] + +JMN: So do we really need all of that? I mean, the conversation is leading us down a path which suggests that there are lots of things to think about. But maybe everything could be expressed by a single class. Maybe we could have some kind of, I don’t know, unitless number or dimensionless number which has a unit like 'one', as it's usually called, or u. For instance, 2.34 is 2.34 with unit 1. And that would reduce the mental complexity here. And why not, say, express exactness by treating infinity as a valid precision: if we tag a number as being infinitely precise, that says this data, as far as I know, is exactly correct. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/11] + +JMN: But there’s a bit of a challenge with trying to pack all of that into a single class. We know, again learning from Temporal, that having separate types has a lot of advantages. We don’t need to manually validate which information is present or absent and possibly throw on some kind of incoherent combination of data. We have type checking possibilities. And just generally, adding information can limit capabilities. So if we think about doing arithmetic with these numbers: if we have more information, that means we can do fewer operations with it, or fewer operations just out of the box, with no thinking about checks. If we think about just numbers as numbers, then we can add them. 1.23 plus 0.04 is 1.27. End of discussion. That’s fine. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/12] + +JMN: When we start to add precision, things start to get a little bit fuzzier. I wouldn’t say incoherent, but they start to get a little bit trickier. Now we have, for instance, 1.23, which has three significant digits, and 0.040, with the zero at the end there, two significant digits. What do we do with that? IEEE does give an answer to the question, but there are many possible answers that can be given there.
And then we have silly things like adding 1.23 metres and some watts, which is presumably some kind of incoherent addition. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/13] + +JMN: So if we add this information to our data, that suggests to us that we need at least two classes. I don’t want to say that we have to have all of the combinations that we had on the table in the previous slide; I am just making the argument that we need at least two classes here. The thinking at the moment is that decimal, at least in the normalized form (so no precision tracking), is a valid thing to think about. We have arithmetic there. That’s quite well defined. Basically 'just math', for lack of a better word. It would be based on IEEE-754 limits, which means that there’s a fixed bit width for the numbers: 128 bits. That’s quite a lot. You can do quite a lot in that space. It is ultimately limited. Just to be clear, we’re not talking about tracking precision here. We’re really talking about values that are supposed to be just numbers. + +JMN: And the other class that we would suggest would be necessary is a kind of measure with precision, backed by a decimal, you might say. There’s no arithmetic going on there, at least not in the initial version of this thing. There could be conversion there, converting from feet to metres, and a static notion of precision. That’s another way of saying that the precision of a value just is the one that you supply at construction time, and that’s it. There’s no intelligence there. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/14] + +JMN: And how might we be able to convert between these things? We might say explicit conversion of measure to decimal. We might have some kind of static method for converting some kind of decimal value to a measure, and we might be able to take a decimal value and tag it with some unit. All of this is, again, just to get intuition flowing. This is not any kind of final API. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/15] + +JMN: The discussion is pretty much ongoing. I hope that I showed you that the measure and decimal proposals overlap or intersect in an interesting way. That suggests that we might be able to make some progress on both of them simultaneously, or maybe even in a staged way. I don’t mean that all of the questions are solved there. There are some interesting open questions. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/16] + +JMN: So here is one question you might ask yourself: do we need some kind of separate classes for different kinds of data underlying the measure? Do we need some kind of BigIntMeasure, distinct from a DecimalMeasure, and distinct from a NumberMeasure? I suppose that’s one way to think about it. Maybe you can have some kind of measure with a `.type` property, so you can say this is a BigInt or a decimal for the number. I don’t know. This is very much open for discussion. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/17] + +JMN: Another interesting question is whether we need something like a decimal with precision. So I made the case earlier that if we were to proceed with decimal, then we should probably have decimal without precision tracking, but that doesn’t mean that decimal with precision is a bad idea per se. That could still exist in this universe.
So we could have some kind of decimal and get or set the precision there. We would have something like a FullDecimal, and that could be converted to a Measure by tagging it with a unit and so on. The suggestion would be that if we were to have this kind of decimal, it wouldn’t support arithmetic, because as we have discussed here in plenary a couple of times, the IEEE-754 rules for propagating the precision, or propagating what's also called the quantum of a number, are somewhat unusual. I mean, it is an approach that is defined and implemented, of course. But there are other ways of propagating precision, and to avoid committing to any particular one, you might say the full decimal, if we were to have it at all, would not support arithmetic, because we just don’t want to get into those discussions. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/19] + +JMN: So for us, the next step, and this is something I want to hear from you, is how to move this forward. I made the case that measure and decimal are distinct proposals, but they overlap in an interesting way. And they are interlocked in interesting ways. So I might propose that one option would be to just keep them separate, but somehow designed in a tight collaboration that is not exactly well defined. I think we have a sense of what that means. Another option would be to merge the two proposals, possibly now or at a later stage, if they go to a later stage. I've prepared a README to show you what it could look like. That might sound a little bit preposterous, but it makes some sense of what we’re thinking about. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/20] + +JMN: Just to be a bit clear about the details, EAO volunteered to talk about another approach. One of the things that we launched, by the way: in Matrix you can see there’s a new channel, or at least a fairly new channel, for talking about the measure and decimal work and the kind of harmony that currently exists between them. You’re welcome to join that if you wish. We just started a biweekly call to talk about these things, and in recent discussions we talked about the word we should use for measure. Perhaps other words are more suggestive and fit better with what we’re talking about. One of the suggestions, I believe coming from EAO, was amount. The thinking is that an amount is also a term that could make sense for a number plus unit and possibly precision. EAO, if you would like to take over, you can go ahead. + +EAO: I would be happy to. So, yeah, in the conversation around this, my view of how we should split this whole mess of things we have got is maybe a little bit different from Jesse’s, but this is why we’re presenting it and sharing the discussion with you, so we can maybe get all of this advanced a little bit. As I’ve been looking at this, I see a lot of overlap in the use cases presented for the Measure proposal and for Decimal, but also a lot of divergences; they go in all sorts of directions. And then there’s also, in the background here, in particular for Measure, the smart unit preferences proposal, separately from this. And in this context, I’ve been looking at what are the actual use cases and goals and so on that we have, as in this group, accepted for these Stage 1 proposals so far, and really coming to a conclusion that is somewhat shared, I think, with JMN: that the split we have currently of these is maybe not the best one.
It’s close, but it’s maybe not the best one. And maybe we would like to refactor a little bit how these proposals, and now also possibly the `Number.isSafeNumeric` proposal as well, interrelate with each other. But when considering all of these, I think we have got three different proposals, or maybe four, that we ought to have. But they maybe all ultimately could work on a single thing. + +EAO: And that single thing has, I think, a possible first step that unblocks and solves a number of the use cases and goals we have set out for these proposals, not everything by any means, but it’s also possible to work from there towards different directions. That would be to have this relatively opaque class replacing Measure, called Amount. That would not initially, in the first step at least, include anything about any operations that you can do with it or any conversions that you could do with it. But it would include, in addition to an opaque value, separate fields for the dimension and the precision that this value could have. If you could go to the next slide, please. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/21] + +EAO: Sorry for the weird gray background. That’s an artifact of the highlighter, I think. This is the idea of what we could have as a first step, building towards being able to do unit conversions and being able to do decimal math later. And the idea here is to have an opaque—an amount with an opaque value that you can initially really only get out as something like a toString, and then have this work, as intended for measure, with `Intl.NumberFormat` formatting. It would include its own `toLocaleString` and feed the `Intl.NumberFormat` formatting call, and you would get a sensible thing out of it. One thing to note, by the way: it specifically has unit and currency as separate fields. + +EAO: One of the biggest overlaps that we do have for the measure and the decimal proposals is that both of them say that we ought to have a good solution for how to represent money and monetary values in JavaScript. And my biggest concern driving towards maybe we only ought to have a single class is that I find it would be confusing to a developer if, when they ask “I have got money, what should I be using?”, we don’t have one answer. I believe that it would be simplest to be able to tell developers that if you have a monetary value, or something with a possible unit attached to it, then you want to work with an amount. Then there might be operations on this later, as there are proposed for decimal for addition and subtraction and other operations, but there also could be the kind of conversion factors that you can have here. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/13] + +EAO: Effectively what I think is that, if you go back to slide 13, please, when I look at the issues here, the conclusion I come to is that, first of all, it is a positive feature if the result we come up with for doing math with real-world numbers gives an error when you try to add meters and watts together. And for the significant digits thing, I don’t think there’s any issue in how the operations work if we consider the significant-digits math and the actual-value math as separate operations. So the one place I think I don’t quite agree with Jesse is that I don’t see that the current setup requires us to end up with at least two classes. I think we would be fine with just one.
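A rough sketch of the opaque Amount being described (unit, currency, and precision as separate fields follow the slide; the constructor shape and outputs are invented for illustration):

```js
// Hypothetical construction: an opaque value plus unit/currency/precision.
const price = new Amount("1.23", { currency: "EUR", fractionDigits: 2 });
const weight = new Amount("30.00", { unit: "gram" });

// Initially the value would only be observable as a string...
weight.toString();          // "30.00"
// ...and as localized output, feeding Intl.NumberFormat internally:
price.toLocaleString("de"); // "1,23 €", say
```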
+ +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/21] + +EAO: But furthermore, I think the initial step—now you can go back to this slide—will provide support for going in all of the directions we can imagine for the Measure proposal and for the Decimal proposal, and it would also solve the use cases put forward for measure. With a note, by the way, that as I was reviewing this, I realized that I don’t think we have actually presented really well a use case for why unit conversions ought to be on Measure. The smart unit preferences proposal does that relatively well, but in the measure proposal I think we have kind of just asserted that unit conversions ought to be there. We ought to do a better job of explaining why those are important to be available in ECMA-262 rather than just 402. But that’s it for my part of this. + +JMN: Thank you EAO, and that’s about all we have today. I hope that we have sparked your interest in thinking about these proposals as being part of a shared space. The question is what to do for next steps, in terms of keeping them together or not. EAO mentioned, by the way, that there’s yet another proposal, smart unit preferences (https://github.com/tc39/proposal-smart-unit-preferences), that is also, I would say, part of this overall harmony discussion, as it were. And that’s it. We are happy to open for discussion. If I might suggest, we do have a TC39 chat channel for these topics and there’s a regular call. I already know that there are kind of super fans of this topic. We kind of already chat about these things. So if there are any comments from outside of the super fan club, we would love to hear them. + +KG: Thanks for the presentation. Just a quick note that part of this feels pretty weird: to have decimal measures for imperial units. You might encounter a third of a cup; you don’t encounter 0.33 cups. That perhaps suggests this is not sufficient to represent common units. We might further need a rational type, which maybe is a good idea. It does add a little bit of space here. Something to think about for the future. That’s all. + +EAO: Question to KG and anyone actually, not necessarily for right now: it would be really interesting to hear of an actual place where the data for something like imperial units like this would be stored as something like fractions, rather than being stored as something like a decimal or number that is then converted to fractions for display purposes. + +KG: I’m not aware of any. But certainly, like, if I was building a recipe website, I would reach for a rational representation for cups, because if you quadruple a recipe that has a third of a cup, you shouldn’t get 1.334 cups as the output. That’s just weird. So if you want to actually manipulate imperial units, and preserve them in the way that humans are going to expect to encounter them, you do actually need to use a rational representation. + +EAO: I agree. This is why I’m asking if there is an actual, real-world use case showing that the data is kept in actual fractions somehow, somewhere, such that it is now clumsy and would be better with an actual rational representation, because my suspicion is that the actual data for stuff like this is still going to be in numeric decimal. But let’s go on. + +JMN: Just as a quick response, I think one source that I’m familiar with that is coming from this imperial world is cookbooks. They often use fractions. But surely there are others as well. + +JHD: Often—I would say always, at least in the U.S., without exception.
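A tiny illustration of KG's recipe point (plain JavaScript, no proposal involved): scaling a fraction kept as a numerator/denominator pair stays exact, while the float version drifts into an unfriendly decimal.

```js
// Quadrupling 1/3 cup as a float:
4 * (1 / 3); // 1.3333333333333333, awkward to present as "cups"

// Keeping the rational explicit instead:
const third = { num: 1, den: 3 };                      // 1/3 cup
const scaled = { num: third.num * 4, den: third.den }; // 4/3, i.e. 1 1/3 cups
```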
SFC: It's a really good question. It’s also something that I think we should investigate more: does it fit in the formatting layer? That is, given the correct choices for the precision of the number, whether you would take the number and display it with fractions even if it’s represented as a decimal inside of the computer is an interesting question to investigate. I agree that Rationals would be a nice representation in this specific case. + +DE: Just agreeing with SFC here, CLDR currently lacks data and algorithms for number formatting with fractions. This is a contribution that Unicode folks would be happy to have upstream. If we want to work on this cookbook problem, the natural place would be to start there. You kind of need it to work end to end. Until then, it’s reasonable for us to start with decimals, which have been prioritized for data collection because they come up in a lot of different cases. So we could consider making something like measure future-proof by being generic over units. But overall the shape of what that will look like is pretty far from the present. Do you agree with that Shane? + +GCL: I think this topic about measures is well motivated enough that it could be a thing that advances separately from decimals, and I think it would be useful for it to not be tied to decimals specifically, because there are reasons you might want to use other numeric types anyway. I wanted to put out here that I think this is a pretty useful thing, especially for durations and sizes of bytes, that would, I think, be very valuable in the language. + +JSL: I also think, and this is what MF has raised, that how the units are defined is an open question: is it a fixed set coming from somewhere, or extensible in some way? And how do we get into conversions: are the conversion ratios fixed? + +JMN: That’s also something that is up for discussion. Initially CLDR is something that we would like to support. They also define conversions as a separate thing with the data that they provide. Currency is something that is a strong motivating use case for us. Even more generally, one might say arbitrary units are also something that could be conceivable. So pick anything you want and say what it means. You could say what it means to convert from one thing to the other, or block it, or—I don’t know. I would say this is still an evolving topic. + +EAO: Just noting that currently for `Intl.NumberFormat` with unit formatting, there’s an explicit list of supported units. This is a subset of the units in CLDR, which is also the source for any transforms between these units, convertibility effectively. But this does not necessarily need to limit what goes into a Measure or an Amount that supports conversions. But I would say that I don’t think, for the very initial part of Measure or Amount, we’ve actually presented the use case for why that ought to be in 262, and it might make sense for that part of the whole work to be considered as a part of an evolution of the smart unit preferences proposal rather than the Measure proposal. + +SFC: This https://github.com/tc39/proposal-measure/issues/10 is one link to initiate the topic of discussion here; if you have any background or thoughts on this, you’re more than welcome to chime in. Issue 10 on the measure proposal repository. + +MLS: I think this has been somewhat discussed. CLDR does have units, but I don’t know whether it does conversions between imperial and metric and so on and so forth.
And since I have some time here, there’s also—if you’re going to do fractions, I think you need to keep both the numerator and denominator as values, because converting between decimals and fractions is troublesome given loss of precision and things like that. So it seems like this is going to require some dependency on some other database, and a database that may not exist in standards form. + +SFC: CLDR does have a specification and a whole table of conversion factors. And presumably those are the ones we would use, although there are other databases as well, with different rules for handling things like rationals as part of the operation. The CLDR rules basically retain the rational throughout the whole conversion process and then flatten it after all conversion is applied, and things like this. This is a space that the CLDR people have thought about. But your input is very much welcome, I think, again on that same issue, issue 10. + +DE: This was a very great presentation that laid out the problem space cleanly. But overall it seems like having two classes, one for measure with precision and a unit, and one for decimal without precision, seems like the cleanest. We have seen in previous presentations by Jesse clear use cases for arithmetic. But arithmetic would be quite difficult and fraught when precision and units are included, even when it’s useful in some cases. We can go either way on whether measure is specific to decimal or generic over numerics. I think this makes sense as already proposed, as two proposals. Maybe we want them to go to Stage 2 together, but as long as we’re developing them both with the other in mind, I think it makes sense to keep them that way. Whether we put something in 262 or 402 is an interesting thing to consider, but ultimately it’s practically editorial, and shouldn’t really affect much about the way that the APIs look. So I’m happy with the diligent work done here on all three proposals. I hope we can advance them. + +WH: I agree with Daniel here. The presentation here today mostly neglected arithmetic, and arithmetic on measures would be very complicated. There are plenty of use cases where we just want decimal numbers and you want to do clean IEEE arithmetic on them. It’s hard to define how square root works on measures so you can do basic geometry. + +SYG: I guess this is somewhat covered, but one of the questions is what we would have for the units. Physical units suggest front-end JS web apps; I’m not sure about the server side, as opposed to things like CSS units or, you know, computer storage sizes and stuff like that. What are your thoughts on how we decide the set of units that ought to be included in the language? + +DE: I think one of the first things you want to do with CSS unit values is calculations on them, which involve mixed units and fractions. I don’t think that’s something that we can cover in scope here. Lots of front-end code involves communication with people about human-intelligible quantities, that is, decimal and unit quantities. Although CSS is an important thing to consider, I think it would be really difficult to do a good job with these. The way that the measure proposal is framed right now is in terms of arbitrary strings for the unit. So people could use it to represent CSS units. But I’m not sure if it would solve most of the problems that people want it to solve. + +KG: I mean, isn’t it also that the first thing you want to do with most measurements is calculations? + +DE: So I don’t know if that’s true the same way.
To do CSS calculations, it’s relative to the window or the context where it is, whereas a calculation like converting feet to meters is not relative to something. With CSS calculations you’re doing symbolic manipulation rather than actual calculation. + +KG: I misunderstood your point, then. + +DE: When I said calc I meant the particular CSS operator. + +KG: A lot of CSS units are not particularly relative. Like, vh is, but pixel is not. + +DE: Pixel is kind of complicated also. + +KG: Yes. It’s complicated. A lot of units are complicated. + +EAO: So I think we have, like, multiple overlapping discussions here, spanning: how do we do arithmetic in general? What units are supported, and what unit conversions are supported between them? And I think this is all, to me, pointing out that we ought to be handling this whole stack as at least three different proposals: an initial proposal that introduces something like a measure or amount, solving the use cases that we presented for measure so far; a second proposal, decimal, allowing for operations on the real-world values and other values that we want to allow for; and a third proposal that introduces possibly new units and unit conversions between them. The smart unit preferences proposal doesn’t quite do this at the moment, because nominally all that it is doing is introducing a usage parameter for Intl NumberFormat. But its effects are what is leading to us wanting to have unit conversions happening in a different way than just as a hack that you can get out of Intl NumberFormat. But I do think we need to refactor these proposals: for example, get a count of all of the use cases and goals that we have for all of these proposals, and then decide which sets of those use cases can be solved in one clump, and a second, and possibly a third clump, rather than requiring us to have all of this in this one intermingled conversation, as we have had on the topic in previous cases and this one as well. + +DE: The proposals are factored. What do you see as the goals needed for refactoring? What do you see is wrong with the existing factoring? + +EAO: The unit conversion stuff. We have not actually—if you actually look at what was discussed for measure, I think at the October meeting, we did not actually agree then, I think, that unit conversions ought to be a thing that is supported. It was just asserted there that, given these other needs, therefore unit conversions must be included. And separately from this, for the smart unit preferences proposal that was introduced some years ago, there was also no discussion about whether unit conversions ought to be supported at all. So we ought to have a proposal that actually proposes that we have unit conversion support, rather than us just asserting that that ought to be the case. And I’m particularly calling this out because unit conversions on top of measure bring in the question: if you do a conversion of a Measure value to a different unit, then what is the value expressed in that result? For that, we need some answer. Without unit conversions we can say the value is opaque. With conversions we need to have some representation thereof. That brings in the possible dependency on decimal, and that brings all of this into one complicated stack, which is why I’m saying we ought to have three proposals here: one for an opaque measure or amount, a second one for decimal operations, and a third one for unit conversions. + +DE: Okay.
That sounds consistent with the Decimal class and Measure class as proposed, while ensuring that when we design unit conversion as part of 402, we do a good job of that design and make sure it aligns with the other two. Is that an accurate understanding of what you’re saying? + +EAO: Somewhat, yes. There’s a strong overlap with the next presentation I will be giving, on stable formatting, in that the unit conversion work ought not to be only a 402 thing, so that we do not make it so that JavaScript developers, who will use any tool they’re given, will want to use the 402 tooling for doing the unit conversions that they want to do anyway. + +DE: Okay. I look forward to that, understanding that argument. + +SYG: Are there applications that want something like the Measure class today, and if so, what are they doing today? That is the question for the champions. + +SFC: EAO can address that. + +DE: You’re funding the Measure proposal development. What made you fund it? + +SFC: I'll prepare an answer, but in the meantime, EAO can shed some light. + +EAO: So my most direct and somewhat possibly selfish interest here is that something like measure or amount unblocks a lot of the issues for Intl MessageFormat. It does this by kind of fixing a bug in Intl NumberFormat where, right now, when we do have a need to format a currency value or a unit value, we’re in this situation where we need to give the unit identifier or the currency identifier in the Intl NumberFormat constructor and then give the numerical value that we’re formatting completely separately from this, in the `.format` method on the object. This is leading to a situation where, particularly for localization, if you’re allowing for representation of NumberFormat options (for example in MessageFormat 2, but not just limited to MessageFormat 2), it is possibly far too easy to introduce a localization bug into an existing implementation. I presented on this somewhat at the December plenary, but I could go into more detail if desired. I would also be interested to hear more from Shane and others who are interested in this work. + +EAO: But overall, the gist of what we’re looking for is that we provide a way of representing, as a single thing, a unit or a currency together with the value and the identifier for what this thing is, rather than requiring these things to live separate lives. + +CDA: We have about five minutes left. + +GCL: Just a reply to SYG: I think probably most JavaScript programmers on earth have dealt with durations or timers and such, and probably a significant fraction deal with things like bytes and sizes of memory. Currency also seems pretty motivating, but probably has slightly different usage patterns. But I think all of these are very, very common things that existing programs use, just by, you know, bringing their own conversions, and it would be useful to have that in the language. + +SYG: Isn’t duration solved by Temporal? + +NRO: Yes. We would probably not have time units in the measure proposal, given that Temporal already does a very good job of it. + +SFC: I didn’t expect to have to reiterate the use cases for the Measure proposal, but I think that BAN had an excellent presentation in the Tokyo plenary in October when he set out all of the use cases, and I can reiterate those, if you like. + +SYG: Just to be clear, I’m not asking for a distillation of how you would like to use it.
I’m asking to be pointed to an example, if possible, of what applications do today, in the way that Temporal was very strongly motivated, and it was very motivating, I think, to replace things like—what is the library called? Moment? Like, there was very clear demand in a bunch of userland solutions to solve this hard problem; therefore, it was a good idea to do Temporal. I would like to get a better grasp of what the ecosystem does today for its uses of other kinds of measures. + +CDA: We have a couple minutes left. + +NRO: I already spoke to this by saying how we would like the proposals to be structured. We didn’t hear other opinions, but actually among the champions of the proposals there is significant disagreement on the direction to proceed, with some people preferring a single proposal and others preferring splitting in some other ways. We have had, in the past, different levels of success with merging and splitting proposals—think of class fields or modules—sometimes it worked well and sometimes it was not the best idea. I wonder if people that were not involved in championing the proposals have opinions on how they would prefer this extended champions group to proceed: whether as a single proposal or by keeping the various pieces separate. + +DLM: I would like to say that we had some significant internal discussions about these proposals, and we’re definitely skeptical about having the decimal and measure proposals merged. I think that there is, as SYG was alluding to, varying amounts of evidence as to the utility of and demand for the different proposals in the ecosystem. And in particular, I would say personally, and I would have to confirm with my team, I think we would be very skeptical about unit conversions, and, as EAO was saying, a separate proposal for those would be a good idea. In general, yes, I think I would recommend that the champions not merge these proposals. Thanks. + +MF: I think that the proposals are covering different enough space that they need to be individually justified, and I don’t want a potentially strong part of a merged proposal to carry the weaker part. I want each to stand on its own. I would rather have them separate. + +CDA: Okay. We have less than a minute left. Shane, can you be brief, please. + +SFC: Yeah, I would like to have more time to discuss this. I think it’s a very important point. But it’s a point that I—I’ve definitely been one of the strongest advocates for, you know, keeping these proposals together, and I think that the reason is because the whole is greater than the sum of the parts. I think the vision of having a unified solution for how you deal with numerics in JavaScript, including things like decimal values of arbitrary precision, units of measurement, and so forth, is very strong as a whole. It gives a very easy on-ramp to localization of values, and a very easy on-ramp to being able to represent and store and transmit units of currency and measurement, similar to how Temporal allowed us to have a string format to do this and talk to each other easily. I think that separating these proposals puts them into boxes that do not deliver the same opportunity of value that we could get by having a single proposal around them. The champions agreed before this presentation that we wanted to hear feedback from the committee, but given that no one has said this yet, I thought it was important to bring this point up: some might feel that measure by itself might not be motivated, and decimal by itself might not be motivated.
But if you put them together, I think the union of the proposals is quite strongly motivated. + +CDA: We are past time. + +WH: I don’t see a single unified proposal working for this. If you want to do arithmetic on decimal numbers, you shouldn’t have to worry about unit conversions. The proposals are distinct enough that they should stay separate proposals. Now, it is very useful for the proposals to coordinate with each other. But they should not be one unified proposal, because arithmetic is so different from some of the other things we are talking about here. + +### Speaker's Summary of Key Points + +* We broached the possibility of merging the two proposals, given their conceptual overlap. +* We also argued that at least two classes (decimal and measure) are needed, and possibly a third. +* We asked for guidance from the committee about how to deal with these proposals, procedurally, given that they are, on the one hand, clearly distinct, while also having a strong overlap. +* EAO presented a concrete suggestion for a next step, arguing for an opaque “amount” (to be understood as a synonym of “measure”). + +### Conclusion + +There is little to no support for outright merging of the proposals from outside of the champion group. There was some uncertainty about use cases for Measure. Adding conversion between units (measures) is regarded as a secondary/separate concern. Apparent support for having at least two proposals, possibly three. There was concern that keeping the proposals separate might cause us to fail to see the value of the sum of the proposals. + +## Stable Formatting update + +Presenter: Eemeli Aro (EAO) + +* [proposal](https://github.com/tc39/proposal-stable-formatting) +* [slides](https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit?usp=sharing) + +EAO: Stable formatting is a Stage 1 proposal that was introduced to that stage in 2023. The motivation, to start with, is that there are places where we have capabilities that we are offering under Intl, under the APIs available there, that are useful for non-localization use cases, for locale-independent things, and also for testing to some extent, or that could be useful for testing. Because right now, as it is defined, the output of any of the Intl formatters can be anything: any string, or, for formatted parts, any array. We have no way of validating that any of these things work as expected. This has led us to a situation where we have capabilities that we are offering to developers that they do abuse, and this means that they are kind of living dangerously, because we might change the formatting at any time. But on the other hand, because developers are doing this, it becomes very difficult to change any of the details about how in particular en-US formatting happens, because it is used—the parsed output there is used for things. + +[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_101] + +EAO: So the sorts of things we’re talking about here: for example, for now, at the moment, before Temporal is available everywhere, if you want to format a date using year-month-day, the way to do it in JavaScript directly is by doing something silly: using a Swedish locale, which happens to use year-month-day as the date format. Right now it’s also possible to use the u-ca tag, e.g., `en-u-ca-iso8601`, to get the formatting.
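In today's JavaScript, those hacks look like this (the outputs shown are what current engines happen to produce; nothing guarantees them):

```js
// Abusing the Swedish locale to get year-month-day output:
new Date(2025, 1, 19).toLocaleDateString("sv"); // "2025-02-19"

// Using the u-ca extension to request the ISO 8601 calendar:
new Date(2025, 1, 19).toLocaleDateString("en-u-ca-iso8601"); // "2025-02-19", currently
```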
It’s not clear whether that stays stable as well, or whether that ends up with different separators being used. Another example: if you want to format a compact number using SI metric prefixes, you can almost get it to work using English, which happens to give you a capital K for a thousand; but for a billion it uses a capital B rather than the SI capital G. And then there are not just the formatters but the other places: the Collator and Segmenter on Intl have capabilities that are effectively only available on the root locale, and right now you can get at that capability if you happen to use a locale like English, which does not override the root collation with any customization; but that will not be the case forever. We have the locale-dependent things being used for locale-independent reasons. This is not really that great. + +[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_106] + +EAO: The Segmenter is another example: there is a note defining the general default algorithm, but it is still recommended that tailorings of it are used. I think the ICU4X implementation also uses this locale-independent algorithm when segmenting. So how do we fix this? How do we make the situation better? + +[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_111] + +EAO: When this was accepted for Stage 1, I presented two different solutions that we could approach. The first solution would be to identify all of the ways in which the ECMA-402 Intl stuff can be and is being abused (where we are providing capabilities that are not available in 262) and then find ways to make those available directly in 262. For dates and date formatting we have Temporal, of course. But for most of the other cases that we can think of, there is no clear answer: how do you work with durations? How do you get number formatting to be customized? How do you do segmentation and collation? What if you need formatting to parts, for instance? Formatted parts are something that exists only on the Intl side. So this is a direction we could go in: to look into these solutions and kind of fine-tune things for each of these. And the benefit of doing this would be that it would not introduce any non-localization use cases into ECMA-402 that ECMA-402 does not currently have. + +[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_116] + +EAO: But the second possibility that we could go with is to add a null locale to ECMA-402. This would not add any new APIs, but it would allow for the use of the value null specifically as a locale identifier. That’s currently an error. And it would be canonicalized to the code ‘zxx’, which is used for "no linguistic content; not applicable". It would be nice if we could use ‘und’, but that is effectively an overloaded term: CLDR has a clear behavior for ‘und’, and that behavior is relatively well defined in a number of different environments. ‘zxx’, on the other hand, is not defined pretty much anywhere, but it is a valid locale code, so defining behavior for ‘zxx’ would not conflict with any existing definition of what happens there. Then what we need to do is define explicitly what happens when you use a null locale in the Intl APIs, in order to make those APIs provide utility and to solve the abuses of those APIs.
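+
+To make the shape of this concrete, here is a minimal sketch; the `null`/‘zxx’ lines are only the proposal's sketch, not anything shipped, and the exact outputs are illustrative:
+
+```js
+// Today's workaround: abuse a locale that happens to format dates as YYYY-MM-DD.
+new Date(2025, 1, 14).toLocaleDateString('sv-SE'); // "2025-02-14"
+
+// Under the proposal (hypothetical): null would canonicalize to 'zxx'.
+const nf = new Intl.NumberFormat(null, { style: 'currency', currency: 'EUR' });
+nf.resolvedOptions().locale; // "zxx" (proposed)
+nf.format(1234.5);           // "1234.5 EUR" (proposed: value, space, ISO code)
+```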
+ +[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_166] + +EAO: That’s what I’m here to ask you to accept: the second solution, of adding a null locale to 402, and explicitly defining what it means when you use a null locale; and for this to be the direction in which to start working on what Stage 2 of this proposal would look like. Now, for the rest of this presentation I’m going to run through a draft of what the Intl APIs would look like with a null locale. This has been worked through with TG2, and these are bare-bones ideas, but at least a starting point for what would be useful for users, what would not add data size requirements, and what would be, or should be, implementable relatively easily. + +[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_171] + +EAO: So, as I just said, this is what follows. The Collator with a null locale would use the CLDR root collation. There’s a little bit of variance here because of exactly how the browsers that currently ship this work. Another thing I should note is that, yes, this proposal is called stable formatting. But when talking about APIs that consume localized content, like the Collator, Segmenter, and upper- and lowercasing, these APIs are not necessarily completely stable. What is presented here is effectively the stablest possible thing for them that is also useful, allowing, at least for now, the same sort of behavior to happen in the different environments where this code is run. + +[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_179] + +EAO: These are in alphabetical order. For DateTimeFormat, the idea is to match as closely as possible whatever Temporal does. Because `Intl.DateTimeFormat` goes a little bit beyond the formatting you can get out of Temporal, we do need to define exactly the sorts of cases for how that works and what the output of each is. Details, details, details. + +[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_201] + +EAO: For DisplayNames, which, as a refresher, gives you a localized display name of, for example, languages and regions: it already has a behavior of falling back to the requested code or to `undefined`, depending on whether the `fallback` option is set in its constructor. For `Intl.DisplayNames` with the null locale, we would always do the fallback. + +[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_207] + +EAO: For DurationFormat, the output would be an ISO 8601-2 duration. This is a string that starts with the capital letter P, and then there’s a specific format for the rest of the output. This, for instance, is used in the HTML time element, and possibly elsewhere as well. + +[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_220] + +EAO: For lists, we would be ignoring the type option. That option defines whether the list is formatted as an “and” or an “or” type of list. And the list items would be separated by either a comma followed by a space, or just a space. + +EAO: For `Intl.Locale`, which gives information about the locale, I haven’t sketched out what that would look like.
That would need to be done better. + +[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_233] + +EAO: For NumberFormat. NumberFormat does a whole lot, because we have kind of overloaded it. The whole idea would be that the numeric part of the output would also satisfy the StrNumericLiteral grammar. But then, because you can do, for example, currency or percent formatting, these need definitions. For currency, for example, the output would have the numeric value, followed by a space, followed by the ISO currency code. Note that this is different from what English usually does, because most locales put the currency code after the value, and all of the other things proposed here put the code or other identifier after the value; so, to match that, that’s the proposed solution here. With percent, it would put the percent sign after the value. + +[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_240] + +EAO: For unit formatting, this needs a little bit more definition, specifically for the short form of unit formatting. We can define a table of the identifiers that would be printed. We can derive it from the SI units and units close to those: using, for example, l for liters and capital TB for terabytes. The short unit identifiers will need a separate table. Also, compound units (for example, meters per second) would work with a slash between them. + +[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_246] + +EAO: We also have `notation: ‘compact’` as a thing. These would use the SI metric prefixes that we have defined for the values that this affects. + +[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_261] + +EAO: And for PluralRules, this would always return the `other` category, no matter what other options you give it and what input you give to select or selectRange. + +[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_268] + +EAO: For `Intl.RelativeTimeFormat`, the result would also be an ISO 8601-2 duration, but prefixed with a plus sign or a minus sign to indicate the relative direction of the formatted time. This is valid in ISO 8601-2 specifically. + +EAO: And `Intl.Segmenter` would use UAX #29 segmentation with extended grapheme clusters. For some details of the Segmenter and Collator, there’s an issue open on the repo for defining more exactly how that goes. + +EAO: Then we also have a couple of places where we need to define the behavior elsewhere: for `Array.prototype.toLocaleString`, we would define that the comma is used as the separator. + +EAO: And for the toLocaleLowerCase and toLocaleUpperCase string methods, we would use the Unicode Default Case Conversion. + +EAO: And that’s it. So, a whole lot of somewhat Intl-specific implementation details here that we would need to polish up and put together into a Stage 2 proposal. But the key thing that I’m here to ask is: would it be okay to start proceeding with this proposal in the direction that I’ve sketched out here, or is there a need to either not proceed, or to try and proceed in the other direction that the proposal allows for? When I discussed and raised this in TG2 (it was like last week or two weeks ago), that group gave, I think, quite good support overall for “please let us proceed with the null locale direction on this one”. But that’s it for me.
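+
+As a quick illustration of the draft behaviors EAO walked through (everything here is hypothetical: `null` as a locale is an error today, and these outputs are the proposal's draft, not shipped behavior):
+
+```js
+const lf = new Intl.ListFormat(null);
+lf.format(['a', 'b', 'c']);            // "a, b, c": comma plus space, `type` ignored
+
+const df = new Intl.DurationFormat(null);
+df.format({ hours: 1, minutes: 30 });  // "PT1H30M": an ISO 8601-2 duration
+
+const uf = new Intl.NumberFormat(null, { style: 'unit', unit: 'meter-per-second' });
+uf.format(3);                          // "3 m/s": short SI identifier, slash for compounds
+
+new Intl.PluralRules(null).select(1);  // "other": always the `other` category
+```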
+ +DE: So this is really interesting. This is a lot of stuff for us to define. I was imagining that such definitions would come from CLDR? Have you discussed it with CLDR upstream; are they interested in defining data for this? + +EAO: Not directly. The intent would be to explicitly define this behavior in 402, in order to ensure that an upstream change in CLDR would not force a change in our behavior. But it ought to be possible, indeed, to have the CLDR data for ‘zxx’ provide the behavior as described here, so that the same pathways that the code uses right now could also be used for ‘zxx’, the null locale, as otherwise. + +DE: As you know, Unicode has many kinds of stability guarantees. I would prefer this be defined as something at the Unicode Consortium level with stability guarantees, and we use that downstream. If we have notes or normative text in the ECMA-402 specification that indicates or repeats this information, that doesn’t seem bad, but I would prefer that the data be driven from the Unicode Consortium, unless they tell us that they don’t want to be responsible for it and they prefer that we’re responsible for it. + +SYG: I have a clarifying question. Possibility Number 2 does not require CLDR, currently. Yet the null locale stuff would still mostly be accessed via Intl. For an implementation that does not ship Intl, it would still not have access to the null locale? + +EAO: That, I think, depends on the implementation and how it decides to handle the requirements that we put on supporting 402. I believe it is currently technically valid for an implementation to support 402 but only for a very, very limited subset of locales. For example, an implementation supporting 402 but only supporting the null locale would technically support 402. + +SYG: I see. Okay, thanks. + +RGN: That was a great lead-in, because the kind of environments that Agoric cares about would basically patch in only support for stability when it comes to formatting. This would allow an introduction of Intl specifically for the deterministic behavior that we’re talking about here. I also appreciate that you drew a distinction between locale-independent versus stable. And I have a strong preference for the latter. It’s not clear to me that we get a lot of the benefits from this proposal if the behavior is locale-independent but can change over time, because then we’re right back at not having reliable consumption of the output. So, strong support for it. I appreciate the distinction, and I specifically want stable, not just locale-independent, behavior. + +KG: So this seems like a good thing in general. And thank you for giving the presentation about what the behavior would be for each of these things, or for most of these things. It seems like for perhaps half of them there are obvious canonical choices. `Array.prototype.toLocaleString` can do the same thing as `Array.prototype.toString`; that’s fine. But for some of these, putting the currency symbol after a space after the quantity of currency, say: to what extent is that the canonical answer for how to do stable currency formatting? I would feel uncomfortable making arbitrary choices for any of these and assigning them to the sole canonical locale.
If we’re going to be making arbitrary choices, I would be happier to have some other way of specifying things, a way to ask for the particular behavior that isn’t locale-sensitive, rather than just ascribing canonical status to a particular region-dependent choice. And my question is: are all of these in fact canonical already, or are they arbitrary choices that we’re making? + +EAO: Many of these are effectively arbitrary. Some are canonical; for instance, the duration formatting using ISO 8601 duration strings is effectively canonical. But for the specific thing you mentioned, currency formatting, there are common practices. And when you look at the common practices across all locales, the common thing is to have the value, followed by a space, followed by the indicator. Noting specifically, though, that because these APIs do support formatted-parts output, it would be relatively easy to consume that output, in particular if the parts come in a known and well-defined order, and rearrange them for presentation if that is requested. + +KG: Thanks. I guess I’m fine with that. I do feel a little weird about declaring particular things to be canonical, but I see the value in it as well. + +SFC: There we go. I want to talk a little bit about the use cases here and how the use cases overlap but also diverge. There are three reasons why I think this type of proposal is motivated. The first, of course, is stable behavior; that’s the title of the proposal: a lot of the issues we have seen previously are about developers expecting that Intl APIs behave a certain way, and then, when that behavior changes because of language and locale changes, their code breaks. So obviously that’s a use case. A second use case here is this anti-pattern that we always discourage programmers from, but that I see people do all the time: I have an application, and I’m going to take screenshots of that application and check that the screenshots are consistent, and then you upgrade Chrome and they break. I call it the testing use case, with the screenshots as an example. That’s the second use case here. If you have an application that is fully plugged in with Intl and you just switch the locale to the null locale, you then have a certain invariance that you can rely on for testing purposes. And a third use case that has been raised in the TG2 meetings is this idea that, well, you want to have access to root collation and root segmentation, which use rules that come directly from the Unicode standard: rules that are not locale-dependent and are currently not possible to access, because to use these APIs you must specify a locale, and any locale is subject to tailorings. The ability to access root collation and segmentation is not currently available, and this proposal could make it available.
Like, this is really useful, and it serves really well for the stable behavior, you know, value proposition that we have. Does it serve well for the testing use case? Maybe not, because I don’t know of any locale that would display durations in quite this form. It certainly wouldn’t be appropriate for, you know, especially in a long form, testing things like: do you have enough space to display your duration, and things like this? Pseudo-localization is a better solution for that problem. But if you have this in the language, people will use it for testing even if it’s not the right solution. We could make it closer to the right one for that use case by doing number-space-unit, for example. I guess my conclusion to this comment is that I think this is a proposal that solves a lot of different problems, and it might be good to have a guiding principle about which problem you consider to be the main one that you’re trying to solve, and then use that to guide the specific behavior that we implement for each of these cases, since we do have to look at each specific case. + +RGN: Speaking to your final question, I support approach Number 2, the null locale / pseudo-locale type of representation; it makes a lot of sense, and it’s something I can see using. Thanks. + +JGT: So, first of all, I think this proposal is great. I’m really happy to see it. To follow up on what SFC was saying, I think it addresses a lot of challenging cases today. The only concern I would have is that it is pretty common to use undefined as a locale today, when creating, like, an `Intl.DateTimeFormat` or in other cases, only putting undefined there because you have later parameters, like options, that need to go in there. For me at least, it is a little weird that undefined and null have very different behavior. Maybe that makes sense to the folks in this room, because we’re really familiar with those differences, but I would worry that less experienced engineers get tripped up by that. I was wondering if you considered alternate names that are strings, an actual locale name, instead of null. That’s it. + +EAO: So, specifically, the string that is proposed to also work as an alternative to null is ‘zxx’. The reason why I’m proposing to also support null here is that ‘zxx’ is really hard to remember, and it is completely opaque as to “what does this mean”. To a reader, an explicit null would probably more clearly indicate that “no locale” is the message being sent, rather than ‘zxx’. But in a situation where there is a potential or a perception that confusion could occur, ‘zxx’ could be used to explicitly differentiate this from undefined. + +JGT: Are we prohibited by the syntax rules from using a string like ‘stable’ or ‘unknown’, or something that doesn’t look like a locale and is more discoverable for people who have never seen it before? + +EAO: There are possibly some issues (in particular, I heard this from the ICU4X team) with introducing something here other than what looks like a locale, because that would end up impacting a lot of what they can do in terms of optimizations around locales. + +JGT: Makes sense. Thanks. + +NRO: We talked about this internally at Igalia, and we have different positions; we do not have a shared position. Personally, I find it weird to have null and undefined with different behaviors: they are treated the same by nullish coalescing and differently by default parameters. We should really try to avoid more cases where they differ.
But on the other hand, we understand EAO’s point that ‘zxx’ is a similarly random string: you are only going to know that the string exists if you know how the various ISO 639 codes work. + +LCA: (via queue) +1 + +SFC: Just to note, we definitely discussed in the TG2 meetings how currently the undefined value has behavior that is basically equivalent to the string ‘und’. That is definitely not something that I think anyone had actually intended, but it’s currently the web reality in all major browsers. + +KG: Does that string do anything? Is there a locale corresponding to the string? + +SFC: Well, there is, but no browser ships it. So the `und` locale falls back to the host, which is also what the value undefined does: fall back to the host locale. There’s a correspondence there. And then the null locale would correspond to the other special string. So I don’t know if that changes anyone’s—I’m just making an observation to add to the puzzle. The reason undefined is special is that it maps to a specific value that also starts with the same three letters, whereas null does not map to anything that way. I don’t know if that changes anyone’s position, but I’ll throw it out there. + +NRO: This is a random idea, but if we don’t want to do null because of the confusion with undefined, and we worry about the string, can we use something else: a well-known single value on Intl, like a well-known symbol `Intl.StableFormat`, that we pass as the first argument? An example worth considering. + +SFC: My comment here is: the proposal is to add the ‘zxx’ locale, right? And then null is basically an alias for the ‘zxx’ locale. If you look at it from that perspective, I don’t see there necessarily being a problem with just adding an alias for it. + +KG: I didn’t understand the answer about why we wouldn’t use longer strings. Something about optimization in ICU4X or whatever, but that seems like it’s lower down the stack. Like, surely in the JS part of this, before calling into the library, you can translate that string to whatever other string you want. It seems like we should consider that space still open. Like, if we think that the string ‘stable’ is more clear and discoverable than anything previously discussed, and it just requires a translation at the boundary to some other thing that the underlying Intl library understands, that seems like it might be the best solution. I would like to consider that space open. I don’t think we are necessarily deciding on the exact way of getting the stable formatting right now. But I would like one of the possibilities to include particular non-locale-looking strings, unless there’s some other reason not to do it, or I misunderstood what the reason not to do it was. + +EAO: So, on that one: right now we are in a world where locale identifiers are becoming more and more regularized, and this means that the grammar for locale identifiers almost universally uses a two- or three-character primary tag, and then subtags after that, which fit a very well-defined mold by now. And this does, yes, support grandfathering in tags where the language identifier is either two or three characters or five to eight characters (like, for example, ‘stable’ would happen to fit in there), but it would be really great if we were not to introduce, effectively, a requirement for supporting that sort of locale identifier into the world. Also noting that there is a need, for example, as noted here for the Intl Collator at the bottom (which is why I moved to this slide), for adding subtags even possibly on the ‘zxx’ or null locale, and having a string identifier for that in addition to ‘zxx’ would certainly create an expectation for the subtags to work on the longer-form string. If we do not want to go with something like null as an alias, something much more discoverable would be a well-known symbol; was it `Intl.Stable` that was previously suggested? But I would want to push back against a longer string identifier here as an alias for ‘zxx’.
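+
+Side by side, the spellings under discussion look something like this (all of these are hypothetical ways of requesting stable behavior; today, `null` throws a TypeError and ‘zxx’ simply falls back like any other unsupported locale):
+
+```js
+new Intl.NumberFormat('zxx');       // the proposed locale code itself
+new Intl.NumberFormat(null);        // the proposed alias for 'zxx'
+new Intl.NumberFormat('stable');    // JGT/KG: a longer, non-locale-looking string
+new Intl.NumberFormat(Intl.Stable); // NRO: a well-known value on Intl
+```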
+ +KG: I’m not suggesting that you would introduce a new tag. I’m suggesting that one of the valid inputs for this would be a string which is not a locale tag, and which at the precise boundary of the API is treated specially: if you pass it, Intl treats it differently from any other string, and perhaps translates it to a particular sentinel understood by the underlying library, or translates it to ‘zxx’, I don’t know. But I’m not suggesting the introduction of a new language tag ‘stable’; I’m suggesting that one of the inputs to this be the string ‘stable’. It’s a different thing. + +RGN: Responding to a point that SFC made earlier, I disagree that the ‘und’ locale is equivalent to the undefined input for locale, because ECMA-402 privileges and looks for the value undefined, and it’s what you get, for instance, if you provide an empty list of locales. Whereas with ‘und’, at any point in time an implementation could start shipping and supporting it with behavior that, as of that point, would be different from undefined. What undefined does in ECMA-402 is defer to the current default locale. ‘und’ is not guaranteed to have that behavior. + +PFC: I’d like to build on what DE said a while back, about recommending that this null or ‘zxx’ locale be defined by CLDR as part of that dataset. I think there’s a really good reason to require that it is defined as part of the locale dataset. The reason is: in Test262, we have been very interested in how to write locale-sensitive tests for functions like toLocaleString and the classes that live on the Intl object. And locale data can be updated as the understanding of best practices changes. So it’s difficult to find a balance between writing your tests to compare against a certain output, while also anticipating that the desired output might change over time as the data gets updated. So this null locale would be very helpful for writing tests like that. And if we defined it in the spec as a special case, so that the formatting was defined outside of the CLDR data tables or whatever, then there wouldn’t be much point in using it for testing, because it would be testing a separate codepath in implementations. So I personally think it would be better to require that the data source be defined elsewhere, outside of the spec. + +EAO: As a reply to that, just noting that I do believe that in the current sketch of a proposal for these APIs, the formatting behavior presented should all be representable and implementable in CLDR. The intent with the presentation of the direction here would be for us to possibly define what makes sense for JavaScript, how JavaScript should work in each of these cases, and, for the implementation side of that, either to, yes, go to CLDR and get agreement from them about those behaviors, or else to ensure that it’s possible to overlay custom data on top of CLDR that ensures that this exact behavior comes out of it.
If it’s not possible to get that already directly within CLDR and to have sufficient guarantees about stability from CLDR… I do not believe that CLDR currently, for example, guarantees that specifically the patterns and the formatting and so on for any locale are as stable as we want to have for ‘zxx’. Hence my initial desire to have the behavior be defined in 402, but to have the implementation, yes, come through the same pathways that other formatting uses. + +SFC: This is my comment about a stable Collator. A point that I think RGN made, that I just wanted to be clear about, is that for `Intl.Collator` and `Intl.Segmenter`, one of the use cases is to get at the root collation and root segmentation, and it is worth noting that these are not necessarily a hundred percent stable: when Unicode adds new code points or emoji or scripts, the behavior here will change. Text in a certain script will sort differently than it did before, because previously those were unassigned code points. It can also be the case that Unicode (this is not CLDR, but Unicode) will discover something new about a script that previously existed; I know there have been a lot of changes going on with the Mongolian script and so forth, and the collation rules and segmentation rules might also change. I just wanted to clarify whether that’s a concern. You know, one path you could take for collation with the ‘zxx’ locale is for `Intl.Collator` to do lexicographic sorting on the UTF-16 code units. That is stable. But it’s also not the root collation that we would like people to be able to access. So I just wanted to probe whether that’s a thing that we should be considering. + +RGN: I think yes, and yes, it is a concern. What is valuable here, I think, is not access to the root locale, but access to deterministic stability. Access to the root locale is itself valuable, but shouldn’t be mixed together with the concept of stability. So for collation, for instance, I would expect it to be strictly based on codepoint value, and therefore not change when a codepoint shifts from being undefined to being associated with a character. + +SFC: Following up on that: do you feel the same about segmentation? + +RGN: Yes. + +SFC: UAX#29 segmentation will change, and the grapheme clusters will change. + +RGN: So there are two different kinds of change there. One is that, because UAX#29 segmentation is dependent upon the classification of characters (you know, what category they fall into), a new version of Unicode can change that, and that would have an impact on segmentation. That, to me, is just part of the progression of Unicode as a simple collection of characters and is not concerning, because there’s a whole lot of other things that come along with that, and you already have such a dependence in the form of regular expression property escapes. The second kind of change would be to the rules themselves, a revision of UAX#29. And for that, I would hope that, no, we would stick with stability: we would actually snapshot a particular revision of UAX#29 and commit to that for all time in this stable behavior. + +SFC: It would be difficult to implement a forever-stable UAX#29 segmentation rule, but I think this is something that we can discuss later. We have time on the time box, and we can continue to probe this in the TG2 meeting.
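+
+A small illustration of the distinction being drawn here (the ‘zxx’ line is hypothetical):
+
+```js
+// Code unit order and collation order genuinely differ:
+'B' < 'a';                                 // true: code unit 0x42 < 0x61
+new Intl.Collator('en').compare('a', 'B'); // negative today: collation puts 'a' first
+
+// A 'zxx' Collator would have to pick one: fully stable code-unit order,
+// or the root collation, which evolves as Unicode assigns new characters.
+new Intl.Collator('zxx').compare('a', 'B');
+```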
+ +EAO: So, noting, yes, what SFC said and what was discussed: another way of putting it is that this proposal specifically provides for slightly different behaviors. It provides for stable behavior for everything except `Intl.Collator`, `Intl.Segmenter`, and the toLocale{Lower,Upper}Case functions, because those consume locale data; and for these, there is a very useful sort of root-locale behavior that is currently not accessible, and it would in general be very useful for it to become accessible. But, yes, relatively stable but not completely stable behavior is not, well, stable. One thing that could possibly be explored here is introducing a way specifically for the Collator, Segmenter, and toLocale{Lower,Upper}Case to access the root locale explicitly, using either the string ‘und’ (which is what CLDR and Unicode use for it) or some other method that we probably ought to discuss further in TG2. + +RGN: Agreed. I strongly support that, and in particular support it in a way that is distinct from requesting stability. + +EAO: And if implementing something like that, in particular for something like `Intl.Segmenter`: there is utility, I think high utility, in being able to access UAX#29 segmentation, but introducing a requirement at the spec level of always supporting one very specific version thereof seems like introducing a cost that is maybe not worthwhile. So for something like that, `Intl.Segmenter` with a ‘zxx’ locale should be doing something different, which is a topic we ought to discuss later in more detail. But none of the things proposed here for the Intl APIs is meant to be the final word, just the best guess so far at what ought to work and what would be useful for developers and users. + +RGN: To be clear, if it is not fully stable but partitioned in this way, then what you’ll see is that an environment which needs determinism would just exclude support for the unstable APIs. You would have, for instance, `Intl.NumberFormat` but not `Intl.Segmenter`. In a strict technical sense that is not actually supporting ECMA-402, but in practice that’s just what you get. + +USA: If that was it, then I think, EAO, you could dictate a summary. + +### Speaker's Summary of Key Points + +The alternatives were presented, and support was given for introducing a ‘zxx’ locale for stable formatting. The proposed alias null for ‘zxx’ was discussed, and some concerns were raised about its closeness to undefined, which has different behavior in this API. Alternatives to null as an alias were proposed, including a well-known symbol (`Intl.Stable`, I believe it was) or a longer string identifier. The non-stable root-locale behaviors of `Intl.Collator`, `Intl.Segmenter`, and the string toLocale{Upper,Lower}Case methods were discussed as distinct from stable behavior, and further discussion will be required to determine how to make those APIs able to access the root locale rather than exhibit stable behavior. + +### Conclusion + +* Stable Formatting [PR #18](https://github.com/tc39/proposal-stable-formatting/pull/18) can be merged. +* The alias for ‘zxx’ will need further consideration. +* If ‘zxx’ explicitly means “stable”, we may need another special locale identifier for the root locale. + +## `Error.captureStackTrace()` for Stage 1 + +Presenter: Matthew Gaudet (MAG) + +* [proposal](https://github.com/mgaudet/proposal-error-capturestacktrace) +* [slides](https://docs.google.com/presentation/d/1SFdS9n5JR7Jqz29s7ApvkqDOqOfPW-IaBR2orK828As/edit?usp=sharing) + +MAG: As the title says, this is a Stage 0 to 1 proposal for `Error.captureStackTrace()`. Chrome shipped `Error.captureStackTrace()` a long time ago.
I don’t actually have an original date, but I can find references to it as early as 2015. It’s been around for a long time. It was a Chrome-only API, and didn’t pose much in the way of a web compatibility issue, because if somebody tested in, let’s say, Safari, they would catch the problem. However, in August of 2023, JSC/WebKit shipped the method. Now, in order to avoid web interoperability issues, we will ship it too. I have an implementation and just need to have the time to unflag it. With three engines shipping it, maybe we should spec it. That’s why I’m here. The documentation of what this thing is and what it does is largely contained in the V8 stack trace documentation. + +MAG: You give `Error.captureStackTrace()` some object. This can be any object, and it will apply a `stack` property to it, in some manner, that will give you the current stack. So you can just give it an empty object: pass it in, call `Error.captureStackTrace()` on it, and you end up with a `stack` property on it holding the current stack. There’s an optional second argument, a constructor, that allows frames to be elided: walking the stack, no frames are included until this constructor has been seen. If you give it something that hasn’t been called, you get an empty stack. There is some divergence in the implementations. So, for example, V8 installs a getter property; JSC defines a string-valued data property. We are following JSC right now, but this is a point of discussion. + +MAG: Should we treat objects which have an existing `[[ErrorData]]` slot any differently? Right now the answer is no. So if you, for example, have an error object and you delete the stack property (whether it’s an own property on the object or on the prototype, you hide it), and then you apply `Error.captureStackTrace()`, you now have an own property that is `stack`; and then you could use the (maybe) captured stack getter to check what the original stack trace is: has it been censored? This is an implementation decision that we could spec, if we decide to spec it. That’s really it. I mean, this is a Stage 0 to 1 proposal. There is this thing; there’s not a lot of design space here. There exists an implementation that’s been around the web, Chrome-only, for the past decade. We probably don’t want to change too much. For how we should spec it, we have two different choices. Really, right now, the ask is: should we do Stage 1? With that, I open it to discussion and questions.
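+
+For reference, a minimal sketch of the API as just described (V8-style semantics; whether `stack` ends up a data property or an accessor is exactly the divergence under discussion):
+
+```js
+function MyError(message) {
+  this.message = message;
+  // Attaches a `stack` property to `this` with the current stack,
+  // eliding MyError itself and every frame above it.
+  Error.captureStackTrace(this, MyError);
+}
+
+const plain = {};
+Error.captureStackTrace(plain); // works on any object, not just errors
+typeof plain.stack;             // "string" with JSC's data-property approach
+```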
+ +JHD: I mean, I’m not sure if it was common knowledge in this room that JSC shipped it that long ago, but is there a reason that they needed it? I can’t click on the link from the slide I’m looking at with my eyes. Why was it shipped in the first place in JSC? + +MAG: So the stated reason (and KM or MLS, if on the call, can maybe weigh in on this), but KM weighed in that doing this made a benchmark called the web tooling benchmark four and a half percent faster. We also saw a small improvement on this same benchmark. We are shipping it because we get a slow trickle of bugs where a website didn’t test in Firefox and is now broken. + +JHD: Right. So I understand, if it exists in two browsers, why the third browser must ship it. I’m questioning the precondition there: like, if the only reason to ship a not even fully compatible implementation in JSC was to make a benchmark a little faster, can they just unship it? That implies that the web didn’t depend on it. + +MLS: So why would we want to unship it now, at this point? We’ve had it out there for coming up on two years, and we’d have a web incompat issue: Chrome uses it, we ship it in Safari, and more people use it. We’d basically break ourselves. I don’t see much motivation for doing that. + +JHD: It’s already different, in some aspects that won’t work the same anyway. + +MLS: The stack traces are different, and we have the API. + +JHD: The contents. The getter versus the string property. I mean – + +MAG: People don’t really know the difference. + +JHD: If that’s the case, that should also mean we’re free to specify either choice, and V8 and JSC should be able to match it. If people can’t tell the difference, it’s not a web reality issue. + +MLS: It depends on what level of difference developers are willing to tolerate. But now MAG wants to do some standardization of this, and then we can have some discussion of what the implementations that are already shipping it would be willing to – + +JHD: I guess that’s a reasonable reply as to why JSC would choose not to unship it, because that would just create bugs, right? I get that. But that still tells me that there is, in fact, some design space for what it does and exactly how it works, in the sense that there are three different sets of behavior in the three browsers right now. And separately, the fact that JSC shipped it for no reason except to move a benchmark, as it seems, means it wasn’t users asking for it. And I don’t know how it was announced when it was shipped, or if anyone even noticed, because it was news to all of us in this room. + +MLS: We announce in our preview version all the things that we add. But we didn’t broadly announce it. It wasn’t a standard, right? + +JHD: That’s what I mean. I wonder if anybody even noticed. And so – + +MLS: I think they would notice—we would notice now, because we would get bugs saying you don’t have this anymore. + +JHD: Okay. + +DE: On this conversation between JHD and Michael: there are multiple ways of defining and analyzing web compatibility. There’s the theoretical way, used a lot in the ES6 era, of talking about intersection semantics: if something is not supported by all the browsers, or most of the browsers, it doesn’t really count for web compatibility. That reasoning gave us Annex B 3.3 for sloppy mode function hoisting, something that was web incompatible and that the browsers had to fix. The way that browsers analyze web compatibility is more empirical, about what might actually be going on, rather than this abstract intersection between all the browsers. One empirical thing that happens is that a lot of websites target the mobile web, which is WebKit plus Chromium, unfortunately. And if a function has been there for a couple of years in the mobile web, it’s quite likely that people have come to depend on it. So, sure, it’s theoretically unmotivated that the mobile web should be a thing to maintain compatibility around, but it’s practical. + +DE: Overall, the burden of proof when thinking about web compatibility falls on the one who wants to change something that is already shipping, because it ends up being a lot of work for browsers to either investigate further whether something would be compatible, or to ship it and see if something goes wrong. So our default position should be to just not actually change those kinds of things, rather than the default position being “you haven’t proved it’s really necessary” or something like that. So I think it’s a little bit roundabout. That’s all.
+ +JHD: Regardless of the outcome of this proposal and this item, I’d like to request that all the browser implementers in the room, if they’re going to ship something that’s not in, like, HTML, 262, or 402 (the sets of standards that we would consider to be standards), please bring it to this body first, just so we’re aware of it. + +MAG: That’s why I’m here. + +JHD: Thank you, MAG. That would give us the chance, before two browsers have shipped it, to figure out if there’s a thing we need to standardize, and to specify it in a way that avoids compatibility problems, and make sure all the browsers ship it at the same time, or make sure all the others ship it if it’s already in one. I guess I’m just asking that if you’re going to ship something that is non-standard, you kind of give us a heads up. Not asking for permission; that’s not the dynamic. But just letting us know. + +SYG: Which item is—I just entered that one. I meant to go after MLS. + +CDA: It was abrupt. I thought it might have been a response to what was being said at the time. So if you want to talk about the benchmarks, go ahead. + +SYG: I hope to impress on the room (JHD, I think it’s important to realize this) that benchmarks are not some flimsy reason that things get done. They are one of the deepest, most fundamental reasons that anything gets done in JS VMs, and if you frame it as “it was just for a benchmark, so you can undo it”, that is almost never the case. + +MLS: I’m next. What drives our development is benchmarks, and features, and security mitigations or security features. Then the question (and it’s somewhat rhetorical) is: suppose we’re going to ship something for which there is a de facto standard, or we’re going to ship something that we came up with ourselves, and the thing we came up with is not standardized. Who do we communicate that to? And if it’s JSC-only or Safari-only (you know, which we do), and we then think we should standardize it—being able to access Apple Pay on a web page, say—I think we’re champions of that. So are you saying that we should, at every plenary, say: okay, in the last two months we shipped these features that are not part of 262 or 402 or anything else, and we shipped them in JSC; is that what you want to know? + +JHD: Yes, if it is a JS feature. If it is CSS, then the CSS group may be a better place to bring it. Yeah, if you, being any one browser, think it’s worth shipping a thing that’s not part of a standard, there’s a motivating use case behind it. It may not matter to the rest of the group, and it may not be—maybe you’re trying things out and not aiming for standardization; that’s still fine. But the whole point of this sort of collaboration is that we can get input from perspectives that we may not have considered. + +MLS: Should XS and Babel do the same thing? + +JHD: Ideally, yeah. + +JHD: I mean, I’m not asking for a requirement. I’m not saying everyone must do this. I’m requesting, in the spirit of collaboration, that there be maximum notification, especially when it’s early enough in the process that things can be caught or changes could be made. The deviation on the previous – or one of two slides ago could have been – + +MLS: In this case, you had Chrome shipping it for like ten years. + +JHD: Right. So the existence of it in Chrome is not a surprise. And everyone knows that. + +MLS: And in this case, it seems unwarranted that we would need to say, you know, “by the way, we shipped something eight years after Chrome shipped it” – + +JHD: I mean, there are three browsers.
When two ship a thing, that’s a meaningful ship, and it would be helpful if we all had the opportunity to be aware of the possibility before it is too late. + +CDA: I’m just going to interject real quick. I think that’s a great topic, the conversation about keeping folks in the loop. I see DE’s comment on there. It does strike me as a little bit orthogonal to this topic. So it might be best to move on. + +DE: Do you want to decide who is the next speaker, then? + +CDA: I don’t want to completely stifle the discussion. But SYG, please go ahead. + +SYG: I think if you think this whole discussion is not productive for now, I would rather we go on to the actual—the next new topic. + +CDA: Just to be clear, I think it’s a really interesting topic. I just feel like it maybe deserves its own item; we’re here talking about `Error.captureStackTrace()`, and not the meta-problem of, you know, browsers shipping things that might be of interest to the committee. + +MM: I just want to make a distinction, which I think JHD and I are aligned on: the thing that determines whether it’s, so to speak, a JavaScript thing versus a host thing is “is it a property or behavior on a JavaScript intrinsic?”. In this case it’s obviously a method on the Error constructor, which is a JavaScript intrinsic, and I agree with JHD: there’s no requirement here, but it would be very helpful. And I just want to point out, regarding the thing that’s been in V8 forever, that supposedly was no surprise, and where therefore nobody would have particularly benefited by being informed or been hurt by not being informed: it turned out V8 recently changed the property created by their prepareStackTrace machinery from an own data property to an own accessor, and that caused us, and companies that collaborate with us, to have to do a mad scramble around an introduced insecurity that took us by surprise, because nobody thought it was interesting enough news to inform people about. Turning it into an own accessor was a real disaster, and it is still a security problem for us that we cannot fix pleasantly outside of the language. So yes, please: if it’s on an intrinsic, then it is potentially interesting to many people here. + +DE: We can skip this. This is an interesting meta topic for later. + +MM: So, just closing the loop on the same point in another way, with regard to captureStackTrace: I like the direction this thing is going with regard to making an own data property. I just want to say that our position is that we would not accept this proposal if it were creating an own accessor property. We like it as an own data property. + +SYG: One of my greatest regrets is having to deal with `Error.prepareStackTrace()`. Do you want to standardize that as well, or just `Error.captureStackTrace()`, which magically makes a stack trace property? + +MAG: Yup, I have zero interest in trying to pursue `Error.prepareStackTrace()`. In the absence of it becoming a web compat problem, I don’t plan to look at it. + +SYG: Sounds good. Okay. + +CDA: That’s it for the queue. + +MAG: The implicit ask being Stage 1: I would be willing to push this forward in the data-property direction. Any objections, or support? + +JHD: Sorry, I wanted to talk and didn’t put it on the queue. But can you go one slide further? + +MAG: I will attempt to. + +JHD: So, the error prototype stack getter is the next topic on the agenda, so I can talk more about that during that item.
But I would say yes is the correct answer here: captureStackTrace isn’t monkeying with internal slots, it’s just installing a data property. + +MAG: That’s what our implementation does today, and it makes perfect sense. It is a design that people may have different opinions on, but I agree. + +JHD: Yeah, with that on the record, better to ship it than—Stage 1 is fine. Web compat is a problem. + +DE: I think this is a really good proposal. There’s a lot going on with errors that is hard to unify between browsers, and for the things that JavaScript engines do in general, the things that we can specify, we really should specify. It’s great to get capturing the stack trace faster. We have use cases inside of Bloomberg where we want to capture the stack trace and turn errors lower. So I support Stage 1. + +MM: So, yes, support for Stage 1, with a data property. + +MAG: With that, I can cede my time, and I can jump ahead if we want. + +CDA: Any objections to Stage 1 for captureStackTrace? You have Stage 1. All right. + +### Speaker's Summary of Key Points + +It sounds like most people would strongly prefer that 1) this produces a data property, and 2) this does not interfere with or touch the `[[ErrorData]]` internal slot, should it exist. I will pursue those choices. + +### Conclusion + +Stage 1 advanced + +## Discussion about shipping non-standard features + +CDA: We have like eight minutes left, and now I realize I sort of stifled a little bit of the discussion on the topic of vendors and JavaScript implementers shipping things and notifying the committee; if that’s a topic folks would like to return to, we do have a few minutes. + +JHD: I will say I’m not proposing a process change; it’s just a polite request. There are people here who care and would like to hear about stuff—non-standard stuff—before it gets shipped. It’s fine if there are people here who don’t care and don’t think it’s valuable. If you don’t want to take up plenary time, then throwing the issue over the wall on the Reflector, or dropping something in Matrix—finding some way to give a heads up—would be a courtesy that is highly appreciated. I don’t know if there’s more to discuss beyond that. + +DE: I think DRR has had a good model in TypeScript, sometimes explaining to the committee what kinds of features are coming. This is useful: for the various different, you know, JavaScript supersets (JavaScript with extra APIs or extra syntax), it’s useful for us to know what’s going on, whether it’s before or after shipping. Obviously earlier is kind of nicer, but sometimes it feels too early; of course, that’s up to whoever is doing the presentation. It’s really important that this is not understood to be a time to object. Otherwise, we’ll just scare away presenters. But I think, even though it traditionally feels a little bit off topic, because we’re always discussing proposals advancing stages, just having presentations about what’s going on will be really helpful for the committee. + +CDA: Any other comments before we move to the next topic? + +MAG: I am curious, for JSC and V8: are there non-standard things that you’re shipping or planning to ship? Like, at least for us, for JavaScript, the only non-standard stuff we’ve got is very internal-facing, so it’s not exposed to the web and is exposed only to developers within Mozilla. What are your plans for shipping non-standard stuff, or is it behind non-standard trials and other boundaries to stop it from escaping containment? + +MLS: We typically don’t ship non-standard features.
We tend not to do that. Most of the work we do is features and security mitigations and performance tuning. It’s rare that we ship something that is not standard. SYG had a comment that if you want to see what is going on, there are email lists for both Chrome and for Mozilla, and there are the STP release notes: we make sure to put this change and that change in the release notes, and every two weeks we get to add things to them—I understand what this is; is this okay? Yes? Okay, have you described this in a way that makes sense? + +SYG: There’s nothing new on the pure JS side that we’re planning to ship that is non-standard. At this point, I think anything new that has observable behavior poses too great an interop risk. That said, V8 shipped stuff in the distant past that we continue to live with, like captureStackTrace; and there’s also, I think (what is it called?) `v8.BreakIterator` or something, that was superseded by `Intl.Collator`, that we would love to remove, but unfortunately people still use it. So there are examples of things in the distant past. But we’re not planning to ship anything new that may have any observable behavior that would pose any interop risk. An interesting thing that we may ship, and that is in an origin trial right now, is this compile hints thing, which is purely for hinting when to parse something to improve startup speed. There is nothing observable going on there. And this is a thing that, if you were in Tokyo, my colleague Marja (MHA) presented back then. + +RGN: `Intl.v8BreakIterator` was superseded by `Intl.Segmenter`, but still exists. + +SYG: We would love to unship it, but have to wait for the use counters to go down. + +DE: So I’m glad you presented on parse hints. Even when something is expected not to have interop risk, like parse hints, it can still be interesting and helpful for everyone for it to be presented to committee. I hope that, as this or other features evolve, you can bring them back to committee for future discussion. I also note that parse hints are very dependent on tooling adopting them for their effectiveness, and TC39 is a great way to be in touch with tools to get visibility. + +JHD: Yeah, so, to completely echo everything DE just said: the whole point of open source is that the more eyes see a thing, the higher the chance that problems will be caught and things will move in a better direction. There’s obviously a too-many-cooks balance to strike there, but, yeah, I would love to see more early collaboration, even about things that have no interop risk or are not expected to be used in other engines. + +### Summary + +The committee discussed preferences for notification when implementers are shipping features that don't currently exist in the language. + +## Error Stack Accessor + +Presenter: Jordan Harband (JHD) + +* [proposal](https://github.com/ljharb/proposal-error-stack-accessor) +* no slides + +JHD: So, error stack accessor. I have had the larger error stack proposal going for nearly a decade now. Some of the feedback I got the last time I brought it up (I think that was the last in-person plenary, or perhaps the previous one) was to try and split it up, so that each piece—the “standardize existing stuff” piece and the “add new capability” piece—could be discussed and implemented and advanced separately. I’ve done that. This proposal attempts to standardize, effectively, only the stuff that’s already there. The spec here is hopefully pretty straightforward. Basically, it’s an accessor property on the prototype. It doesn’t belong on individual errors.
The getter throws if it’s called on a non-object. For web compatibility, if it is called on a non-error object, it returns undefined; and for error objects, the result is implementation-defined, “implementation-defined” being the magic spell by which browsers can keep doing exactly the same thing they’re already doing, without trying to step into that minefield. The setter, for the same web compat reason, throws on a non-object receiver; it takes the assigned argument (a setter will always get exactly one), throws if the receiver is not an error object, and sets an own data property on the error instance with whatever you pass into it. So it shadows the getter. The getter will continue to work: if you borrow it and `.call()` it on the error object, you still get the original stack. That is how all accessor properties on prototypes that reveal internal slot information work in the language: when there’s a shadowing own property on the instance, the borrowed accessor still functions. That is important for the language, and it matters for captureStackTrace: if you use captureStackTrace and provide an alternative stack by eliding frames, that is still just an own property shadowing the getter, and the getter can pierce through that and return the slot value. Or, you know, it’s not actually a slot value, because it’s not stored in the `[[ErrorData]]` slot. But that’s also a bit of a hand-wavy thing, because it’s very complicated to figure out how one would store that thing in the slot without also having to describe how one constructs it and what its contents are; and that is something that separate, future proposals should be focusing on. So I’m keeping that out of this one, to try and meet the feedback I got about splitting up the proposals. + +JHD: There are still some open questions that will need resolution before advancing beyond Stage 2. The answers to them will be some combination of “do the research” and “what is the union of what browsers already do?”. What would be the ideal behavior? Is it possible (like, web compatible) to change to the ideal behavior if it’s different, and are browsers willing to make that change in that case? Those are a lot of ifs, which will likely result in it just more or less matching what the majority of browsers already do. But these are perfectly acceptable and expected open questions that can be resolved within Stage 2. + +JHD: So I am hoping to advance to either Stage 1 or 2, and I would love to hear any thoughts on the queue before I ask for that. + +DLM: So, we started collecting some telemetry on what was proposed a few weeks ago, and the initial results are positive: everything that JHD mentioned would be web compatible. These are results from nightly builds, which aren’t typical of the user base. But I think this is a good idea, and I definitely support it for Stage 1 or 2. + +SYG: I have a question for DLM. I thought SpiderMonkey already had a getter-setter pair on `Error.prototype`. What is the telemetry data for? + +DLM: Specifically, checking for the `[[ErrorData]]` internal slot, as well as making the setter require a string; and those changes seem web compatible with the data I have so far. + +SYG: What was the second one? + +DLM: The setter, I believe, is specified to do nothing unless the argument is a string. It seems like that would be web compatible as well. That is something JHD was not sure about. + +JHD: The current specification on the screen does not check the type of the assigned value, the set argument. But there’s an open issue discussing that. What the current spec does with the setter is require that the receiver be an error object.
Personally I would love to restrict as much as possible, so I’m glad to hear that being a no-op when the assigned value is a non-string would be web compatible, and I can update the spec text in that event. + +OMT: I just wondered if the steps there include not having stack traces, because I don’t think the spec explicitly requires – + +JHD: Yes, that’s correct. It says an implementation-defined string that represents the stack trace of E; that is hopefully walking the line of saying: if you have stacks, put them here, and if you don’t have stacks, or have security reasons why you don’t want to give one, it’s cool. Implementation-defined means an empty string qualifies. + +OMT: I support that for stage 2. + +JHD: In the long run it would be great to be in a world where stack traces are fully defined in the spec. But that is lots of work and many proposals away, I suspect. + +OMT: I agree that can be defined later. + +JHD: I see the queue is empty. I guess first, can I have consensus for Stage 1? The problem statement being specifying the currently non-standard stack accessor and mutator on `Error.prototype`? + +CDA: +1 for Stage 1 and 2 from OMT, and SYG has a question. + +SYG: I have a question about this [[ErrorData]] internal slot check. So given that V8’s nonstandard implementation is able to manifest stacks on non-error instances, on non-native-error instances, if you standardize this—if we standardize this, it is still the case that V8 will have those stack accessors or own properties or whatever they are—I think they’re own accessors right now—that don’t live on error instances. + +JHD: That’s correct. Setting aside that there are folks that really want to see the own accessors go away, that accessor is a completely distinct function from this one. And what captureStackTrace stamps on the object is unaffected by this, essentially. + +SYG: That ties into MAG’s proposal; I guess there needs to be enough leeway built in then to `Error.captureStackTrace` that doesn’t prohibit it from stamping a stack onto non-error instances. + +JHD: I’m trying to think about that. So `Error.captureStackTrace()`, I assume for web compatibility reasons and V8 compatibility reasons, must be able to put a stack string onto any arbitrary object. And that must not be prohibited—anything else sort of defeats the point of that proposal. Does that align? + +SYG: I’m just confirming that it basically needs to have that allowance for web compat. But if the thing you’re putting the stack onto is an error instance with [[ErrorData]], then this getter kicks in and it has these semantics. I want to confirm that that is the intention. + +JHD: I think that would be an open question for `Error.captureStackTrace()`: if you do `Error.captureStackTrace()` on an error object and alter the stack trace, should that alter the internal stack value on the error object such that the accessor reads it, or should it be completely unrelated? The one slide in the `Error.captureStackTrace()` proposal that I commented on—it should say it’s completely unrelated, and that you cannot use captureStackTrace to censor the actual stack as long as you have this getter available. An alternative implementation of `Error.captureStackTrace()` would insert into the slot on the error such that the getter reveals the censored stack, but I don’t think—like, that’s a cross-cutting concern. But I don’t think that’s a specific item in this proposal. It’s something that a choice within captureStackTrace will determine, without any change required in this proposal.
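+
+(A minimal sketch, not spec text, of the accessor pair JHD describes; `hasErrorData` and `originalStackOf` are hypothetical stand-ins for the [[ErrorData]] internal slot check and the implementation-defined stack lookup.)
+
+```js
+Object.defineProperty(Error.prototype, "stack", {
+  configurable: true,
+  get() {
+    if (Object(this) !== this) throw new TypeError("receiver must be an object");
+    if (!hasErrorData(this)) return undefined; // web compat: non-errors get undefined
+    return originalStackOf(this); // implementation-defined string
+  },
+  set(value) {
+    if (Object(this) !== this) throw new TypeError("receiver must be an object");
+    if (!hasErrorData(this)) throw new TypeError("receiver must be an Error");
+    // Shadow the accessor with an own data property; borrowing the getter
+    // and calling it on this object still reveals the original stack.
+    Object.defineProperty(this, "stack", {
+      value, writable: true, enumerable: false, configurable: true,
+    });
+  },
+});
+```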
+ +SYG: Right, okay. Thanks. + +MAH: I have a clarification question for SYG. You mentioned that there may be own accessor properties left by some implementations; what cases do you have in mind? + +SYG: Nothing concrete. I think if this gets standardized, likely one way to go here is that the own accessors disappear, because we would standardize that on error instances it be a prototype accessor—but that’s only for errors. For the non-error cases, we would have a choice to manifest those stacks as own accessors or as own data properties that are magical somehow. And since I know that you, Agoric, really want to get away from own accessors, that would be a time to try to present those as own data properties instead. There’s no concrete use case that we have for them to be own data or own accessors. But that would be a natural place to try to get away from it. + +MAH: For captureStackTrace we were in favor of just defining a data property, as we mentioned. There may be ways of having it be an accessor if you’re really interested in lazy evaluation of the stacks, for when it gets accessed, not when it gets defined, but that’s a topic for discussion on the captureStackTrace proposal. + +SYG: We would prefer it not be—to give more context on the behavior change for the stack accessor that caused the bug for Agoric: the reason it was changed is we had a bug. V8 had a bug where, prior to the change to an actual getter-setter pair, it looked like an own data property. Because it was lazy under the hood, and because we have the hateful thing of calling prepareStackTrace, which is user code, you had the case where you have an own data property with arbitrary side effects, because it ended up calling out to a possibly user-set prepareStackTrace. So to recover the invariant that data properties ought not to cause arbitrary side effects, we made the smallest-delta change, which was to make it into an own accessor pair. That’s how we got to an own accessor pair. I would not be in favor of having a magical data property that is lazy. That is something that we need to discuss going forward: what is the compatible way to do that? And if we don’t want to go back to that world and you really want a data property, and that precludes some kind of laziness, does that matter for the non-error instances? Probably not. But we should talk it through. + +MAH: That was going to be my question: does laziness even matter in that case? But that is another topic for– + +CDA: WH is asking what the Agoric bug is. + +MM: I can clarify that. The Agoric bug is: because syntax can cause the virtual machine to throw an error, and because the accessors were own accessors, it was not possible to virtualize the environment—to prepare the environment to virtualize stack access by replacing, for example, what would otherwise have been inherited accessors on the prototype, which we can replace in the prelude. Now, what is worse—that by itself was not fatal. What was fatal is that all of the own accessors have the same getter-setter, which obviously means that on error objects, since they’re the same getter-setter, they have to reach for the internal data anyway, so they would have had the same behavior had they been on the prototype. But because they were the same getter-setter, that getter-setter pair was undeniable, because it could be reached by syntax.
And you then had a global communication channel through objects that had the internal `[[ErrorData]]` slot, where one compartment could get the getter and another compartment could get the setter. If they had common access to an object that otherwise should not have enabled them to communicate, they could communicate. + +MM: It’s worse than just that they could communicate. If the setter had restricted things to strings, as one might have expected, then it would only be an information leak. The getter-setter pair did not restrict it to strings at all. You could pass arbitrary values through the undeniable getter-setter pair. The whole thing is a mess. What we’re doing to be relatively safe in the face of it is unpleasant and does not restore our safety guarantees. We have the burden of explaining the lowered safety with regard to capability leaks between compartments. It’s just a mess. Does that answer your question? + +WH: Kind of. How is this a global communication channel? + +MAH: It effectively acts like a single global WeakMap instance, where anyone can use the getter and setter to access the information for any object. + +MM: So remember that the presumption is that objects that are obviously stateless and frozen—if the object obviously has no hidden state, then sharing that object should not enable communication. + +WH: Okay. So this thing lets you attach an arbitrary field to any object whether it’s frozen or not? + +MAH: Yeah. It’s the same kind of issue as private fields stamping via the return override. You get to add information to an object that otherwise looks like it doesn’t carry any information. + +WH: Okay, thank you. + +MM: Thanks for prompting us to be explicit about that, because it took us a good long time to understand. It is kind of subtle. + +MM: SYG, I wanted to understand what kind of compatibility burden it might be for V8 to switch to the pair of behaviors we’re proposing to standardize here, with this presentation and the previous one. First of all, for everyone interested in captureStackTrace, including JSC and the proposal we just saw, I wanted to understand what the use case is that motivated captureStackTrace, and whether we believe that the vast majority of actual usage stays within that motivating use case. So the particular motivating use case I have in mind is objects that actually do inherit from `Error.prototype` but are not primitive errors—they’re just plain objects—and this basically dates from before ES6 classes, when, if you wanted to create what is effectively a new category of error type, a new error type, you would emulate that by having a plain object inherit from `Error.prototype` and stamp the stack on it with captureStackTrace. Everyone interested in `Error.captureStackTrace()`: is that everyone’s understanding of the motivating use case? Anyone have data that actual usage deviates from that pattern? I’ll take the lack of response to mean that nobody knows, unless somebody wants to say they know something. Okay, thank you. + +MM: So the other thing is that—for Error objects, because the own accessors do have the same getter-setter and therefore must behave by accessing what is effectively an internal property, for the Error object specifically, leaving aside the non-errors, I would think that moving the accessors up to `Error.prototype` should not affect the getting behavior. The setting behavior is more subtle, of course, because now rather than modifying the internal property, it would be overriding it on the instance with a data property.
But it’s hard to imagine that much actual, even V8-specific, code would break from that. I’m wondering if you have any intuition about that—specifically SYG or V8 people, if you have any intuition about that or, even better, any data on the pattern? + +SYG: Sorry. I think I lost the question. The question is: is there a compat worry with the setter behavior described here? + +MM: Yes. The question is both. But I’m separating it into two questions. The first question is: just moving the getter up to the prototype—since it’s the same getter on all of the accessors anyway, if it were just inherited rather than being own for the error object specifically, and for getting specifically—do you expect there to be any compat problems? + +SYG: We both expect and hope there to be no compat problems for the getter. + +MM: Right, okay. So now for the setter, moving it up to the prototype. The natural behavior for the setter, which is what JHD is proposing, is that it create an own data property on the instance, which is clearly different from what V8 currently does for error objects. Do you have an expectation about what kind of incompatibility that would be for V8 users? + +SYG: Unfortunately not; at this time I just don’t know. I agree there is more risk there. But I just don’t know who is doing this. + +MM: Okay. And then finally, for non-error objects, there might be an incompatibility caused by one aspect of JHD’s spec that I would propose to relax if it actually causes pain for V8 to adopt the proposal: the setter, if given a non-error object—even though the setter only creates an own data property, which it could have done on any object—checks the existence of the internal slot and rejects. I would want to keep the rejection on the getter, but the setter could waive the type check and add the data property to any object. Once the data property is on the object, the normal get on the object would get the data property, so the fact that the error check remains on the getter would not affect what I would expect to be normal usage. + +JHD: You’re talking about removing step 5 from the setter? + +MM: That’s correct. And since `Error.captureStackTrace` also adds the data property to any arbitrary object, there is kind of a thematic fit there, so I would prefer to keep step 5. But I just wanted to offer explicitly: if all of the compatibility pain for V8 comes down to the presence or absence of step 5, I would be perfectly happy to toss step 5. + +SYG: Sure. That sounds—I think if we find that the thing is not compatible, then we can’t do it. I don’t think V8 has any—to the current spec text as written, we have no objections on the intent or what it is trying to do. If it turns out something is not compatible, we have to go from there. Maybe it’s as simple as removing step 5, maybe it’s something else. Unfortunately I’m not sure how to figure it out without trying it and seeing. + +MM: That’s wonderful. I’m very happy. Thank you. + +CDA: We’re almost out of time. I’d like to get to KG’s topic, if possible. + +KG: Yeah. This is just pointing out that DOMExceptions only sometimes have stacks in some browsers, and because DOMExceptions are errors and they have the `[[ErrorData]]` internal slot, this would be a change such that if you do `new DOMException` in Chrome, it would have a stack. So I’m sure that’s fine. It already has a stack in Firefox, and who is manually constructing DOMExceptions anyway? But I'm pointing out this change would take place and would need tests and such.
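+
+(A sketch of the situation KG is pointing out; behavior varies by browser today. Since DOMException instances have [[ErrorData]], the proposed getter would apply to them.)
+
+```js
+const ex = new DOMException("boom", "AbortError");
+// Today: a string in Firefox, undefined in some other browsers.
+// Under this proposal: an implementation-defined string (possibly empty).
+console.log(typeof ex.stack);
+```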
+ +JHD: And we discussed this in Matrix. I’m happy to write the web platform tests during Stage 2.7 and ensure it returns a string, without checking its contents at all. If a browser that is currently returning undefined wants to return an empty string, go nuts. + +SYG: The way stacks are attached to DOMExceptions in Chrome, it’s kind of done at creation time of the DOMException, in C++, depending on how it’s created. I have even less of a handle on how DOMExceptions interact with this. Like, do all DOMExceptions then get a stack trace, or does it sometimes return an empty string to begin with, and then—yeah, I don’t know. We should work that out. + +JHD: And that sounds like something where, clearly as part of 2.7, I need to make an HTML PR, and that’s where I think we would work this stuff out; yes? + +SYG: I would, yeah, for 2.7, because I think that is a big part of this—because DOMExceptions are real errors now, yeah, it is important to figure out what the other browsers do in this case as well, and then on the HTML side, yeah, get reviews and get agreement on what we do for DOMExceptions. + +JHD: Okay. So I’ll note that as a requirement before advancing to 2.7. + +???: All right. + +JHD: And repeat my request for Stage 1. Seeing no objections, I will now request Stage 2. + +CDA: You had earlier support for Stage 1 and 2, and +1 from Michael for Stage 2, as well as Chip for 1 or 2. That’s a good amount of support. Are there any objections to Stage 2? Sounds like you have Stage 2. Reviewers—looking for two TC39 heroes. + +JHD: I see Nicolo; anyone else want to review the stack accessor proposal? + +MM: I’m a champion and thus not a candidate. + +JHD: Michael, okay. I hear MF and NRO. + +### Conclusion + +* Consensus for Stage 2 +* MF and NRO are spec reviewers +* HTML integration PR must be directionally approved, and possibly merged, prior to stage 2.7 (and certainly prior to stage 3) + +## Intl Locale Info API Update in Stage 3 + +Presenter: Shane Carr (SFC) + +* [proposal](https://github.com/tc39/proposal-intl-locale-info) +* [slides](https://docs.google.com/presentation/d/14ColNEWDFlAnPGW6GSPSk6gbcdTmSy4pYuYXOwDlZX8) + +SFC: I’m presenting this on behalf of my colleague FYT, who is out—he is feeling under the weather this week and asked me if I could present these slides on his behalf, so I’m happy to do that. So I’ll go ahead and walk through. + +SFC: I’m first going to give you a refresher of what this proposal is, and then we’re going to seek consensus on normative PR number 83. First of all, what is the expose-locale-information proposal? It’s a proposal to add additional information to the locale object, presenting information that’s derived purely from CLDR regarding locale-specific preferences. This is not the same as the user preferences proposal that I know has, you know, raised concerns previously. This is just purely deriving information from CLDR based on a locale that has been passed into the API. It includes week data, which is one of the main motivators for this, which allows users and developers to do things like create a calendar widget with correct, appropriate information for the first day of the week and the weekend days. + +SFC: So here is the history of the proposal. It’s been coming for a while. It started at Stage 1 in 2020 and advanced to Stage 2 and then Stage 3 in 2021 fairly quickly. It’s been at Stage 3 for a while.
Unfortunately, it’s gotten held up a couple of times with some of these fundamental issues that ideally we could have found earlier in the process, but we found them. The biggest change was getters to functions. I think previously we had, like, a getter on the locale that was, like, `.firstDayOfWeek`, and that would actually run code; and then a decision that was made in Temporal, which we adopted in this proposal, was: no, we would actually have getters that start with the word “get” and have them be functions. That was a pretty big change we made to the proposal at Stage 3. And then we had questions about, well, how does it interact with numbers and strings and things like that. So it’s not necessarily a great example of what should be happening at Stage 3, but it is what it is. + +SFC: And here is the latest change we want to make at Stage 3 to this proposal. So I’ve been doing some research about week numbers. I think some people in this room have been subjects of this research. I’ve also done research, amongst other researchers, trying my best to get non-techy users to give their expectations about week numbers. Basically what I found pretty consistently is I have yet to find any regular person who has a specific expectation for how week numbering should happen in any system other than the ISO week numbering system. So currently, the proposal forwards data from CLDR about parameters used to determine week numbers based on the first day of the week, and that would result in different week numbers in America versus Europe versus Asia versus the Middle East. The only thing I have found evidence for is that there are users—people in Europe, including in this room—who talk about week numbers, like what is week 15 of the year, and that number is derived from a formula that ISO 8601 specifies; and when that user switches their locale, if they switch it from, like, a European locale to the US, all of a sudden the week numbers are off by one, and that is quite confusing and sometimes misleading to users and actually causes real bugs. And that seems more compelling than, you know, the lack of any user that I’ve been able to find who has a different expectation for what the week numbering should be. + +SFC: So the proposal that has been adopted by CLDR, which we’d like to bring forward into the ECMA-402 proposal, is that the week numbers will always be derived using the ISO 8601 algorithm, which is: you look at the first week of the year that has a Thursday in it, and that is week 1 of the year. And that will be the same algorithm regardless of the first day of the week, regardless of your locale—it would always be Thursday that determines the first week of the year. As a result, that involves removing the minimalDays getter, because that was the thing used to differentiate week numbering locale by locale. So we no longer need minimal days, and that’s what issue 86 is for. And FYT listed some remaining open issues, and there are some things that ABL filed—Anba is the one who implements things in Firefox—and those are totally resolved. This is not ready for Stage 4 yet, but this is the last major, like, actual shape change that I expect to see. There will probably be more. And I say that now, and I’ll probably come back next meeting requesting yet another one. We’re getting closer and closer each time we do this, each time we reduce scope, to finishing this proposal.
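+
+(For reference, a minimal sketch of the ISO 8601 week numbering rule SFC describes: week 1 is the first week of the year containing a Thursday. This is an illustration, not the proposal's spec text.)
+
+```js
+function isoWeekNumber(date) {
+  const d = new Date(Date.UTC(date.getFullYear(), date.getMonth(), date.getDate()));
+  const day = d.getUTCDay() || 7;         // ISO day of week: Mon=1 ... Sun=7
+  d.setUTCDate(d.getUTCDate() + 4 - day); // move to the Thursday of this week
+  const yearStart = Date.UTC(d.getUTCFullYear(), 0, 1);
+  return Math.ceil(((d - yearStart) / 86400000 + 1) / 7);
+}
+isoWeekNumber(new Date(2026, 3, 6)); // 15, the same result in every locale
+```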
+ +SFC: Implementation status: it’s shipped, which means that as part of adopting this change, browsers who already shipped the minimalDays getter will have to stop shipping it. I guess they could keep shipping it, but it’s no longer part of the standard, and probably they should not be shipping it anymore. And there’s also a polyfill. So the request is for consensus on PR 99, and I’m going to open it up. We discussed this at TG2 twice, at both the January meeting and the February meeting. And there was pretty strong consensus amongst the members of TG2 that we wanted to move forward with this. You can see the change here—it’s all deletions. You’re deleting a bunch of lines and text. Everything is deletions. So that’s the pull request. And happy to entertain anyone who is in the queue. + +DLM: First of all, I support the normative change here. I just wanted to reiterate the importance of the lack of fallback behavior, which is issue 76 you mentioned. This is blocking our implementation. We see it as pretty important for interoperability between implementations, and we’d really like to see this issue resolved, as well as the other issues, before this comes back for stage advancement. + +SFC: Definitely noted. + +CDA: There is nothing else on the queue. + +SFC: Is there no one here who wants to nitpick about week numbers, or can I just ask for consensus to move forward with PR 99? I have a hand. Can I go ahead and put MM on the queue? + +MM: Would the relevant part of the CLDR tables be called “WeekMap”? + +SFC: I love it. I love it. I love it. I think it’s called week data, but I love WeekMap. That’s a much better name. Thank you very much for that. + +CDA: Still nothing on the queue. + +SFC: Okay. I’ll ask for consensus one more time for PR 99. I’m going to say that we have consensus for PR 99. + +CDA: Okay. Do we have any objections? Sounds like you’re good. + +SFC: Thank you. I think that’s all I had for today, so we get some time back for the timebox. + +### Conclusion + +* Reached consensus on PR 99 +* Need to resolve remaining open issues such as issue 76 before the proposal can advance + +## Stabilize integrity traits status update + +Presenter: Mark Miller (MM) + +* [proposal](https://github.com/tc39/proposal-stabilize) +* [slides](https://github.com/tc39/proposal-stabilize/blob/main/stabilize-talks/stabilize-stage1-status-update.pdf) + +MM: So this is a status update for stabilize, and I added the subtitle “hopes and dreams” because the nature of the status update is where we hope we can take this proposal—this set of issues it is about—but we don’t know yet if it’s possible. So I just wanted to explain where we’d like to go and hopefully get feedback from people here, both about whether it is possible and, if it is, whether this direction is attractive and how people feel about this direction. + +MM: So integrity traits are not something that everyone knows well, so a little bit of recap. We’ve got right now three integrity traits in the language, usually referred to as integrity levels because they’re, you know, a linear hierarchy: frozen, sealed, and non-extensible. On the left we have the verbs, in the middle we have the states, and on the right we have the predicates. And the thing that I’m taking as the defining characteristic of something being an integrity trait is that it’s a monotonic one-way switch—once frozen, always frozen. It’s a strengthening of object invariants.
When an object is frozen, you have more guarantees that enable you to do higher-integrity programming with less effort. It implicitly punches through proxies—or rather, the integrity trait status is transparent through proxies. If the target has integrity trait X, then the proxy does, and vice versa. If the target is frozen, the proxy is frozen. If you try to freeze a proxy and the proxy allows the operation, then both proxy and target become frozen. + +MM: Okay. In addition, there’s the crucial distinction between explicit versus emergent. There are two proxy traps per explicit integrity trait, so there’s a preventExtensions and an isExtensible proxy trap. There is no freeze or seal proxy trap, because those are simply a pattern of other guarantees that either hold or do not. They’re implied. + +MM: So without going through all the detail I went through last time: I went through this taxonomy of all of the separate atomic, unbundled integrity traits, each of which addresses some particular problem that can be addressed by integrity traits and that we believe is motivated. And the important thing about this taxonomy of unbundled traits is that it allows us to go through them and see what we’re talking about—what useful guarantee each of these provides. + +MM: So fixed mitigates the return-override mistake—the use of that mechanism in classes to stamp objects with private properties. If the object has been made fixed, if it has the fixed integrity trait, then the idea would be that the use of a subclass constructor to stamp a private property on it would instead be rejected with an error. This is, in particular, motivated at Agoric for virtualization purposes—and there’s a lot I can say about that if people are interested—and it’s in the shared structs working group, because they want a fixed-shape implementation of structs, and that conflicts with the way V8 implements the stamping of private properties, private fields. So fixed would address that—they would make all structs fixed, so they could all benefit from a fixed-shape implementation. + +MM: So after the last presentation, we got this issue filed by Shu, which is: V8 prefers normatively changing non-extensible to imply fixed, and we are—all the champions are—overjoyed with this idea. This would separate it out from this proposal’s new integrity traits and would simply bundle it into non-extensible. V8, to my understanding, is already doing measurements to find out if it’s feasible, and is already getting some small number of negative results. We’re hoping that the judgment is that we can still do that. Shu, let me just break process and ask you: do you have any information updates from the V8 measurements about whether you still hold out hope that we can do that? + +SYG: I pasted the link in Matrix to the use counter. I can’t believe it is not zero. It is at e-7 or something. So it is still more than I would like, but we don’t really have a hard rule for, like, how small something has to be. I think this is few enough accesses that it would be worth trying still—pending, you know, further data. Unfortunately, if you look at the slope of the graph, it has not flattened out yet. But this might be an artifact of just how the visualization is and how we’re getting data, because it’s been, like, a little bit more than—a little bit less than a month since this hit stable, so we’re still getting more data as it hits a bigger population. + +MM: Okay, thank you.
That’s very clarifying, and it means that there’s still hope, which is the most I was hoping for at this stage. + +MM: Okay. Next is overridable, to mitigate the return override mistake—I’m sorry, to mitigate the assignment override mistake—which is well illustrated by this sample code. If some prior piece of code freezes `Object.prototype`, then there are many, many old libraries, especially those written before classes, that do things like use a function to create what is effectively a class, Point, and then assign to the toString property of the prototype in order to override the toString method. If `Object.prototype` has been naively frozen, then this will throw. This has turned out to be the biggest deterrent for high-integrity programming in JavaScript—the biggest deterrent for freezing all of the intrinsics. If there were an overridable integrity trait, then by making the prototype objects in particular, but all of the primordial intrinsics and others, overridable, the deterrent would go away and these assignments would work. And there’s been controversy about whether to call this a mistake, and I just want to point out that in all of the years that we’ve been going around this, we have found code that breaks if this is fixed globally for the language, which I’ll get into in a moment—but that is for an accidental reason. We have never encountered code that makes use of this aspect of the language on purpose. Let that sink in. + +MM: So after the last presentation, we got this issue filed by Justin Ridgewell clarifying what the history was on the prior attempt. And there was only one breakage observed, and it was very narrow. It was in an old version of the lodash library, and even though it’s already fixed in modern lodash, we can never erase old versions from the web. And it had to do with the toString and toStringTag behavior specifically on TypedArrays, even though it could apply in theory to other toString results that depend on the toStringTag behavior, among the primordials. And, JRL, if you’re within earshot, please correct anything I’m getting wrong here. But what JRL proposes is that it’s still feasible to fix this globally for the language, by having the global fix make a cutout specifically for the `toString` and `toStringTag` properties that cause the old version of lodash to go astray. And the cutout that JRL proposes, and the similar but somewhat different cutout that RGN has proposed, both have all the safety properties we need. It’s just a little bit of ugliness, but it would let us fix this globally in the language. We would love that. We would be overjoyed if that could happen rather than addressing it through an integrity trait. + +MM: Then there’s non-trapping, which addresses another re-entrancy hazard problem. This is re-entrancy through proxies. When you have a proxy that looks like a plain data object and survives all the tests that you might think to apply to it as to whether it is a plain data object, it might still be a proxy that does things synchronously during the handler traps. So you would like to be able to write code like this. And I want to recall Records and Tuples—there is a presentation on that coming up later this meeting. With records and tuples, you could test whether something was a record or a tuple.
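+
+(A sketch of the pattern MM describes; `Point` here is illustrative, not taken from the slides.)
+
+```js
+"use strict";
+Object.freeze(Object.prototype); // some earlier code hardens the intrinsics
+
+function Point(x, y) { this.x = x; this.y = y; }
+// Pre-class-syntax libraries override inherited methods by assignment.
+// Because toString on the frozen Object.prototype is non-writable, this
+// assignment throws a TypeError in strict mode (and silently no-ops in
+// sloppy mode) instead of creating the own method.
+Point.prototype.toString = function () { return "<" + this.x + "," + this.y + ">"; };
+```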
If it was a record or a tuple, you knew it had no behavior—it was a plain data object, it could not be a proxy. So it would be very nice if we could have tested that in an early validation check, such that once you’ve passed the input validation check, you can use the objects validated to be plain data inside your function while invariants are suspended, knowing that you’re not going to be turning over control synchronously to any foreign code. We don’t have records and tuples, so we’d like to be able to create a predicate that we call record-like. But because you can’t write a predicate today that will verify something is not a proxy, you can’t actually write a predicate that protects us from re-entrancy. + +MM: The idea is that by applying the new integrity trait, stabilize or non-trapping, and then having the record-like predicate check that something is stable, that verifies that even if it is a proxy, that proxy will never trap to its handler. And thereby, even if it’s a proxy, you are safe against re-entrancy hazards. And we’ve done that without making it observable whether it’s a proxy or not, so this approach to the re-entrancy hazard of handler traps does not violate proxy transparency. It just makes the existence of proxies that claim to be stable harmless, because we now have a guarantee that they cannot re-enter. + +MM: At Agoric, we actually have a shim—I believe it’s a fairly complete shim—for the non-trapping integrity trait itself. It was kind of a surprise, once we thought about it, that it was possible to shim this faithfully and safely within the language. But it has one of these big have-to-run-first burdens, which is: it does it by replacing the global proxy constructor, and it only has the safety properties it claims if it has replaced it globally and the adversary cannot recover a normal proxy constructor by other means, such as by creating another realm. So it’s quite burdensome to maintain the safety. But it is possible to shim it, and not only have we shimmed it, we now have a bunch of code that makes use of the shim as it was intended, for the safety that we intend, and it’s been an interesting learning experience to see how to use it for the safety we intend and how much disruption there is to other code that’s concerned with these properties. For code that is not concerned with this safety property, there should be no burden at all, because otherwise there is no compatibility break. + +MM: Okay. Finally, there’s the unbundling of non-extensible into permanent inheritance—which both the browser WindowProxy and `Object.prototype` have without being non-extensible: through magic, they refuse to have their prototype be changed—and then, with that taken care of by one side of the unbundling, the remaining side of non-extensible would be no-new-properties. So you can imagine two separate explicit integrity traits such that preventExtensions, or non-extensible, becomes emergent from those new explicit ones.
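+
+(A sketch of the record-like predicate described above, assuming a hypothetical `Reflect.isStable` predicate from this proposal.)
+
+```js
+function isRecordLike(obj) {
+  if (Object(obj) !== obj) return false;
+  if (!Reflect.isStable(obj)) return false; // hypothetical: implies frozen and non-trapping
+  if (Object.getPrototypeOf(obj) !== Object.prototype) return false;
+  for (const key of Reflect.ownKeys(obj)) {
+    const desc = Object.getOwnPropertyDescriptor(obj, key);
+    if (!("value" in desc)) return false; // accessor properties could run code
+  }
+  return true;
+}
+```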
+ +MM: Okay, so having recapped all of that, what we’re hoping for is, first—all of the champions of stabilize and SYG, and SYG, I know you’re on the call, so please correct me if I’m mischaracterizing anything—we all are of the opinion that even though there is a nice orthogonality, from a purist point of view, in unbundling non-extensible, and it allows us to retroactively rationalize this behavior of WindowProxy and `Object.prototype`, it’s just not worth it practically. So we hope not to unbundle non-extensible: leave those two properties bundled into non-extensible, non-extensible goes back to being explicit, and the result is that we cannot faithfully emulate the browser global object or the `Object.prototype` object. We’re willing to sacrifice that faithful emulation, even though it’s a compromise of virtualization, because practically nobody will care. + +MM: So first of all, Shu, did I characterize what we agree on there correctly? + +SYG: Yes. We also prefer that non-extensibility remain bundled. + +MM: Right. Thank you. And the next one is the one that we’ve already mentioned, that SYG expressed in that filed issue and that we talked about—with the current usage counters, we still hope to do it—which is to also bundle fixed into non-extensible, so there is no separate fixed integrity trait. Then overridable: we’re hoping that JRL’s strategy, with either JRL’s carve-out or RGN’s alternate carve-out, will enable us to go back and fix it globally for the language without breaking the one case we know about in lodash, so that goes away. Now all we’re left with is non-trapping, so non-trapping would end up just getting bundled into stable, and now stable becomes explicit. So this is the picture we’re hoping for, having taken care of the others by either choosing not to unbundle or by dealing with them by other means. + +MM: And I want to acknowledge a political reality that is just there, which is somewhat unpleasant for us to realize as the champions of the proposal: if our hoped-for resolution happens, then stable, addressing the re-entrancy, is addressing something much narrower than the original overall stabilize proposal. And, therefore, there’s less wind in its sails. We understand that, and we understand it’s even more of an uphill climb to advance and get consensus through the stages, but it’s the right thing to do, so we’re taking that hit and hoping the others can move forward by themselves. + +MM: I want to take a moment for a little bit of historical context. In 2010, 15 years ago, BE did this presentation, “Proxies are Awesome”, with input from myself and Paul, in which he presented what were our plans at the time for a non-trapping behavior of proxies. So just flipping through the slides to the final transition—going from trapping to fixed—notice that in the fixed state, the handler, in the blue circle above, gets dropped, because the proxy will no longer trap to the handler, so there’s not even any reason to continue to hold on to it. So the current non-trapping is very much in line with our original intention here, although certainly many, many of the details have changed over the 15 years since we first talked about non-trapping. And one reason I brought that up is that if we get our hoped-for picture, then we could also go back to the original name and call the non-trapping integrity trait fixed, because it’s nice and short, and something that’s fixed is not broken. + +MM: And at this point, I will take questions.
And I’m sorry—at this point I will stop recording, and RPR, please stop recording as well. + +CDA: Okay, Justin. + +JRL: Can you go back to the slide that has my comments about the override mistake? + +MM: Yes. + +JRL: Yes, this one. So to clarify something you said during your presentation: you mentioned both toStringTag and toString. + +MM: Yeah. + +JRL: So this carveout I’m trying to highlight here doesn’t require any changes to toStringTag. It only requires a change to `toString`, and it requires the change to create new data properties when overriding. The change to toString here is specifically to support an old version of lodash that checks for these explicit classes, and if the toString method returns the appropriate result, then it will not use a bad implementation that it has directly written into this old code base. If we patch `toString`, that means it will continue to use a good version of `toStringTag`, and that will hopefully fix everything. So the two changes we need here are: one, if you do the override mistake, it creates a brand new data property, with configurable true, on your own object; and two, when lodash specifically tries to do this, it will check to see whether it should use the good `toStringTag` or the bad `toStringTag`, and it does this by checking `Object.prototype.toString`. And if we can trick lodash into doing the good thing, hopefully everything is fixed. + +MM: Great, thank you very, very much for that clarification. I’m very glad you were present for all this and that we’re able to clarify. Are the classes in question that lodash actually trips over—the classes mentioned on this slide—only the TypedArrays? + +JRL: Sorry—what about TypedArrays? + +MM: So my impression was that lodash was only actually tripping on this issue with regard to TypedArrays, even though it applies in theory, as an observable behavioral change, to any intrinsic class for which the toString behavior is sensitive to `toStringTag`. + +JRL: Yes, this is the other fun part. There are lots of methods that use the broken toStringTag implementation, but there are only two that are broken: `isTypedArray` and `isArrayBuffer`. The thing here is that if we trick lodash into using the correct toStringTag implementation, then it will fix `isTypedArray` and `isArrayBuffer`. But it’s not actually anything to do with the data values that are returned by toString when it’s called against a TypedArray or an ArrayBuffer. The classes that I highlight here—lodash checks each one of these to make sure that it returns the correct result when called against, say, a DataView or an ArrayBuffer class instance. + +MM: Great. And, all right, let me just get your opinion: are you in favor of us doing this exactly as you lay out, globally for the language? + +JRL: Yes. I think the second carve-out here is totally appropriate as a back-compat thing we can just do, and the first change here is fixing the override mistake, and I think those are both appropriate. + +CDA: All right. I just want to note we have less than one minute technically on this topic, and there’s a big queue. First up, DE. + +DE: Do we want to extend the timebox by 20 minutes?
+ +MM: I’d appreciate extending the timebox; on the other hand, I’m not asking for stage advancement, and I don’t want to crowd out things that are asking for stage advancement. We could continue this later if that’s more appropriate. + +DE: How is scheduling going? + +CDA: So the issue is—it’s not really an issue—we have time on paper. The issue is that we are now full through the end of today. And we were full—actually, no, we have 45 minutes available tomorrow before lunch. So, yes, never mind. 20 minutes? + +MM: That would be great. + +CDA: Okay, let’s just go to the top of the hour and then we’ll do the mid-afternoon break. Does that sound good? + +MM: Sounds good to me. + +CDA: All right, DE. + +DE: Okay. How do we want to, you know, check if this is web compatible and roll it out in browsers? I think we’ve had enough kind of failed experiments where we ask browsers to just ship something that I don’t know if we have that kind of interest in this one. But I would also be interested in hearing from browsers. I’m not really sure how to phrase this as a use counter or something. + +MM: Yeah, that’s a great question, because the browser pays the cost—and I want to acknowledge, the costs are substantial to do one of these use counter exercises. If no browser is willing to pay that cost and try an experiment, the rest of us are helpless to advance this, because we simply can’t advance it without that data. + +DE: Do browsers have any thoughts here? + +SYG: I’m staring at the thing. I’m not sure how to write a use counter to test whether the change will be compat or not. What would you test? + +NRO: A counter in `Object.prototype.toString` that checks for the types of classes listed here, whether you have a custom `Symbol.toStringTag` installed. + +JRL: Sorry, I’m going to butt in here because I remember the details from the lodash thing. If we—oh, my God, now I don’t even remember. I think Mozilla implemented an error tracker that tried to see if there was a change. + +DE: Oh, Mozilla did some work in this area? + +JRL: Yeah, for the original issue, but this was six years ago. Someone implemented a use counter. We found the pages because of this—what triggered it was that the override threw an error in the page that was broken—and I don’t know how Mozilla did that. + +SYG: If we can convert it to telemetry, to a use counter—if it’s in `Object.prototype.toString`, that’s probably fine. + +MM: SYG, I want to express my deep appreciation for your willingness to do that. Thank you. + +KG: I just wanted to express support for fixing the override mistake if we possibly can. That would make a lot of things much better. And I wanted to hear from browsers if they had any interest in this, because it sounds like they’re at least willing to explore some of these changes. I don’t know—the toString change would be a separate change from, like, outright fixing the override mistake. + +SYG: Like, wait—so my understanding is, number 2 there, the only reason that exists is to work around the biggest known user of the override mistake—the biggest known, like, dependency on the override mistake—so we could do number 1, which is fixing the override mistake, which should be compatible because it changes a throwing behavior to a non-throwing behavior. + +KG: That’s right. + +SYG: Is that correct? Okay. + +KG: Yeah. So if we have some appetite for that, I would be very excited. There’s lots of things that would be better if we can do that.
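+
+(A sketch of change number 1 as discussed, under the assumption of the proposed semantics; nothing here is standardized.)
+
+```js
+"use strict";
+const proto = Object.freeze({ toString() { return "base"; } });
+const obj = Object.create(proto);
+// Today this throws a TypeError in strict mode, because toString is
+// inherited and non-writable. Under the proposed fix it would instead
+// create an own, configurable data property on obj.
+obj.toString = function () { return "derived"; };
+```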
+ +NRO: Yes, I’m really pleased to see how the proposal is becoming smaller and smaller, so excellent work, Mark, and— + +MM: I’m sorry, I couldn’t quite hear. + +NRO: I’m really happy with how the proposal is becoming simpler and simpler. The first time you presented it, it was difficult to keep track of the parts, so this is a very welcome change. + +MM: You’re welcome. I appreciate it. + +KG: This is with regards to your point about the sort of political reality of it being less motivated because we’re doing less stuff. I think the non-trapping check is sufficiently motivated on its own. Lots of code needs to care about whether there is any possibility of something triggering user code. That’s one of the main things you need to care about. And being able to actually assure that is, I think, a sufficient goal on its own for the proposal. + +MM: Thank you. + +WH: “Fixed” prevents you from attaching extra properties to objects, but if an object is constructible, then you can subclass it and do it that way. Have you looked at anything which makes objects non-subclassable? + +MM: The existing precedent for this is that the browser WindowProxy is effectively fixed as a special case, and it does not do it by preventing subclassing. It does it by rejecting the addition of private fields. So even though we can’t retroactively rationalize the browser WindowProxy, because we’re not unbundling preventExtensions, our inclination is still to follow the precedent of WindowProxy, just for uniformity and the fact that it’s adequate for what we need. The first thing I thought of was actually along those lines—to prevent the return override from returning a fixed object—but I think the precedent is actually a better place to put the error check anyway. + +WH: I’m not talking about the return override. I’m talking about just defining regular subclasses. + +MM: Oh. No, that’s not something I’ve ever thought about. That’s new—please talk that through, how this— + +WH: I’m just curious if you’ve explored ways of creating objects with a fixed shape, which cannot be subclassed. + +MM: No, I have not. + +SYG: If I can interject here, Waldemar: the way the structs proposal deals with that is something that we’re calling “one-shot initialization”, where you can declare a struct to be a subclass of another struct, and when the instance is created, it immediately gets all the declared fields before it ever escapes back to user code. There is no way to observe the intermediate state. So it’s unclear whether that is an integrity trait or level that can be on an object. That feels more like a property of the class or the struct declaration than of instances. + +WH: Yeah, clearly subclassing is less harmful than the return override because, when you construct it, you know that you’re constructing the derived class. But I’m just wondering if there was any exploration of making constructible objects final so that they cannot be subclassed at all. + +MM: Yeah, I’ve never thought about that. + +WH: Okay. + +DE: I’m just trying to understand what this feature is concretely. You’re saying this is stronger than frozen, and in particular, a frozen object might be a proxy that, although everything is there in its target, still has some side-effecting code when traps are hit. + +MM: That’s right.
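+
+(A sketch of the hazard DE is describing: a proxy can pass `Object.isFrozen` while still running handler code on every access.)
+
+```js
+const target = Object.freeze({ x: 1 });
+const proxy = new Proxy(target, {
+  get(t, key, receiver) {
+    console.log("user code runs here"); // arbitrary re-entrancy happens at this point
+    return Reflect.get(t, key, receiver);
+  },
+});
+Object.isFrozen(proxy); // true: frozenness is transparent through the proxy...
+proxy.x;                // ...but every read still traps to the handler
+```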
+ +DE: And this fixed something is just like a plain old data object, and you have an API of taking an object and getting out a version of it, or modifying it in place, such that it is in this fixed mode—or what is the actual API? + +MM: So fixed would imply frozen, so it has to be at least frozen for it to be fixed. The addition of it being fixed is that, essentially, a proxy on a fixed object is itself fixed, and the behavior of the proxy is identical to its target in all ways except that it has a distinct object identity. + +DE: Okay, so when you have a frozen plain old object, it just—it already is fixed, or it isn’t? + +MM: No, it’s too late to change the behavior of frozen. + +DE: How do you create a fixed object? + +MM: It would be by adding a new verb—just like we have the existing verbs on object and reflect, `Object.freeze`, `Object.seal`, and `Object.preventExtensions`, there would be an `Object.fix`, or whatever the word is—stabilize—that would be a new verb. Because it’s a new explicit integrity trait, there would also be a `Reflect.fix`, just like there is currently a `Reflect.preventExtensions`, and the result would be that it would cause the object to be frozen. In other words, because it implies freezing, it would first try to do all the freezing. If that succeeds, then it would additionally tag the object as being fixed. And then the proxy implementation, when it sees that its target is fixed, bypasses the handler and simply does the default behavior—as if that handler trap had been omitted—directly applied to the target. + +DE: Okay, so `Object.fix` on a proxy will then just forward to the target and there will be no proxy trap? + +MM: If you do the verb on a proxy that has a target that is not yet fixed, then the way to understand it is by analogy to what happens if you do a `Reflect.preventExtensions` on a proxy whose target is not yet non-extensible. It will trap to the handler’s preventExtensions trap, and that preventExtensions trap can throw, refusing to make the object non-extensible. + +MM: Likewise, if you do a fix operation on a proxy whose target is not yet fixed, then it traps to the fix trap on the handler, which would be a new trap, and there’s a subtlety there—I forgot to mention the subtlety. The subtlety is that if you omit the preventExtensions trap from a handler, then the default behavior is to do the preventExtensions, not to refuse to do it. Because we’re introducing this into a language that has a lot of installed base prior to this feature, the way to do this—which was anticipated in discussion and then turned out to be a big deal in actual use that we found—is that if you omit the trap from the handler object, the default behavior is to refuse rather than to proceed. + +DE: Sorry, you’re talking about the fix trap or the preventExtensions trap? + +MM: I’m talking about the fix trap. The fix trap would have the opposite default sense from preventExtensions, but by providing the trap explicitly, you can get it to either accept or refuse explicitly. + +DE: Do you have a use case in mind where a proxy would want to refuse to be fixed? + +MM: Yes, yes. The big one is the legacy case, which, like I said, is something we immediately encountered when we started to use this in practice, even in our own code, in places we didn’t anticipate.
There turned out to be a lot of use of proxies for which the proxy was simply implementing trap behavior for purposes of doing essentially a little behavioral DSL, and it didn’t actually care what the target was at all. For those proxies, the target was just an empty plain object. It could be frozen or not—doesn’t matter. So somebody freezes the proxy, or freezes the object, because the handler doesn’t care; the handler only has specialized traps, usually for property lookup or method invocation—like I said, for a little behavioral DSL. But if you expose that proxy and then somebody fixes it, and the default behavior for old code is that the proxy gets fixed when you try to fix it, then the handler behavior that implements the DSL is turned off. So you cannot share, among mutually suspicious parties, a proxy that implements that DSL behavior without doing something weird to protect it—basically, without adding an explicit trap handler to refuse to be fixed. And old code doesn’t know to refuse. + +DE: So I’m wondering, what if instead of a trap, it were just like a Boolean? Like, if you put `fixable: true` in the options bag, then it adopts the fixable behavior, where fixing it proceeds. If you don’t, it will refuse. Do we need more behavior aside from refusing and not refusing? + +MM: I mean, with regard to what we actually need, I don’t know. We certainly have not written traps to do anything other than proceed or refuse. So I don’t know what we need. But on the asymmetry—the asymmetry and non-uniformity with the preventExtensions trap if you do provide an explicit trap—I’d rather follow as much of the precedent as we can, but still acknowledge that having the opposite defaulting behavior to deal with legacy is, I think, sort of the minimal non-uniformity that takes care of the issue. + +DE: Okay. So, yeah, overall this proposal makes sense. It seems reasonable. I like the idea of separating out the override mistake part and separating out the frozen-objects-always-refusing-private-fields part; if we find a way to do those, that’s great. And then this can serve the very small—it’s really a plain-old-data, fixed, frozen object predicate, which is—yeah. + +SYG: I want to recap my understanding of next steps here. So my understanding is that the linchpin, partially—this slide—is the web compat question of whether we can fix the override mistake globally in the language. But I want to recount the three open compat questions, and please correct me if there are more. Number one: can we change non-extensible to include no private field stamping? This is in progress; we have a use counter pointing to yes. Number 2: can we change toString to work around that old lodash version? This is unclear. I hope somebody takes an action item to try to craft a use counter and communicate that to me, and we can maybe check. That’s number 2 here on this slide. Number 3 is something we’ve been discussing in Matrix: actually changing the behavior of the override mistake. It’s changing from a throwing to a non-throwing behavior in strict mode only; in sloppy mode, it would be changing a silent no-op to a different behavior. That is much more risky, and I’m not sure how to craft a use counter for it at all. A counter for assignment to a property that has a non-writable same-named thing up its prototype chain doesn’t tell me whether the application is broken or not.
It’s a silent no-op today. I have no idea, if you change it to respect that assignment, whether the page changes. That seems just extremely hard, if not impossible, to figure out without shipping. Even if number 2 pans out, how do we hope to change the sloppy mode behavior, and is that a deal breaker for the current taxonomy that MM has—which is nice and simple? + +SYG: Is it a deal breaker? + +DE: I have a clarification first. You were saying for number 3 that you would be able to figure it out, but my expectation would be that you’re going to get hits on it. + +SYG: That's what I am saying. I will get hits, but not whether the hits mean the page breaks. That’s what you want. + +DE: Is there a way of telling which kind—the name of the property involved—and in this case, whether we trigger this case on `Symbol.toStringTag`, or on some other cases? + +SYG: They would have to be hard-coded. And even then, I am not sure, because that’s an assignment path, which is usually hot. But the short answer, the easy answer, is basically no. Use counters are a single bit. We don’t track any other information; the URL information you see on the public site is cross-referenced with HTTP Archive to see which pages have the use counter. It does not include any additional information about anything. You can’t attach any other information. It’s just a bit. + +MM: So with regard to your question about whether it is a deal breaker: for everyone invested in HardenedJS—Agoric and all the other companies using it—one of the restrictions of HardenedJS is that it is only strict code. One of the things we do in the shim on initialization, and that XS does by other means, is simply, completely throw out sloppy mode. Now, doing something very bad and weird for sloppy mode—it’s extremely weird. It’s hard to imagine what the corresponding `Reflect.set` behavior would be, because `Reflect.set` doesn’t know whether it’s called by strict or sloppy code. I would find it very, very bizarre, but if that’s the price of fixing the assignment override mistake globally for the language for strict code, I would pay the price. I can’t speak for everybody else. + +SYG: Yeah. You’re not asking for stage advancement, but this is something to get consensus on, because I think it’s a pretty hard, taller task. It’s a bigger ask for the browsers to check if it’s compatible for sloppy mode, because I don’t know a way to build assurance ahead of time. Perhaps one of the folks from the other browsers has an idea. That’s my biggest worry right now. + +MM: Justin, if you are still on the call, do you have any thoughts with regard to Shu’s question? + +JRL: I do not know how to craft this so we can automatically detect it. + +MM: Okay. + +JHD: Yeah. So the answer to these questions should probably be given offline, but things that have occurred to me: if you stabilize a promise that is pending, can it ever resolve? If not, when it finally attempts to resolve or reject, where does the error go? + +MM: Fixed does not by itself mean that it is safe from re-entrancy, or that it does not have mutable internal slots. The key thing in the code that is protected from re-entrancy is: the object was stabilized or fixed, and then you apply an isRecordLike predicate to it. The isRecordLike predicate iterates the properties and makes sure all of them are data properties, and checks that the object inherits from `Object.prototype`.
JHD: Okay. And then the last thing I had on there: regular expressions, which I believe will throw if you try to do anything with them, because everything tries to set `lastIndex`. And possibly a lot of array operations, which try to set `length`. This may already be a problem for freezing. But it’s probably a nice thing to audit all the built-ins and confirm that the results of stabilizing or fixing them, or whatever, are expected.

MM: Okay. I will take that under advisement. I don’t have an immediate reaction.

JHD: Thank you.

### Speaker's Summary of Key Points

* We agree that unbundling non-extensible into more primitive integrity traits is not worth the cost.
* We hope that the “fixed” integrity trait can be bundled into non-extensible. Google has use counters going which should help us decide if we can.
* We hope that the “overridable” integrity trait is not needed and we can still fix the override mistake globally for the language, with safe narrow carve-outs for the legacy lodash case. We would need a major browser to measure to decide. Google may do so (yay!)
* These use counters may only be able to measure strict mode behavior. If so, we agree we could make this fix only to strict mode, leaving sloppy mode unrepaired.
* If all this turns out as we hope, we’d only have the “non-trapping” integrity trait left, to be bundled into the root trait, currently called “stabilize”.
* With “fixed” no longer taken, we could rename “stabilize” to “fixed”, which was its original name circa 2010. Though we should not spend energy bikeshedding until we know if this is even possible.

### Conclusion

* We do not unbundle non-extensible, even though that means a loss of virtualizability (in a corner case no one will care about).
* Only a major browser can measure whether we can bundle “fixed” into “non-extensible”. Google already has use counters going to help us decide (yay!).
* Only a major browser can measure whether we can fix the override mistake globally for the language, with carve-outs for the narrow exceptions we find (only legacy lodash so far). Google may do so (yay!).
* If all this works out, we only have “non-trapping”, which becomes the root trait whose name is TBD (“stabilize”? “fixed”?).
* “Non-trapping” would address a major source of reentrancy hazards via proxies, without threatening proxy transparency!

## Records and Tuples future directions

Presenter: Ashley Claymore (ACE)

* [proposal](https://github.com/tc39/proposal-record-tuple)
* [slides](https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/)

ACE: Hi, everyone. It’s been a while since we talked about Records and Tuples; almost a year. I thought it would be good to chat about it again. It’s technically a Stage 2 proposal, but let’s not worry about that too much. NRO noticed that whenever a TC39 proposal makes it onto Hacker News, it’s almost inevitable that someone in the comments will be asking about the status of Records and Tuples. And also, on the actual repo itself, people keep asking "what is the status of this?". And I am not sure what that status is exactly. So I am going to present some ideas today. I am not proposing this for Stage 3 today; I am more trying to just encourage discussion.
Especially from everyone, of course, but I would love to encourage new voices in this area as well, because we have been talking about this for four or five years now, and new voices are always very, very welcome.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g334e668a325_0_0]

ACE: So to catch up people that haven’t been following all the things over the years: for many years, the proposal was all around adding new primitives. They had `===` semantics and a typeof, with comparisons that included special handling for negative zero. And then, when we thought we were kind of getting ready for Stage 3 and were only changing little bits of the proposal (these fundamental things had not changed for a long time), it turned out there was no appetite to do these fundamental things, for various reasons. Since then we have been back to the drawing board on "what can we do here?". Because I am convinced we can do something. I think this is something that is lacking in the language. And I am sure there is something that we can do; I just don’t know what it is yet.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g2af82517ce6_0_26]

ACE: So, a thing I like. I went off out into the British desert and meditated: if it was up to me, like a special birthday present ("ACE, you get one free design and can put it to Stage 4.7"), what would I choose?

ACE: I would choose that there is syntax. I will comment on this in more detail. I like this syntax; maybe it doesn’t have to be this sigil, but I really think we are running out of ASCII characters for it to be many other things. Initially it was a bit of a blow when we were told we can’t have new primitives, but I have come around to that. PHE said that, putting the implementation complexity aside, as a JavaScript user it would be confusing to have values that look like objects and arrays but are not. I have come around to that: yes, these things should be objects. I think they should be general containers that can contain anything. And I think it’s a great opportunity that they also work as a composite key.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g2af82517ce6_0_39]

ACE: So, syntax. There could be no syntax. Say the syntax was just about freezing objects; there is a separate proposal for syntax that is just about freezing and sealing. I think there are advantages for lots of different actors here. Reading it, at least (I don’t know how other people’s brains work), when I am looking at this, just losing the parentheses, the noise around the data, makes it much more readable. It’s fewer characters to type, but I think it’s mostly beneficial for the reader; there’s more weight on the reader than the writer. And on a tooling team, I like this from a tooling perspective. It’s much easier to analyze this code and see this is a frozen object; it’s not going to change. As opposed to tracking whether `Object.freeze` is being used: tools (I think Rollup does this) special-case calls to `Object.freeze`, but if there’s enough indirection, that breaks down; if you create a `deepFreeze` utility, static analysis tools can’t pierce through it to realize what is happening. Syntax gives you the guarantee that no monkey patching can happen. There’s another advantage, which only makes sense later on: I think there is potentially some kind of runtime advantage. But that only makes sense once I get to the later slides.
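A sketch of the tooling point, with the proposed literal shown only in a comment (the `#{…}`/`#[…]` forms are from the earlier proposal and are not settled):

```js
// Today a bundler must trace calls to prove immutability:
const config = Object.freeze({
  retries: 3,
  endpoints: Object.freeze(['a.example.com', 'b.example.com']),
});

// With dedicated syntax, the fact would be visible in the grammar itself:
// const config = #{ retries: 3, endpoints: #['a.example.com', 'b.example.com'] };
```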
[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_0]

ACE: An example of what I was just saying: say you are using this syntax for some global config exported from a module. What I can do, as a human and from a tooling perspective, is know what those values are without having to see every other module in the universe and check that every module that imports this value doesn’t mutate it. That’s a really nice property: I can be sure immediately that this thing is frozen. Again, I think that’s good for both humans and tools reading this code. Otherwise I am going to comment on the PR: "you are exporting this config; it could be weird if someone somewhere else mutates it; it’s nice to freeze it". The convenience of this encourages these types of patterns. It gives humans convenience.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g337bd48536b_3_0]

ACE: But syntax isn’t crucial for this. If there’s really no appetite for syntax, then we don’t have to have it. It could be APIs.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g2af82517ce6_0_21]

ACE: As I said, I have really come around to these being objects, because, again, the point that PHE made was: if you are going to adopt these things, there’s already a lot of code out there that is sniffing, reflecting on the properties of values in the language. You might have some utility that is overloaded: you can pass it a number or an array of numbers, and the way it works out the overload is using `Array.isArray`. Now, if we have these tuple things that are like arrays but don’t pass `Array.isArray`, then that utility no longer works, so you would have to update it or not use this new thing. If these things are objects, and they are arrays, and they inherit from the prototypes that others might expect, there’s a larger chance that you could just adopt them. I think there’s also a benefit to people learning the language: typeof tends to be a Chapter 1 thing; when you open the book, these are the core parts of the language. Whereas, I think, immutable data structures are really, really useful and something you want to learn about sooner rather than later, but not necessarily a Chapter 1, page 1 concept. So overall, I like them being objects. The reason they weren’t objects in the original proposal is that it helped explain, from a modelling aspect, how the language collects some of the other behaviors. But I think the weighting is actually crucial in that matrix.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g2c6eebea946_0_5]

ACE: The other part of the proposal (it was always about this; we found the original commits, from 2019) was that back then, these things were deeply immutable. The only things you could put in them were other Records and Tuples and primitives. And again, the feedback we got on that was: that’s great, but it really cuts off a lot of the language. Maybe if we were designing the language from scratch we could do that, but the ship has already sailed.
Pretty much everything in the language is mutable unless you do work to stop it being that way. And I just felt really sorry for the people that are going to use the new things we are building, like Temporal, which is fantastic and has an immutable data model. So if someone thinks "I am using immutable data": these aren’t like old-school Dates, which are internally mutable, where you can change what time they point to. Everything in Temporal’s data model is immutable, but they are still actually mutable objects: you can add new properties to them. So if, from a Records and Tuples perspective, the thing has to be frozen or stable or fixed, like in MM’s world, then it would cut off large parts of the language, even things that, from a user’s perspective, do follow the rule of being immutable. And I think there are still ways that, you know, a linting tool might spot little mistakes around immutability. We talked about having a box-like object that lets you opt out, but you have to encode that into the data model, and it makes it much harder to adopt.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g337bd48536b_3_6]

ACE: So my preference is actually to weaken this. I was trying to think: we could maybe say these things are deeply immutable, and what else in the language would that correspond to? In some ways, that could maybe correspond to the shared structs model: shared structs can contain structs and shared arrays and most primitives. If we go in that direction, we model things on shared structs, so there are cohesive ideas. But I don’t want to do that if the committee doesn’t feel that way and I am the only one thinking we should.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g3315cd10b42_0_0]

ACE: So yeah, there’s a tradeoff, and I think overall the flexibility is worth them only being shallowly immutable.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_19]

ACE: Part of me was wondering how much of that fills up TCQ, and I was going to stop here and drain the queue. So, moving on to the next bit. I think these are great for giving developers really ergonomic access to immutable data structures. But Waldemar is in the queue; I would like to hear his point right now.

WH: Regarding the previous Venn diagram you had: as far as I can tell, the records and tuples are ad hoc. It would be useful to be able to create immutable structs. How would that work?

ACE: Could you—I think, yes, I agree, it would be immutable—could you rephrase the question?

WH: The syntaxes you have shown all create objects with arbitrary shapes. The thing about a struct is, if you know the type of a struct, you know its shape.

ACE: Yeah. So I think –

WH: How would you create immutable ones which are actually structs?

ACE: One option is to think about these things as anonymous structs. You could create structs on the fly, like evalling; that, I think, already works in the shared structs proposal. You could imagine it’s similar to that: when you create the record, that is defining a struct that has those fields and it’s fixed. Or other syntax.

WH: As a user I have a dilemma. Let’s say I want to create immutable Points.
I could either write `#{x:4, y:-17}`, which gives me immutability but doesn’t provide a shaped type. Or I could define a struct Point and create instances of it, which gives me a consistent shape for all Points but does not give me immutability. Some people will choose the first one. Some people will choose the second one. We’ll end up creating a stylistic schism with religious wars on the boundary between them.

DE: I put myself on the queue; this is about ACE’s thought that all records and tuples are structs already. At the time of creation of one of these records and tuples, it will already be stable, whether or not each of the things inside of it is shared-structs-compatible. Your record or tuple is shared if all of the things inside of it are shared; it becomes a shared struct or a non-shared struct. At that point we still do have the false coupling between whether it’s nominal and whether the things inside are immutable; you need initializer lists and stuff like that. But at least in terms of being immutable and shareable, I think that could be handled transparently.

WH: I wasn’t talking about shared structs. Simple structs.

DE: Okay. Yeah. I don’t have a solution to that yet. This is falsely coupling mutability with being nominal and having methods. But I think that’s okay, because, similar to how we coupled privacy with classes, when you have something immutable it’s often data. A false coupling.

WH: Okay.

SYG: Yeah, DE mentioned initializer lists. I agree that it would be pretty cool to have immutable structs, shared structs notwithstanding for now. The problem is basically: if you want user constructors—the way it works today in the proposal is that you get one-shot initialization: by the time any of this value has escaped to user code, it has the own properties that are declared already on it. And then, you know, you can mutate it. But if you want something to be born immutable, that won’t work. So then you have the problem of, well, how do you limit access to this thing while it’s in this initialization phase, such that after that phase is over, it is then immutable?

ACE: Yeah.

SYG: I don’t know the solution to that. If we find one, that would be nice.

ACE: Yeah. There’s a similar thing in Java’s Valhalla; for them it’s easier because the syntax isn’t `this.field =`. You can say `field =` and it understands that. I have ideas in mind; I can share them in Matrix later. I’m pleased the possibility of an immutable struct is around.

SYG: And to finish the thought: if we bring shared into the picture, there it’s actually a hard requirement that the shared struct instance *must* not escape until it is basically fully done. If you want that shared struct to be immutable, you basically can’t run user code until it’s fully baked and can escape to user code, which means it can escape the local thread, because if you let it escape when it’s half done, you could get into badness, basically.

DE: SYG, does my story about records and tuples, where they become shared if they contain all sharable things, seem plausible to you? This is from WH’s question.

SYG: I think so. I am not sure—like, it’s possible. I will say it seems possible to me; I need to think it through. I imagine it’s something like: if it’s immutable, and all the things used in the literal syntax are primitives or shared structs, that automatically marks the record or tuple that comes out of it as a shared thing.
It’s a derived property from how you construct it. Is that how you had it in mind?

DE: Yeah.

SYG: That seems possible. But I don’t want to say whether that is desirable yet, because there is some cost to allocating a thing in shared space if you don’t ever intend it to be shared. Right? So maybe you want to let the programmer control that intent, instead of inferring "sharable" and therefore putting it in the shared space.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_19]

ACE: Thanks. I am going to move on. So, a kind of problem case that this centres around: we have had Maps in the language for a while now. They are great and I’m pleased we have them. But it really does feel like the only things I can use as a key are strings or numbers.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_40]

ACE: In terms of general programming, yes, I could put true and false as a key, or objects as keys, and it happens. But generally when I am using Maps, it seems like in the majority of cases the key is a string or a number, because that’s the thing that works best. `Map.groupBy` works fantastically when I am grouping by a single numeric or string value. But I very quickly get complexity when I am grouping by two values. If I thought I could return a pair, that’s fundamentally wrong: I am creating a new object every time, so I am not grouping by anything, really. So the thing is—I have no data on this, but anecdotally—people use strings, because they are in the language and this works. You can construct a string that represents the multiple bits of data, and now you will group by those values. But this has a bunch of flaws, or at least annoyances. Now I have a map filled with string keys; when I iterate over the map, it’s a nuisance to extract those values back out. And I also see people typically use things like `JSON.stringify` here, which may work until the key order changes between different objects, and then it breaks and they don’t notice. So I don’t really feel like we have a great answer for what people should do here today.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g33754a471b1_0_0]

ACE: The thing you technically can do, and I don’t see this happening a lot, is construct composite keys using objects. You use object identity with Maps so you can get the keying behavior you need, but the key is still a descriptive object with separate bits of data inside, rather than being compressed down into a string.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_76]

ACE: And you can do that today in userland; it doesn’t need any proposals, and there are a bunch of npm libraries that do this. The way they fundamentally work is they take the vector of things you give them, and then use a series of maps to walk through that vector to find, in the infinite space of all possible JavaScript values, the point where that value lives; then they create an object there, and it becomes the object that represents this key. If you just used Maps for that, this would leak like crazy. So to reduce this leaking, you use WeakMaps for the object parts, and then you can also use a FinalizationRegistry to clean things up as well. Most libraries use the WeakMap part of it, but they don’t use the FinalizationRegistry trick, so they leak a bit. This is doable in userland today.
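A minimal sketch of that userland technique, with made-up names (real libraries add a FinalizationRegistry on top of this to reduce the leaks discussed below):

```js
// Each node of the lookup tree keeps primitive steps in a Map and
// object steps in a WeakMap, so object parts can be collected.
function makeNode() {
  return { primitives: new Map(), objects: new WeakMap(), key: null };
}
const root = makeNode();
const isObject = (v) =>
  (typeof v === 'object' && v !== null) || typeof v === 'function';

function compositeKey(...parts) {
  let node = root;
  for (const part of parts) {
    const edges = isObject(part) ? node.objects : node.primitives;
    let next = edges.get(part);
    if (next === undefined) {
      next = makeNode();
      edges.set(part, next);
    }
    node = next;
  }
  // The leaf object stands in for the whole vector of parts.
  return (node.key ??= Object.freeze(Object.create(null)));
}

// Same parts, same object — so it works with ===, Map, Set, groupBy:
compositeKey(1, 'a') === compositeKey(1, 'a'); // true
```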
[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g3315cd10b42_0_10]

ACE: The pros: you can do this today; you can write it in JavaScript. And it just works, because it’s using `===` equality; everywhere in the language you have equality, it works. All the different equality operations we have agree on what happens with two objects. The con, which is evident from this, is that there is a lot of overhead. You create a key of n things, and you are creating n maps, potentially for every key; even if it only varies a little bit, once you diverge in one direction, after that it’s all new maps. You are creating a lot of objects. Another thing I think happens here (I don’t have data on this, and I am not a browser implementor) is that the garbage collector hates this a little bit. I think what ends up happening is you get a lot of references from old space to new space, so you don’t get the generational benefits if you really, really stressed this pattern. I would be interested if maybe that’s not the case, or if there are some interesting papers on how to mitigate that. I do think there’s a lot of overhead to this approach. An approach built into the language that used a similar technique could maybe do slightly better than userland, but I don’t think it would be magically faster than userland: it would have similar overheads, and it does sound like there are complexities with GC.

WH: WeakMaps work for keys which are objects. But look at the last two key parts on the slide, which are primitives and can’t be put into a WeakMap. Don’t these leak?

ACE: Yes. So the trick here is—lots of npm libraries leak if you keep creating keys that all share the same object but have, say, a different BigInt, because they rely on the objects going away. The way to mitigate that, which I have done in my implementation, is you have a FinalizationRegistry on the composite key, and if that is finalized, you clean up in the reverse direction: you remove the entry, and when a map becomes empty, you tell the map above you to remove its entry. It cleans up in the reverse direction. So that helps these things not leak. It’s very easy to make them leak if you hold them wrong.

ACE: The other downside: let’s say Records and Tuples were shared structs, similar to what SYG was saying—there’s a cost to allocating shared structs. If someone is creating the immutable data just to get the immutability, and not because they are planning on doing `===`, they will pay a cost at creation time that they never claim back. You have to do this work eagerly to do this at all. That’s a bit of a shame; people might say “please don’t use these here because they’re too slow to create”.

ACE: I’m pretty sure this causes GC complexity, which we could see as an opportunity for some really good GC research, or we could say that’s the reason we wouldn’t go in this direction. And there are people in committee that care about negative zero, and this approach in turn means you don’t get negative zero. It’s another thing that the npm libraries always get wrong: they always forget about negative zero.
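For context, the negative-zero wrinkle (shipped behavior): Map and Set use SameValueZero, which conflates `0` and `-0`, while `Object.is` distinguishes them.

```js
const m = new Map([[0, 'zero']]);
m.get(-0);        // 'zero' — SameValueZero treats 0 and -0 as the same key
Object.is(0, -0); // false  — but they are observably different values
```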
[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_47]

ACE: So going back to my meditating-in-the-desert image: I was thinking you don’t necessarily have to intern these things. What we have is an opportunity. If we added these to the language, we get this one opportunity to decide what the semantics are. We could say that these new things have new behavior; they introduce a new capability, to steal a word from DE’s slides that haven’t been presented yet. We can say that in particular APIs in the language, even if two of these things are not triple-equal to each other, they could still be treated as equal.

ACE: So maybe we could actually say that when you use these things in a Map, the Map checks them, sees they’re Records and Tuples, and applies the new semantics, and this line here would work.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g2af82517ce6_0_31]

ACE: Why could we possibly do that? I think we could do it if we wanted to, because they wouldn’t violate any of the requirements we put on Maps and Sets today. These things would be stable—fixed, in MM’s world. The equality that they could offer would meet all of the existing equality rules that we need for a Map and a Set. So they wouldn’t violate any of the things that we would want a Map and Set key to do today. And it would be backwards compatible, because these things don’t exist yet.

KG: It has not been clear to me throughout this presentation whether these are triple-equals or not.

ACE: It’s not clear to me either. I think they can’t be, because—I’m putting words in implementers’ mouths—I think implementers will say that will be too slow, unless maybe there’s a way that interning costs could be reduced. From everything I’ve been told and everything I understand about the engines, I feel like the same view that pushed us away from these being primitives applies here: them being triple-equal would be nice from a semantic point of view, but the fact that these things actually have to run and be performant means they wouldn’t be.

DE: I’ve been assuming they’re not triple-equals as well. And this is because, the previous time that Records and Tuples were proposed for Stage 3, we got extremely explicit feedback from implementers that neither strategy would work: not interning, because the cost is too high, and not doing the deep comparison in place, because it’s too important that triple-equals on objects is just a pointer comparison. So the proposal that Ashley is talking about tries to work within that.

JGT: When I hear about Records and Tuples, I think about React and the way React deals with its property updates. For folks who aren’t familiar, React will re-render the component if any of the properties that you pass it are different, and "different" is `Object.is`. So what is interesting about this case: might there be something that could work with React, that would be backwards compatible enough to make it into React, that would solve that problem? What is nice about it is that it’s not the user doing triple-equals or calling `Object.is`, it’s the framework.
This is actually a huge problem for React development: if it’s a string, you just pass in the prop. If it’s not a string, you have to bend over backwards and do all this crazy stuff to make it work. There are many libraries whose job it is to make this easier; you still screw it up all the time. I wonder if the way to approach this problem might be to look at various use cases, not unlike the previous discussion we had earlier today, and even if you can’t solve all of them, can you find particular use cases (like Maps, like React, or whatever) that might have at least partial solutions that could add value? So my main recommendation would be to try to carve out the use-case space: I’m imagining some grid of "here are six use cases for this kind of problem, here are the various approaches, and here is which one is better". That might be a good way to visualize this.

DE: We’ve done that exercise very extensively over the past six years on this question in particular. We have gotten feedback from the React team that they don’t want deep equality done in these cases in particular: if you have a tree of state passed down through props, handed down level by level, doing a deep comparison at each step would be too much work. This proposal suits their requirements by providing the identity comparison that they would continue to use by default. In certain cases, maybe you want to opt into structural comparison. And the good thing: we got negative feedback from the React team about the previous version of the proposal, which only offers structural comparison. They said, we want the fast path where we just do identity comparison. So in that sense, this meets the –

JGT: Can you clarify: if you don’t have triple-equal support, how do you get identity comparison?

DE: Let’s go through the rest of the slides, because they answer this question. And then we can do the queue afterwards.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_23]

ACE: The way I was imagining this would work is that when you create these things, they’re tagged with an internal slot, and there could be a brand check for that. The equality would still be recursive, even if these things are only shallowly immutable: if the values inside them are other things with this tag, then the equality would still be deep, as far as you stay within Records and Tuples. As soon as you hit a mutable object, you fall back to referential equality at that point. Things like this would be implementation details rather than part of the spec, but they would apply the same way to Maps and Sets: they can still use hash codes to help when you’re putting these in Maps and Sets, so they’re not necessarily computing this every single time. You can cache the hash code, much like you can do with strings. Crucially, a big part of why this would work in Maps and Sets without changing them is that these things have this tag from birth. You can’t put an object into a Map or a Set where it’s compared by reference and then later install this slot, changing its equality. When you put it into the Map or the Set, its equality semantics never change. And, going back to why I think syntax is nice here: because these things have to have the slot from birth, if the API is "give me an object and I will turn it into a record", you have to double-allocate everything; whereas syntax means you can immediately jump to creating the final result.
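A sketch of the equality ACE outlines, assuming a hypothetical `Record.isRecord` brand check and plain data properties:

```js
// Recurse while both sides carry the record brand; otherwise fall back
// to SameValueZero, which for objects is just reference identity.
function sameValueZero(a, b) {
  return a === b || (Number.isNaN(a) && Number.isNaN(b));
}

function recordEquals(a, b) {
  if (!Record.isRecord(a) || !Record.isRecord(b)) {
    return sameValueZero(a, b);
  }
  const aKeys = Reflect.ownKeys(a);
  const bKeys = Reflect.ownKeys(b);
  return (
    aKeys.length === bKeys.length &&
    aKeys.every((k) => k in b && recordEquals(a[k], b[k]))
  );
}
```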
[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g3350f6676e7_0_14]

ACE: So the advantages of this approach would be: we don’t pay the complexity of interning, and we have the choice, if we wanted, to get these things to work with the APIs we already have in the language. They wouldn’t work with `===`, but they could work with Maps and Sets, and in other upcoming proposals, like `uniqBy`, composite keys could be useful.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g3350f6676e7_0_9]

ACE: The downside, the reason we might not want to do this, is that JavaScript already has four different forms of equality, and adding a fifth, you know, is a hard pill to swallow. Interning wouldn’t cause us to do that. We could completely replace SameValueZero everywhere we use it, if we do this; that’s probably a terrifying prospect. And modifying Maps and Sets to have new semantics is really annoying for polyfills, too; maybe that’s why we wouldn’t want to do this. But coming at this purely from the perspective of the end JavaScript user, I think they are really the ones that would benefit if we did do this. Because if instead we had new APIs and created a new type of Map, CompositeMap, and then a JavaScript developer asked "when would I choose a regular Map versus a CompositeMap?", the answer 99% of the time would be "you can use a CompositeMap": if you’re putting strings and numbers in it, it keeps working the same, and if you are putting Records and Tuples in it, you always want the composite-key behavior. It would feel like a shame to have a Map and then a second type of Map, presenting it as if there’s a choice when there really isn’t one, where the only reason we did it is so that these things layer—similar to a polyfill, and maybe similar to how engines would implement it as well. Purely from the JavaScript user’s perspective, I think it makes sense that these don’t introduce new container types. But I can see the argument for why we wouldn’t go that route. Let’s discuss; I can come back to the React thing. I can see there’s stuff on the queue.

DE: Just a clarifying question about the last slide: what are you suggesting it be? Should it be replacing SameValueZero, or are you suggesting something else?

ACE: If it was up to me, I would replace SameValueZero. But I can just sense from meeting people that that won’t be palatable. I would love to be able to convince the committee that it was. I can see it being an uphill struggle to convince people, but I’m prepared to fight for it.

ACE: On the React thing: we can’t really do this in a backwards compatible way; these things wouldn’t be `Object.is`-equal if they’re not literally the same object. If React wanted to switch, they would need to switch to the equality predicate. When we talked with the React team years ago about whether they would recommend Records and Tuples, they said probably not with the old semantics, because they actually prefer the React compiler approach and more granular local updates.
The thing with the React compiler is that you’re creating very specific comparisons, unique per call site. Say you’re creating a record with ten properties and only one of them changes: the compiler can say "I only need to check whether that one property changed; I don’t have to check the rest, because I can see at the React compiler stage that they don’t change". So that’s the way React is now solving the problem. But the compiler doesn’t help when it’s not a local thing: multiple data sources creating data all flowing into one React component, and that React component still wanting to normalize. That’s the case I’m still hearing about from React developers. They’re saying that even with the React compiler, we still want `===` for that case. But I think the React use case is really reduced now that there is the React compiler.

JRL: Specifically between DE and JGT: JGT suggested using Records and Tuples for props, and DE responded about trees, a different point of view. If we had it for props, the discussion we just had about the React compiler makes it less necessary, but we would still need it for the individual sub-props inside; it could be used there. But for the React VDOM tree it is absolutely horrible: it turns what is currently a linear algorithm into a quadratic one. That was DE’s point; two different discussions were being discussed.

DE: I was talking about the state tree, and it’s horrible for –

ACE: A lot of people said Records and Tuples would be great for the VDOM, but it’s the exact opposite: it’s counterproductive for the VDOM.

WH: Maybe I missed it, but what does `equalRecords` do when it gets to a non-record? Is it SameValueZero or something else?

ACE: Yeah, SameValueZero. We would have to do a modified version of SameValueZero; it would be a new SameValueZero that takes Records and Tuples into account.

KG: Since this is a temperature check: I am positive. That’s the main thing I want out of Records and Tuples; composite keys that work with existing APIs like groupBy and uniqBy and Map and Set would be great. There’s a handful of details. It also doesn’t necessarily require syntax to work; there was an old proposal for doing this just as a built-in composite key.

ACE: Yeah, absolutely.

KG: And anyway, this is the main thing that I personally want out of Records and Tuples. I would be happy with this direction.

MF [via queue]: +1 to everything that KG said

LCA: I’d phrase this as: it’s unfortunate we cannot make this work. It would be very, very cool if it was still possible for the built-ins that are sort of deeply immutable, like, for example, all the new Temporal types, to work inside of a composite type—sorry, composite key. I don’t know exactly how that could work. If this is an "is Tuple" or "is Record" internal slot, would it be possible for us, down the line, once this ships (if this ever ships), to add that to the Temporal types, for example? There’s obviously the backwards compatibility concern: if you have a set that consists of multiple different Temporal-type objects, whether that would work. And there’s the other question that if there’s no way of doing this in userland, you can never polyfill Temporal correctly, which would suck. But it would be really cool if we could investigate this as part of that.
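For context, the behavior LCA is pointing at, with Temporal as currently specified: equivalent Temporal objects are structurally equal but remain distinct Map/Set keys, because collections compare objects by reference.

```js
const a = Temporal.PlainDate.from('2025-02-20');
const b = Temporal.PlainDate.from('2025-02-20');
a.equals(b);          // true — structurally the same date
new Set([a, b]).size; // 2 — but two separate keys today
```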
ACE: If we could go back in time, many years ago: we purposely chose not to tie Temporal to Records and Tuples, and I think that was the right choice, because Temporal would be even further away from Stage 4, and it’s so great that Temporal is close. I think the way we could do it is we could have a `toImmutable` or `toRecord` or `toFixed`, something that lets you upgrade it to that point. I don’t think we would be able to stay compatible and just make them automatically frozen in this way; I think it’s too late for that. But I don’t think we’re completely cut off from doing something in this space. Did you want to add to that?

DE: Yeah. I don’t see the path that you’re describing as particularly reasonable; it would be kind of annoying to write that all over your code using Temporal. But in general, because you could put Temporal objects in as Map or Set keys, this is something that we would have to decide now; you don’t need Records and Tuples to make structural comparison of them relevant. Previously with Temporal, we had custom calendars and time zones, which offered their own extra identity issues. Now the only thing in play is the prototype itself, which I imagine we wouldn’t really want to participate in comparison. Anyway, I think it makes sense to ship Temporal as-is, without the structural comparison. So I guess I agree with the way you originally phrased your statement.

LCA: If I could respond to that: I agree with you that we should absolutely not hold Temporal back for this. I’ll let the rest of the queue go on.

MAH: I was wondering if it would be solved, not just for this but in general—the problem of creating custom objects that are immutable and that you can use with the semantics of a record—if you could just set the prototype when you define the record.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_52]

ACE: So I had a hidden slide. I think we could say this is valid syntax, and we could create things with a custom prototype, so you can still create things that are themselves immutable but have inheritable methods; you can create something that still has the benefits while operating in that domain, rather than being just plain data with no methods. So I think, yes.

LCA: Importantly, the prototype would not participate in equality.

ACE: I think it would participate insofar as the two things have to have the same prototype. But that would be it. Otherwise it would be effectively the same as the other things.

KG: I think this would be pretty tricky. The questions around this get kind of funny. Especially if you are thinking of this primarily as composite keys, the answer would be clearly no, because it’s just a key; it doesn’t make sense to think of it as something with a prototype. Whereas if you’re thinking of these as more general-purpose objects, you have to think about prototypes. In other languages you want equality to hold for subtypes sometimes and not other times, and it’s a much bigger world of questions to explore once you allow prototypes other than null.

ACE: I do feel like there is space for custom equality. But that would be a symbol protocol, and that definitely wouldn’t work out of the box for Maps and Sets, because I think it violates the rule that once something is in a Map or a Set, its equality can’t change.
For those cases where you want subclasses, and maybe the way the equality works varies—do you care about case-insensitivity of strings, and things like that?—to me all of that side seems like a future symbol protocol. The thing here is that there is no symbol protocol; it’s kind of set and fixed.

LCA: Just to respond to this one more time: I agree this is complicated. But as we have seen with shared structs, not having prototypes is unergonomic in many cases, and there is a lot of complexity that shared structs is now adding to enable prototypes. It depends on the use cases. It would be nice if you could have a 2D point that you can add to a map and it works correctly, and you could still have methods on it. I agree it’s complicated.

KG: It depends what the use cases are. If it’s just a composite key, you don’t care about those things. If it’s more general, you care about those things. It really depends on how we’re phrasing this, or what we think the main value is. For me the main value is just composite keys. But I think people have other things that they care about; I’m not the only user.

MAH: I like the direction. Obviously there has been a lot of discussion in Matrix, and the question is: if we have these as objects and they work transparently with Maps and Sets and so on, how does new code introduce these objects, and how does old code that uses Maps, Sets, WeakMaps and so on behave when it encounters these objects? Right now, if you have two things that are not equal (except for NaN), they will end up as separate entries in Maps or Sets. If you have something that is an object, you expect to be able to hold it weakly. So there are a lot of details here that will need to be figured out. I hope we can figure something out, but I’m worried that there are going to be more difficulties down the road.

ACE: I’m looking forward to seeing the conversation in Matrix.

DE: How do you think we should investigate that?

MAH: I don’t know. I usually feel that libraries are allowed to have these expectations, and I don’t know if we can break them that easily.

ACE: My hypothesis is that this model we are worried about in committee wouldn’t hold up in practice: that libraries taking in third-party objects and putting them in Maps and Sets aren’t relying on reference identity the way we think they theoretically could. I’m biased; that’s what I’m hoping for.

SYG: On this slide, what does "no interning overhead" mean? I might have missed it.

ACE: Sure. Creating these things would be, putting shared structs aside, roughly as expensive as creating a regular object, plus perhaps a few additional checks. You wouldn’t actually have to go to a global table, see if an identical one already exists, and then use that existing object. So you’re not actually having to do structural sharing.

SYG: I see.

ACE: These are regular objects, with maybe one extra internal field, and maybe a precomputed hash value. I’m not saying zero cost, but I think less cost than a structural-sharing approach.

SYG: I see, okay.

JGT: I will just clarify: I think JRL was accurate that there are two very different issues around React. I think the biggest developer-experience win would be for props; nobody really manipulates the VDOM unless they’re really advanced, and those React developers can do the hard stuff. Anything like LCA’s example, a 2D point where you have a graphing component taking a 2D point, is much easier than cracking out the x and y every time you use it.

ACE: This is exactly what I was hoping this would do.
The feedback is very welcome. So thank you.

### Speaker's Summary of Key Points

* Recapped some of the history of the R&T proposal, specifically that the design would need to not add new primitives and that there is no appetite to overload `===`.
* Stated a potential new design that works within the previously stated constraints. The design includes syntax (though this could be optional), shallow immutability, and compositeKey equality for existing APIs, but does not provide new `===` equality semantics.
* There was also discussion on how the proposal might interact with the structs proposal.

### Conclusion

* Feedback was generally positive to continue exploring this direction.
* There was some feedback on the potential for complexity when getting into the details, such as existing code expecting objects in a Map to only use reference equality.

## Use cases for ShadowRealm

Presenter: Philip Chimento (PFC)

* [proposal](https://github.com/tc39/proposal-shadowrealm)

PFC: Yesterday in my segment on ShadowRealms, we talked a bit about use cases, and I thought I would do a short addendum on how I see this request for use cases. In general, if we’re talking about proposals that intersect TC39 and the wider web platform world, I think we are often talking about different things when we talk about use cases. I want to say upfront: I’ve been involved with TC39 for five years, so I have a fairly good idea of what we want in this committee. That is not the case for the wider web platform world. I could be wrong; I invite you to tell me how I’m wrong at the end of the presentation. As part of my work on ShadowRealms, I ran across this document, "Writing effective explainers for W3C TAG review". If you haven’t seen it (I haven’t published these slides anywhere yet), I will put the link in the chat afterwards. It is a very nice document that explains what the W3C TAG wants when you ask them for a review of a proposal. So, again, don’t take what I say as an official pronouncement; this is my interpretation. These are quotes from that document. They ask you to describe the problem that your proposed feature aims to solve **from an end user's perspective**. That is not emphasis that I added; it is in bold in the document. They seem to find it important. There is another paragraph further down: start with a clear description of the end user problem you’re trying to solve, even if the connection is complex or you discovered the problem by talking to web developers who emphasized their own needs. That’s an interesting phrasing, which says to me that you may conceive of a feature because it fulfills developers' needs, but you need to describe it in a way that fulfills end users' needs if you want them to pay attention to it.

PFC: So, again, this is my interpretation. I take this to mean that cultural norms in that community dictate that "this feature will allow developers to do such-and-such a cool thing" is not going to be taken seriously as a use case. I think that’s what I mean when I say that sometimes we are talking at cross purposes about use cases when we have proposals that intersect these two worlds.

PFC: That doesn’t happen very often. I think it does happen in the ShadowRealm case, because one of the things we want before advancing ShadowRealm to Stage 3 in this committee is integration with web platform APIs, and we need web platform buy-in for that. It happens for a few other proposals, like AsyncContext.
But for most of the proposals we talk about, this is our house and our rules, and we decide the use cases that we like. ShadowRealm lives in two houses and has to abide by two sets of rules.

PFC: So, this is not verbatim any particular phrasing of a use case that we have provided, but it is my paraphrase of the kind of use case for ShadowRealm we’ve been talking about so far: ShadowRealm lets you run third-party scripts quickly and synchronously with integrity preservation, and allows combining building blocks from different authors that might conflict with each other. I think this is a perfectly valid use case from our perspective. It makes me think that ShadowRealm is a valuable addition to the language. But it doesn’t mention anything about the end user. I think when we hear "give us use cases" from the web platform folks and we give them this, that’s not what they’re asking for.

PFC: This, in my opinion, might be a way to rewrite the thing on the previous slide from an end user's perspective: large platforms like web applications often allow customization via plug-ins. In JavaScript, most built-in stuff is overwritable, so badly behaved plug-ins are always a concern. When application writers have a way of segmenting off and isolating code they don’t control, they can deliver a more stable experience to users. This is shortened for the slide, but I would also say something about how, for a customer of that platform, you might install 19 plug-ins, 10 of them written by the customer itself, and you get stability in that case even though you can’t count on the code quality of the plug-ins, whatever. Rather than "this allows developers to do such-and-such", focusing on what developers can build that they couldn’t build before is the kind of thing that we need to provide when we’re giving use cases to web platform folks.
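A minimal sketch of that plug-in isolation story, using the ShadowRealm API as proposed (the plug-in code here is made up):

```js
const realm = new ShadowRealm();

// A badly behaved plug-in can clobber built-ins inside the realm...
realm.evaluate(`Array.prototype.map = function () { return 'oops'; };`);

// ...while the application's own realm stays intact:
console.log([1, 2].map((x) => x * 2)); // [2, 4]

// Only primitives and callables cross the boundary, preserving integrity:
const greet = realm.evaluate(`(name) => 'hello ' + name;`);
console.log(greet('web')); // "hello web"
```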
PFC: So that’s my interpretation. I’ve discussed this with a few people and heard reactions ranging from 'sure, that makes sense' to 'I don’t think you’re right about this.' So I’d like to invite discussion here. What do you think about this? What has been your experience with proposals that intersect those two communities?

MM: I want to be explicit about supply chain risk as a use case. It’s kind of implicit in a lot of what you said, but I think it’s worth making explicit, and it makes a more vivid case for the end user than is obvious from the way you put it in your presentation. There have been attack after attack after attack where some third-party component was revised to attack the users of programs that use that component; several of these are very famous cases. Now, HardenedJS, LavaMoat by MetaMask, and XS are all trying to provide good mechanisms against supply chain risk when the elements can live within the restrictions of HardenedJS. If we fix the override mistake, then a tremendously larger number of existing npm packages will in fact be compatible with HardenedJS, so that we can apply all of the supply chain protections to them. But a lot of code can’t live within the restrictions of HardenedJS, in which case you cannot protect the parties from each other in a single realm.

MM: What ShadowRealm gives us, especially with the boundary that the committee put on us, which we ended up being overjoyed to have accepted, is that you can take programs that cannot run under HardenedJS because, for example, they modify the primordials at runtime in ways that HardenedJS must prevent. ShadowRealm enables a much heavier-weight protection domain, which is the realm, but provides the same protections between the protection domains without constraining the code within each protection domain; each is still protected from the others. I just want to reiterate a figure that I heard many years ago, and that I believe is still correct: the typical JavaScript application, according to npm statistics as of years ago, is 3% code specific to the application and 97% code linked in from third-party libraries, often through third-party dependencies that many are unaware of. Many supply chain attacks come from dependencies deep down the dependency chain.

PFC: Okay. I think supply chain risk is a good one; it’s very easy to explain in terms of benefits to the user. Thanks.

DE: I’m wondering if we could get more feedback from browsers on what they think of Philip's explained use case. What kinds of evidence would be interesting for you in evaluating whether this is a good use case? It’s okay if you don’t have the answer now. Maybe you, or the champions, could get back to us between now and, you know, some time in the future.

SYG: I’m not the one doing the evaluation. I think the two-houses metaphor is apt here. This is kind of stuck right now because the bar in WHATWG is active interest from the browsers to implement this on the non-JS-engine side. That’s the bar that you need to clear. And asking this room what the browser representatives from the JS engines think of your use case doesn’t progress towards that goal, as far as I can tell.

PFC: Yeah, that’s my understanding as well. It’s also why, in my presentation yesterday, I didn’t spend very much time talking about use cases. My impression was that we talked about those already, this room is pretty much convinced, and it’s elsewhere that we need to do the convincing. But if you do have any remarks or meta-discussion about the way that we present our use cases, or whether you think my interpretation of what is going to be convincing is correct or incorrect, I would love to hear that.

SYG: I think that makes it sound like you are asking the browser JS engine representatives if we would like to help champion this proposal along with you. Is that what you’re asking?

PFC: No. I’m not asking anybody to do anything. I put up an understanding that I have of the way that things need to be communicated. I’m asking you, or anybody in general, whether that rings true to you.

SYG: I see.

MS: Basically what SYG said; we talked about this yesterday. This is more on the DOM side and the W3C APIs at this point.

JHD: So ideally our process is set up so that the stage advancements, which are intended to be signals, are those signals. What I’m hearing is that the browser TC39 reps aren’t the ones—it’s a different group or team making those decisions, at least in the browser. We certainly don’t have the capability of fixing bureaucracy everywhere. So I guess I would love to understand (and if somebody understands it and I just missed it, I will be quiet): what should we have checked before 2.7 or 3 or whatever? Who should we have checked with, and so on? To avoid Stage 3 things not being prioritized. Obviously that’s a long-term question; I don’t expect the answer right now. But whatever the answers are, it would be great if we found a way to incorporate them into the process so this is not a problem in the future. That’s all.
KG: I guess I’m on the queue. I think we have. We said that for things which require integration with the greater web platform, that needs to happen as part of the Stage 2.7 to 3 advancement. We demoted ShadowRealm to 2.7 partly for this reason, so we could then have the integration happen. So at least my understanding is that that’s what we have done: we have been saying that you can get 2.7, but if your proposal requires integration with host APIs, then for Stage 3 you have to get sign-off from the people that you need to integrate with.

JHD: Did we not get that for ShadowRealm?

PFC: No. That’s what I was asking.

NRO: As Kevin said, we’re learning the lessons, even if it’s painful. It came late in the case of ShadowRealm, but it’s happening for separate proposals like AsyncContext before 2.7.

MAH: There’s still something that I’m confused about with this specific proposal. This is a JavaScript API, and the browsers asked that the host be able to add their own APIs to the global object of ShadowRealms, so this was considered and accepted. And now, from what I understand, we’re hearing that the part of the browser that decided which APIs go on the global gets to relitigate whether the feature should exist at all—not just agreeing on which APIs are valid to be on there, but whether the JavaScript API that went through the staging process here is a valid use case for the web at all. Why is it that this feature requires approval from W3C or WHATWG (I’m not sure which one) to be added to the language at all in this case?

PFC: My understanding is that the signal in this committee is stronger than what you were saying. The signal in this committee was: we don’t want this feature to exist if it doesn’t integrate with host APIs. The signal was not: we want this feature regardless, and the host can add APIs if they want to. That is my understanding. Somebody can correct me if I’m wrong about that.

KG: I’m on the queue saying that almost word for word. I don’t want this feature to exist if it doesn’t have TextEncoder and stuff. People shouldn’t have to know about the split where TextEncoder is in a different specification than `encodeURIComponent` or whatever. This is completely irrelevant to almost all users of JavaScript, and I don’t want to add any feature that makes that distinction relevant to them. I don’t know in general which proposals need sign-off from WHATWG, but for this specific proposal, I don’t want it to exist until it has the handful of APIs that have been carefully outlined as making sense as purely computational.

MAH: So we brought this on ourselves?

PFC: Yeah.

JSL: In my experience, another key part of the process that tends to get missed is agreement on the problem statement up front. In WHATWG, a lot of times, what they want is a chance to agree that the problem is a problem they’re interested in solving, before the use cases get presented. You know, are the various browsers and various implementers all on the same page? I think a lot of times that step ends up getting skipped: "We agree that we think it’s the problem to be solved; what do you think of this solution?" It’s like, no, no—first, "we think it’s a problem, do you agree?" Does that make sense?

PFC: That does.

MAH: So is this going to be a problem with WinterTC APIs?

JSL: Yes, a hundred percent.
+
+### Speaker's Summary of Key Points
+
+* We discussed the presenter's interpretation of the differences between what it means when we say "use cases" in TC39 and what it means when someone from the W3C community says "use cases". In the web platform world there is a strong emphasis on the benefits to the end-user.
+
+### Conclusion
+
+* None
diff --git a/meetings/2025-02/february-20.md b/meetings/2025-02/february-20.md
new file mode 100644
index 00000000..cf7b021b
--- /dev/null
+++ b/meetings/2025-02/february-20.md
@@ -0,0 +1,802 @@
+# 106th TC39 Meeting | 20 February 2025
+
+**Attendees:**
+
+| Name | Abbreviation | Organization |
+|------------------|--------------|---------------------|
+| Chris de Almeida | CDA | IBM |
+| Samina Husain | SHN | Ecma |
+| Eemeli Aro | EAO | Mozilla |
+| Daniel Ehrenberg | DE | Bloomberg |
+| Daniel Minor | DLM | Mozilla |
+| Ujjwal Sharma | USA | Igalia |
+| Art Vandelay | AVY | Vandelay Industries |
+| Jesse Alama | JMN | Igalia |
+| Ron Buckton | RBN | Microsoft |
+| Nicolò Ribaudo | NRO | Igalia |
+| Kevin Gibbons | KG | F5 |
+| Oliver Medhurst | OMT | Invited Expert |
+| Luis Pardo | LFP | Microsoft |
+| Dmitry Makhnev | DJM | JetBrains |
+| Linus Groh | LGH | Bloomberg |
+| Philip Chimento | PFC | Igalia |
+| Erik Marks | REK | Consensys |
+| Chip Morningstar | CM | Consensys |
+| Aki Rose Braun | AKI | Ecma International |
+| Istvan Sebestyen | IS | Ecma |
+| Michael Saboff | MLS | Apple |
+| J. S. Choi | JSC | Invited Expert |
+
+## Decision Making through Consensus - take 2
+
+Presenter: Michael Saboff (MLS)
+
+* [slides](https://github.com/msaboff/tc39/blob/master/TC39%20Consensus%20take%202.pdf)
+
+MLS: This is a meta conversation, a meta discussion: how we work. I presented this at the February meeting last year, so I guess it's an annual thing; hopefully it's not forever. What I would like to talk about is consensus—basically, how we work as a committee. Some of this is a review from last time. There are a few different definitions of consensus. It comes from the Latin word consensus, which means agreement: a generally accepted opinion or decision among a group of people; the judgment arrived at by most of those concerned; and a group solidarity in sentiment and belief.
+
+MLS: And since TC39 is part of Ecma, what do the Ecma bylaws say about consensus? It's interesting that the bylaws are actually silent about what consensus is. These are the three rules that you find in Ecma—there are bylaws and there are rules. There are three rules about decision making: they talk about simple majority; they say we should not use voting unless it's required; and—this one is not something we use—a member of a TC has the right to ask for a minority report, which they shall provide to be included in the semi-annual report. The interesting thing is that these rules exist, and the other TCs, and TC39 as well, work by consensus; but in all the other TCs, consensus is basically a generally-accepted-view kind of policy. I think that most TCs do not regularly take votes on things. So what I'd like to do is look at our practice, then talk about it and see, you know, if there are ways to possibly change it. In most cases we follow the notion of general agreement. We see that at this meeting and most meetings: after most discussions, the moderator will ask, do we have consensus or explicit support for Stage 3 of this proposal?
+
+MLS: And we look for delegates that either, you know, support with a thumbs up, or we see it in TCQ and they don't need to speak: I support this.
+And that seems all well and good. That's really good, that we are able to operate that way 9X% of the time. Occasionally someone will speak up and say, "I withhold consensus", and give a reason as to why they withhold consensus. And that makes the process one of unanimity. We must all agree—and sometimes we agree by being silent—but we must all agree for something to move forward, or for something that we are discussing to happen. One dissenter blocks consensus. And that's what I would like to talk about today. There is a truism that a single person withholding consensus is, basically, what we call a block. It's a veto. We're vetoing something. A single member of the committee has the power to decide what we do, or actually, in most cases, what we don't do, and that's what I would like to see if we can change.
+
+MLS: So here are some of the issues that I have with the current process. My observation is that withholding consensus is generally used by a small number of committee members, and I would add that those who withhold consensus, or block, are typically more vocal and longer-serving members who feel comfortable speaking up. Certainly we have members who have served on the committee for a long time, are prominent in the JS world, and know JavaScript, the committee, and the language, and things like that. But the committee as a whole is ceding greater authority to this small group of people.
+
+MLS: And there have actually been cases, although rare, where a single blocker has ended the discussion of a proposal—basically shut that proposal down. And there have also been cases that I'm personally aware of where somebody who has been blocked has stopped attending. They don't attend TC39 anymore. Now, I want us to consider the newcomer to the committee. I can't remember who I was talking to, but I started attending in 2015—it's hard to believe that nearly ten years have gone by—and I'm considered a newcomer to the committee. A newcomer sees this single-dissent policy in action. For some, it might energize them: look at the power that I have—if I don't like something, I can block it. But it's probably more the case that when someone is checking out TC39 for the first time, or a few times, they look at how the committee operates and it turns them off.
+
+MLS: There are different personality types. I'm willing to speak up and get involved in the argument, but there are other people who are more timid. And somebody like that who wants to bring a proposal can be put off by our single-veto policy.
+
+MLS: The last thing I want to point out is that we need to acknowledge that our lone-veto policy can hurt the relationships within the committee. Yes, we have competitors in the committee; I work for one of the browser vendors, and there are other browsers that are represented at every meeting. And, you know, my company may have a slightly different view of how JavaScript should evolve, and we have to come together from these diverse backgrounds. I work on the JavaScript engine, and I write some JavaScript, but it's mostly tests—I'm a C++ programmer. I need to hear what JavaScript developers want in the language. So we have to come together for the benefit of the whole community: developers and implementers. Now, I don't want to impugn the motives that someone may have in blocking, although there may be past instances where one could question the motives in specific cases. For me, it's the impact of having a single person being able to block. So, our current veto power, as I call it, versus supporter power.
+Somebody who vetoes something versus somebody who supports it: basically, one veto beats any number of supporters.
+
+MLS: Facetiously, I said let's put it in JavaScript—we understand it, or should. This is a way of representing how our current structure works. You know, each delegate has the same, quote, unquote, power when attending a meeting. Collectively we'll say the committee has a total power of one, so each delegate's power is a fraction whose denominator is the number of delegates. But the veto power also sums to one, with the denominator being the number of blockers. And so as soon as there is a single blocker among the supporters, the vetoes will win. And this is maybe more advanced than the JavaScript needs to be: if the number of vetoes is greater than zero, the motion is going to fail, whatever it is. I do want to point out at this point that, according to the Ecma bylaws, only delegates should be allowed to vote. An invited expert, I don't think, should be considered as somebody who can block. We've been generous in that, but I just want to point that out, and that's maybe a separate discussion to have.
+
+MLS: So what I'm proposing is that we have a policy where we need 5% of delegates to block something, with a minimum of two. If you have 40 people, 5% is 2—and we are typically more than that—so with fewer than 40 people, the minimum of 2 applies. Why did I pick a minimum of 2? My theory is that if I were to block something on some kind of principle, I should be able to convince at least one other delegate who's attending that my reason for withholding consensus, or blocking, is reasonable, and they would support me. If someone can't do that, then I think that's a reason why they shouldn't be able to block.
+
+MLS: So once again, I put this in JavaScript. This is a set of instructions that describes it. Basically, what I call the power of a veto and the power of a delegate are equal, and we decide based upon some percentage: like I said, if 95% or more of those present support something, it passes; less than that and it fails. So this is basically my proposal. I put 45 minutes on this; I expect there will be a lot of conversation. Translating this back to English, what I propose is that to block—what we call withholding consensus—we need 5% of delegates or a minimum of 2 vetoes, whichever is greater. So I don't see the queue in front of me, but let's go to the queue and let's have some conversation.
+
+USA: Reminder that we have a little over 30 minutes, so let's navigate the queue accordingly. If you permit, I'll start in order. First we have JHD.
+
+JHD: Yeah. So your presentation is two parts: the problem and the proposal. I completely agree with every aspect of the problem that you described. I wanted to talk a little bit about the benefits of our consensus process, in that I think that we are one of the best-functioning standards organizations out there, based on my experience in others and conversations with folks who have had experience in others. I think that is because our consensus process—assuming everyone is always acting in good faith—ensures that all of the… what's the word I'm looking for? Each of us in here represents some percentage of the ecosystem in some way. Hopefully we have a hundred percent of the ecosystem covered in this room.
+That is probably incredibly wishful thinking, but hopefully it's at least approaching that, and that's the goal. That does not mean that everybody in the ecosystem has a conceptual representative in this room, because we don't have a hundred percent. But the hope is that that is the case—that even if only one human in this room conceptually represents someone, that person has a voice by proxy. The consensus process ensures that majorities can't overrun the minority. That has of course resulted in spec designs that aren't ideal at times, but I think most of the time it has resulted in better specifications. That's true in general for language design, but I think especially in JavaScript, with web compatibility to be concerned with: a much higher priority than getting things shipped is not shipping the wrong things, in the sense of that quote in software engineering, "no is temporary and yes is forever". It is safer to say no, and iterate and think, than to say yes, because we can't walk it back. In the majority of scenarios, a company's product, or a software product that is installed with a versioning system, can conceptually walk things back. Node, for example, has major versions and breaks things and drops things; that doesn't mean they can actually remove stuff if enough people use it. But in JavaScript on the web, like, you know, the threshold is much lower for something to be unremovable. So I think we should acknowledge that even though all of the problems you describe are real, there are a lot of benefits. I was reading this quote the other night, actually—I think it's from the Navy: slow is smooth, smooth is fast. And I think consensus helps us go smoothly.
+
+MLS: Let me counter with a couple comments. I'm advocating for consensus. We don't have consensus; we have a single veto.
+
+JHD: Sure. Let me rephrase: unanimous consensus, hundred percent consensus.
+
+MLS: That is unanimity.
+
+JHD: Yes.
+
+MLS: As far as us being one of the smoother-operating committees, I would disagree, because—and you're probably aware—there are instances in the past where our single veto has been what I would call a code of conduct violation.
+
+JHD: Absolutely.
+
+MLS: Okay. And that's not acceptable.
+
+JHD: I agree.
+
+MLS: Okay. And then I go back to the point that if I want to block something, I should be able to find one other person present who would agree with me. Even if they're not from the same, you know, faction, or representing the same part of the ecosystem, they would agree with me on principle that, yeah, you're probably right.
+
+JHD: So I would agree with you, except that one of the problems you cited is that not everyone has the personality type where they want to stick their neck out and speak up. Those very people are going to be the ones that are not going to be standing up in solidarity with the otherwise lone veto. In fact, in practice, even in this room, which is arguably skewed towards people who will speak up, there are a number of times when I have been a lone veto and had three or four people privately tell me they supported what I was doing, but they just didn't want to speak up because there was no need. Maybe this would create the need for the second person to speak up; or maybe whatever inhibited them from showing solidarity in the first place would still present itself, and then a thing that should be blocked isn't.
+
+MLS: So I think it's harder to be the first veto, and it's easier to be the second, joining somebody.
+I would hope that would be the case. Maybe we could work on that—promote people to do that. I find it a little frustrating that you have had instances where you blocked something and people afterwards in the hallway said they supported what you were doing, rather than speaking up in solidarity.
+
+JHD: There's lots of frustration, including when I'm the one blocked by a lone veto.
+
+RPR: I'm really pleased to hear the level of agreement—JHD and MLS acknowledging the same kinds of problems. I think I can speak for the chair group in saying that we have seen those kinds of problems as well. But I also want to speak to JHD's point about the benefits of our process: in general, it does seem that the overall process we have today works quite well. The points at which the problems we identify become so problematic that they get escalated to the chair group come up roughly every, say, 12 to 18 months. So this is not every day, every meeting, every item. The point at which something rises to what you might deem a code of conduct violation—when something is so outrageous that people want intervention—I think that's even less frequent. So perhaps, Michael, what we're trying to discuss here is something that is –
+
+MLS: I'd say this: at this meeting, I don't think we had a single blocked decision that I can recall. Maybe we have. But I think we have been following consensus—what I call true consensus—this meeting.
+
+RPR: This meeting worked well. Even in cases where only one person said they're blocking, we have felt that was representative of a…
+
+MLS: Yeah.
+
+RPR: Thank you.
+
+SYG [via queue]: Can you speak more to "smooth", comparatively to other bodies?
+
+JHD: I'm not just talking about the feelings and sentiments of those in the room. I'm also talking about the quality of the APIs produced.
+
+SYG: I was wondering about, like, an occurrence or something—what do you mean by smooth?
+
+JHD: So I'm looking at the time spent beyond just the development and implementation of a feature, but also the adoption, education, and usage of a feature over a long period of time. And I think that JavaScript has a better track record with those things than other bodies that I have seen or experienced.
+
+SYG: What is another body like that?
+
+JHD: I don't want to be too specific about the things that I'm maligning, because I'm trying to be diplomatic. But, you know, the favourite example that I have already stated publicly in the past: there's a canPlayType API on the web that returns the string "probably", the string "maybe", or the empty string [https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/canPlayType]. I haven't heard anyone ever defend that API, and that's a tiny example which I am comfortable mentioning.
+
+SYG: By smooth, you're judging the quality of the output of a committee?
+
+JHD: Including that. I'm including the smoothness of uptake, usage, and understanding of it, and the sentiment of the community about it over time. In other words, I'm suggesting that even though the current process produces frustration and interpersonal tension in this room, and while I would welcome a better alternative, the trade-off is that the world suffers less in return for us suffering a little bit more.
+
+NRO: MLS already said this: it's true that it's difficult to block right now. If we required more than one blocker, then being the first person blocking would be much easier, with a second person supporting.
I’m not alone in doing that. And my other point is you said no is now and yes is forever. That’s a nice sentence but not true in practice based on how we block things. Very often it happens that proposals get blocked or at least slowed down because they’re missing some feature like next person judgment on what you use it for but the example is branch X and say this proposal need to be rewarded because I need the work. And left the proposal to go ahead and that’s a temporary yes and if you had it now it’s forever. I feel like half of the time we block things it’s for things that actually can be fixed in the future. + +PFC: I think JHD gave the example of being an objector to somebody and receiving messages privately later saying 'thank you for doing that, I agree, but I didn’t want to speak up.' I think when you publicly dissent or you veto something there’s a certain social cost that you pay. I’m not sure that the system we have currently makes that cost be paid by the right people. So I think it’s natural that you do pay a certain social cost. I think that’s correct. You shouldn’t just be able to veto things without any consequences all the time. But right now, let’s say if JHD vetoes something and other people would also like to veto it but don’t speak up, JHD pays the entire social cost and the others kind of get away with it for free. think a proposal like this, that would force those others to also speak up and share the cost, lowers it for everybody but also—like, I don’t see the problem that this would let things get through that shouldn’t get through. I just see it as the social cost would be borne in a fairer way, that’s my opinion. + +MLS: Let me add a comment to that. I think in the proposal is blocked say to the next stage and there’s a lone blocker, the champions are going to go to the blocker and say what do we need to do to change? With what PFC said, now you have more people that may have different concerns that the presenter or the champion is unaware of and now they would be possibly more aware of what they needed to do to modify the proposal moving forward. + +JSL: Yeah. I definitely think the proposal here is good. I would like a little bit more. A lot of times people say I don’t think this should move forward. That’s a statement of opinion. A lot of time we take that and interpret that as a block. I actually think we should make “no, I don’t think this should move forward” as a formal—more of a formal thing, right? You’re putting a motion on the table and then it has to be seconded by somebody. And that has to be something very explicitly stated I don’t think they should move forward. But even when the committee agrees to table this, the expectation from there is the champions will work with the folks that are blocking it and try to work out the solution and if they decide with the champions that there’s no path forward to resolve the block, then it’s considered the block. Then this thing is not moving forward. If they can work it out, the expectation is they will work together to figure it out. As the final stop gap, the chairs should be empowered and to be able to say, we have noted the objection, right, the committee seems to be, you know, not behind that objection, we’re going to move forward any way. There’s problems with every approach. + +MLS: Yes, there is. + +JSL: But I think if we formalize it better than just two, I think we will have a much better result. + +MLS: I agree. My biggest concern is I don’t like the current policy. 
+I think we should have something that requires a little more input from the committee to block things.
+
+JSL: I will stress this is not unique to this committee. We have this exact same problem in Node.js. If we can solve it here, fantastic—I will take it over there too.
+
+DE: Just to emphasize a problem that JSL mentioned: it's often ambiguous whether somebody is blocking. This really happens all the time, that someone says "I don't think something should move forward" and it's ambiguous whether it was blocked or whether the presenter voluntarily took it back. Even if the chairs intervene and participate in the discussion, it's ambiguous whether the chairs are mandating a procedural decision, or suggesting something that people voluntarily listen to. This allows us to have certain sorts of standing disagreements about what is actually going on procedurally in the committee. I really like JSL's idea of formalizing it some way or another and encouraging explicitness.
+
+MM: So I used the queue a bit as an outline to remind me of all the points I want to make; excuse me for that. I will combine them all. So, we are a social process as well as a well-thought-out process. It's important to understand—and I think this is borne out by my experience on this committee—that the systems that enable human cooperation are some of them formal rules and many of them social norms. As far as the de jure rules we're operating under, and the game theory that follows from the rules taken by themselves, you're absolutely right: this is unanimity. I think it's valuable that we don't label it unanimity, even though our overall documentation of how we work does state a rule that is game-theoretically equal to unanimity, because it sits in a context of norms with a different flavor. The word unanimity has a denotation; consensus has a denotation and a connotation. On the connotation, we're close to consensus. On the rule, it is absolutely unanimity. Specifically, when there's a lone dissenter—and I've been the lone dissenter on a number of occasions, as you know—there's a set of social norms that are very much felt by the lone dissenter, and I have often yielded, sometimes enabling the committee to make what are in retrospect obvious mistakes where I wish I had not yielded, like the override mistake. I was the sole dissenter there, and under social pressure I yielded, and I wish I had not. I have also been on the other side: I pressured WH to [?] SharedArrayBuffer. He was not moved; he was the lone dissenter. This was before we knew about Meltdown and Spectre. He had looked ahead. We have been saved over and over by him as a sole dissenter against something everyone else was in agreement on. I joined the committee during the worst political days of the committee, when essentially everyone on the committee—with Doug Crockford initially as the sole dissenter—wanted to move forward with ECMAScript 4, including all of the browsers. If we had operated under your rules, we would have accepted ECMAScript 4 and built on that, and JavaScript would be as useful to the web as ActionScript is. Obviously we have made mistakes on the other side that speak to your side of this thing.
+But the point I want to emphasize is that the thing that overcomes what seems to be the simple game theory of unanimity is the system of social pressures. What it amounts to is that a strongly felt dissent blocks, while a weakly felt dissent, given good-faith operation, is often overcome by social pressure, and the person yields.
+
+MLS: Do you want me to talk about that?
+
+MM: Yes.
+
+MLS: I agree with what you're saying. The thing is, with social pressure we have to take into account the different personalities that are involved in this social contract. And I would say that you have a strong willingness to speak up when you think something is wrong, or conversely, when you think it's right and others think it's wrong. But I think that we have a disparity among those who have initiative in a setting like this. I would put myself among the stronger ones—I'm willing to give my opinion, right or wrong, at times—so in the social contract we have to assume that there will be variability in the willingness to state positions.
+
+MM: Absolutely. Variability in the willingness to state a dissent, and variability in the willingness to hold to a sole-dissent position and block under social pressure to the contrary. And that variability has two sources. One is how deep the genuinely felt, good-faith technical objections in one's head are, whether one can articulate them or not; the other is to what degree the person is responsive to social pressure. And there's no way to separate those two.
+
+MLS: I agree.
+
+MM: Okay. So another part of the norms—not the rule, but the norms—that comes from being a sole dissenter, and I've seen this over and over again, is that it's kind of your responsibility to explain why you're objecting. And sometimes that can be hard to state, because objections are sometimes felt before they can be articulated, and still turn out to be valid; but there is a strongly felt social pressure to explain what the objection is. Because what you're trying to do is empower—and this is another thing that I think is really important—the problem-solving ability of the entire committee, and especially the champions you're objecting to, to figure out how to move forward by refactoring the proposal in a way that does address your objection. Because the objection is not to solving the problem that the proposal is trying to solve; I've never seen that. The objection is to how the proposal proposes to solve the problem. And over and over again, what happens when there is a sole dissenter who is able to explain why they're dissenting—not always, but in by far the majority of cases—is that the problem-solving process is engaged, and often the proposal is refactored, you know, revised, in such a way as to meet the objection, and the proposal is often better for it.
+
+MLS: So I would agree with you in the cases where the objection is dealt with. But—you would agree with me—there are cases where the objection stops the proposal.
+
+MM: Yes, absolutely.
+
+MLS: Even though the committee believes that the problem is a problem that does need to be solved; and I would stipulate that there are cases where we think that the proposal is aimed in the right direction, maybe not perfect.
+So I think we need to be careful about generously saying that this give-and-take is good in all cases, because sometimes –
+
+MM: So I think that's again addressed best through norms, not by changes to the rules: everybody involved, especially both the objector and the champions, should be reminded by the overall social system that the objection is an objection to the way in which the problem is solved. I don't think there's ever been a case where a lone dissenter's objection was to solving the problem at all, and the problem-solving dialogue should proceed from there. Sometimes it can't be resolved.
+
+MLS: I disagree—there are times in the past where it couldn't be resolved.
+
+MM: I won't debate whether that happens sometimes.
+
+USA: I have a point of order. There are around 7 minutes remaining and a lot of items on the queue.
+
+MM: Let me just make two more points. One is that the browser makers have a de facto veto anyway, and any rule like what you're proposing does not change the fact that each browser maker has a unilateral power to veto. I will just mention ShadowRealms and decorators—you know, if it were the case that all the browsers but one wanted to do something, and one browser maker was saying "no, we will not implement it", the committee would understand that it is worse than useless, it is counterproductive, to move forward to standardize it. So if the browser makers want to go off and have a collusion among themselves as to what they will implement, ignoring the wishes of the users of JavaScript, they're free to do that. We can't stop them. But they should stop pretending that they're participating in an open process. Under the rule we have, we have an open standards process that empowers JavaScript users to have, by the rules, a power similar to the de facto veto power that browser makers have.
+
+MLS: So I don't think this proposal makes that worse, right?
+
+MM: Yes. Yes, it makes it much worse. It disempowers the community compared to the browser makers.
+
+MLS: So the browser makers, if two delegates –
+
+MM: If one browser maker declares "we won't implement it no matter what the rules say", you cannot solve that problem. It's dead if one major browser maker says "we will not implement it."
+
+USA: Maybe go on with the queue. MLS, if you would like to ask for consensus by the end, we should probably also earmark some time for that.
+
+DE: I don't think that makes much sense.
+
+USA: I don't see why. Rob and Philip have responses on the queue, for instance, but there's –
+
+NRO: There are 13 items on the queue. We cannot reach consensus in five minutes on any of this.
+
+USA: You mean about that? No, I mean we have four minutes now. We can certainly not resolve this. Michael, what would you prefer? Have you finished with your comments?
+
+MLS: I hit the major ones. One more thing. This is something that I mentioned last time we discussed it, but it's worth reiterating, and it feeds into a point that JHD made: with any rule system, the first time you propose it, people think about how to game it—nine ways to game it come to mind. Given that any set of rules can be gamed, the real choice we have is: if the rule has a pathological outcome because it was gamed, does it fail safe or fail unsafe? Because no is temporary and yes is forever, the rule we have got is the only rule that fails safe. And now I'm finished.
+
+MM: Any chance of expanding the time box?
+
+USA: Good question. I think we are booked for today. But let me ask my co-chairs.
+Do you think that we could have –
+
+CDA: It would have to be after lunch.
+
+DE: We have the whole afternoon currently reserved for the break-out sessions that I proposed. I wonder if this could continue in a break-out session, or we could also make it an overflow item with the whole group? I would be happy with either one. I think this is an important topic to continue discussion on.
+
+JSL: We did tell the transcriptionist yesterday we are finishing roughly around noon.
+
+DE: We should 80%. I would propose a break-out session or a plenary continuation.
+
+MLS: I want the queue to be heard.
+
+DE: Could we do that for half an hour or an hour, eating out of the break-out sessions, so everyone can go through the queue items?
+
+USA: I think we should be able to do that. I think it's up to you, Michael, whether you would like it to be a break-out session alongside the other one. But I think you can talk to Rob in person, or to us online, and figure it out.
+
+CDA: Is it possible, in terms of helping make this decision—are the break-out sessions going to be limited to the in-person attendees?
+
+DE: No. We will have remote people attend the break-out sessions.
+
+NRO: There are four people on Matrix asking for this to be a whole-group topic rather than a break-out session topic.
+
+## Continuation: A unified vision for measure and decimal
+
+Presenter: Shane Carr (SFC)
+
+* proposals: [measure](https://github.com/tc39/proposal-measure/), [decimal](https://github.com/tc39/proposal-decimal/)
+* [slides](https://docs.google.com/presentation/d/1050DHlNOzcN-8LqJQ_6z8j-LryXgEqOcLfcVzkhJyEk/edit#slide=id.p)
+
+SFC: So I prepared these slides based on some of the feedback that we received when we brought this up earlier in the plenary; I reviewed them with the champion group and will be presenting them today. So let's go ahead and get started. First thing: this is a great time to make a mini announcement about a delta that we have made based on feedback, primarily from EAO and others, about the name of the type that was in the presentation yesterday, called Measure. The new name is Amount. Why? It works for both units and currencies, and it strongly suggests something approachable and lightweight. I will be using Amount instead of Measure for the rest of the presentation. One thing that I feel we missed a little bit in yesterday's presentation is that we weren't aligned on the scope that we're proposing for the type called Amount. I want to talk a little bit about this. I wasn't prepared to answer at that time; I have prepared the answer now. Why do we need Amount? Why is it motivated? Why is it important to have?
+
+SFC: I went ahead and prepared a slide to summarize some of these key points. So, one: it represents a thing that many developers frequently have, which is a number paired with a unit. By representing this, we can offer useful operations on it; the better the data model, the better we can do. The second is that it fixes a certain specific problem that we have: if you take this thing and use it in multiple different formatters, they all need to know about the identity and the nature of that thing in order to do the correct behavior. I use the one-dot-zero problem all the time when I give these presentations; the Amount proposal addresses that problem by feeding the same typed value to plural rules, for example. Three is that it is a prerequisite for the messageformat proposal in the medium term. There's some concern we're working through.
+
+SFC: But the messageformat specification recommends having this type in the data model, because when values get shipped around and then formatted, it's a very common source of bugs: the message will say this thing should be displayed in currency USD, but all of a sudden, when you go to some other country where it's some other currency, the number gets displayed with the wrong currency and bad things happen. So messageformat recommends this in the data model. And the fourth is the smart units proposal. I annotated that "longer term". We don't have full agreement on whether this is going to go ahead and land, but I wanted to put it here. It is also one of the points of motivation for having a separate type, because it means that the smart units proposal will be much more narrow in scope. This reached Stage 1 at Tokyo TC39 [2024-10]; we agreed as a committee that this was a problem space worth exploring. So hopefully that answers some of the questions about Amount's motivation.
+
+SFC: Another thing that we didn't really discuss yesterday in the presentation was: what is Amount, and what does it actually look like? I drew a strawperson example here. On the left side is what you can currently do without an Amount. If you have a value and a currency—let's say it comes from some external source—you might have something that looks like this, a JSON object from the server or something like that. And then you plug it into Intl.NumberFormat, and this is what you have to do: you split it apart. The currency goes here and the value goes here. If you have precision, that also has to go here. And then you get the formatted value out, and hopefully it works. This is error-prone; I have evidence it's error-prone. Hopefully that's pretty obvious to people in this room. On the right is with Amount. You ingest the amount, which comes from some external source; now you have an actual Amount object that follows the protocol, and then you can pass it into Intl.NumberFormat directly. There's no possibility that currencies and values and units and things get out of sync with each other. So this is what I mean when I talk about Amount.
+
+SFC: I also want to talk a lot about scope. I feel like there was a lot of misunderstanding yesterday about the scope of what I mean when I say Amount. The scope that I'm talking about is a data type that represents the following: a numeric value, the precision of the value, and the dimension, which could be a currency or a unit of measure. And it's being proposed as an opaque type. The exact representations of the numeric value, the precision, and the currency or unit are questions that the champions will answer. What this committee needs to know is what is in the data model and the nature of the data model; the exact way these are represented is a discussion we will have in future meetings. Some of the functions it can have are definitely in scope: it should have a `from`, or some type of constructor, to be able to build it; it should also have a toLocaleString, to use it for formatting. Maybe in scope: ways to get the value out, maybe an equals function, and add/subtract—maybe, maybe not; I imagine add/subtract might actually not make it in, and serialization may or may not. You have to be able to build it, use it, and format it, getting a localized string out of it.
+
+SFC: What I'm definitely not proposing is unit conversion. Some were saying that Amount was way too big in scope, so I'm proposing it without unit conversion. This is a natural place where unit conversion could be added in the future.
I’m not proposing unit conversion at this point in time. Another question that was raised that I wanted to discuss a little bit about polymorphic amount versus decimal amount. What I mean here is that an amount could be a type base that makes an arbitrary numeric type and number and decimal and BigInt and carries it with precision and dimension. It could be a type that always uses decimal because in order to interact more nicely with the decimal ecosystem. So my proposal for here is in order to basically not make this observable at this point in time and opaque enough to restrict to decimal semantics and have this flexibility moving forward. So basically try points on this question. Now I want to talk about decimal/amount harmony. + +SFC: This is another question that I don’t think got adequately addressed yesterday. I really wanted to have more time to discuss it. Now I have my time to discuss it about why opportunity space that we have if we think about these proposals together with each other in harmony, what are some opportunities we have that we don’t have if we think about them in silos? So if we think about amount by itself as a silo, I think amount is motivated. It still solves problems by itself. This is the thing you might have. You might have constructor called dot from using the Temporal example. That’s a thing we could discuss later. But you have a dot from function and might take a value with thing with the significant digits like this and then you can use your NumberFormat for it and it will work. That’s fine. In fact, this constructor could work in the harmony mode. Again, harmony you can do on the right and still have the decimal and annotate it with things like precision and annotate it then with your dimension and then what you get the other side is amount that you can use for formatting. It’s very explicit what you’re adding to the data model and when. + +SFC: On the right side is explicit. The first step is you project your number into decimal space and then give it the precision and dimension. I think that Temporal has given us a really great example of how this exact pattern can be quite successful at building very, very clean easy to follow and easy to debug programs by having for example you saw with the date and time and TimeZone and zone to date time. And that sort of thing work well. I think that’s an opportunity we have by thinking about Amount and decimal in harmony. It Tuesdays unified framework for JS to deal with numbers. This is a great opportunity to give developers of the language basically in the same way that Temporal solves or radically improves interaction with dates and times, this is a great way for them to improve the options with numbers. And also puts i18n front and centre that I care most deeply about. By putting the data types front and centre it means two locale strings and do localization out of the gate and not developers to split amounts and different places and so on and so forth and puts it right and centre so the right thing is the right thing. + +SFC: What I want to talk about the next and open up the queue to discuss today is about the motivation for these proposals. I say that if we feel that decimal is motivated, if we also feel that amount is motivated, there’s no reason not to make them work nicely together. This is my position. I think this is pretty—this seems pretty obvious to me that if we think both proposals are individually motivated, we should make them work nicely together. 
+
+SFC: This is another page of notes, and I can come back to it if people have questions. The harmony proposal could introduce a namespace—I'm not proposing this one way or the other. An Intl namespace? Maybe it could. That is a discussion we can have. Someone on that side of the table raised a question about rationals yesterday. We don't currently have a plan to support that; because of prior art and other things, I'd rather embrace decimal as the data model and its semantics. There's an intermediate type on the previous slide; I don't think it's a good use of plenary time to discuss it right now, but it will definitely be something that comes up in the champions meeting.
+
+SFC: The primary way I wanted to spend the remainder of the time box together is to answer these two key questions. Is Decimal ready for Stage 2? Is the Amount proposed in this slide deck ready for Stage 2? We have spec text for Decimal; the decimal champions, who have worked together for the last year, have done a great job producing the spec text, and it's quite sound and solid. Amount does not yet have spec text; I hope to change that, though maybe not by the next time we meet. But I want to get answers to these questions: what are the remaining concerns we have about the motivations of these two proposals individually? And if we can agree that these are both motivated, then we should look at how we can advance them and make them work nicely together. That's what I would like to discuss today.
+
+KG: Given the example of using amounts for formatting with Intl.NumberFormat, can you talk more about what Amount does, what problem it's solving?
+
+SFC: This is the problem it solves—it solves the problem in the motivation. I can go back to this slide. Number 2 is an actual, real, concrete problem that it solves. In order to do things like reason out the plural form of the amount, you need to be able to know the entire data model of the amount, including the number, the precision, and the unit.
+
+KG: I'm not convinced that this warrants a new type. I feel like it would be relatively straightforward to just make NumberFormat accept an object that has a type and an amount property, and not put anything in the language except that change to NumberFormat.
+
+SFC: You're advocating for a protocol-only approach and not a type approach?
+
+KG: Yes.
+
+EAO: Noting that one of the use cases that Decimal in particular is solving is that currently we have decimal libraries in user space, and when they need to communicate with each other through JavaScript, using something like a string to represent the number can be problematic, for example because of concatenation. Having something like Amount would also support this sort of thing, effectively providing a JavaScript way to represent a numeric value without necessarily having a way to do anything with the value—which is of course what Decimal provides. But the ability to represent a numeric value that is not representable by Number is a thing that Amount provides.
+
+JHD: I think there's intrinsic value in having an authoritative thing that libraries and user code can use to interoperate with. However, I don't think that on its own should be enough to motivate any addition to the language. I think that should be a sweet bonus that we get with something else.
Otherwise, there’s hundreds and hundreds and hundreds of things we should add to the language. Pretty much any time two widely used libraries share an object—you know, share a data structure, sure, let’s add a new global, a new class and type. I’m slippery sloping it a bit. But I don’t think that needs to be sufficient and I can wait until my other queue item before I say more about it. + +SFC: I will reply to that thread which is this that I don’t buy the slippery slope. There is a problem in the i18n space and there’s an opportunity to solve the problem. I think if there’s cases of common data types with the i18n value those should be representable in the language. I think this is one of the cases. There is a limited number of the cases. Temporal answers a very large percentage of them. This is one of the remaining ones that is not answered by the language in terms of objects that are able to represent things that can be localized. + +NRO: This is not just communication that elaborates but one trying to communicate is built in on the language. That makes the difference of trying to communicate with each other. Already we have part of the official way that it should be done. + +MM: Can you go back to the first slide of decimal and harmony. I missed Tuesday morning. My apologies. I want to understand the withSignificantDigits, the thing that that produces is not simply a decimal. It’s a new type which is decimal together with some kind of precision; is that correct? + +SFC: Yes. I annotated that and calls it for the purpose of the slide decimal with precision. I note also two slides, three slides later, the last bullet point and the exact semantics are not decided whether it should exist or be named. There was some disagreement even among the champions about this. I don’t think this is a Stage 2 blocking concern but definitely something we need to discuss. + +MM: So the thing that my question is, what is it about the notion of precision that’s introduced by the with significant digits that is in some way relative to decimal but not to regular quote unquote numbers? + +SFC: The notion of precision could be used for other numeric types. We had an opportunity. I mean, this is me speaking personally. We have an opportunity with decimal given that decimal the IEEE decimal gives us a way to encode precision in the data model that sets decimal apart from the opportunities that we would have with other numeric types. + +MM: So the notion that’s built into IEEE decimal itself is non-precision, it’s not significant digits, it’s not digits after the decimal point, it’s not error bars, it is number of trailing zeros, only zeros. + +SFC: That’s correct. + +MM: As far as I can tell, I know of no use cases for which that is useful. + +SFC: That is useful. + +MM: Instead of 1.0 if the actual numeric valueOf 1.11111, you would render it out all to all available—as deep as was needed to correspond to the underlying precision of the finest precision of the underline representation? + +SFC: I would like to hear Nicolo’s response. + +NRO: Whether you store number of digits or number of significant digits or zeros, regardless that the model represents, they are all equivalent. You can convert them with the—Regardless of whether you store the significant number of digits or number of digits after the dot or the number of zeros, they’re all equivalent representation of the same concept. You can convert between them just based on whatever you’re storing and the valueOf the number. 
+
+MM: Are you suggesting that our decimal precision is, for example, the number of significant digits or the number of digits after the decimal point, and that we're enabling an implementation trick: we take what IEEE-754 considers to be the number of trailing zeros, and we reinterpret that aspect of the underlying representation not to mean the number of trailing zeros but instead to mean the number of significant digits, or something?
+
+NRO: Yes—where IEEE says there are three trailing zeros here, we can say there are six significant digits, for example.
+
+MM: And then the rendering that we would do is to extract that from the underlying decimal representation and then interpret it as a number of significant digits?
+
+NRO: Yes.
+
+MM: And then render it that way, rather than rendering it according to the IEEE?
+
+NRO: Yes.
+
+MM: That's interesting. That's the first justification for this that I've heard that makes sense to me. Thank you.
+
+JHD: So the question I pose is: what functionality does Amount provide beyond being a built-in container for multiple somewhat related values? While waiting, it occurred to me that perhaps an alternative name for this, which would simultaneously convey my skepticism and its semantics, is "Intl.NumberFormat options bag factory". It seems like that's all this is: a class for the purposes of wrapping an options bag to pass to NumberFormat. That doesn't feel sufficient to me. Does it do more stuff that I'm missing, besides that and providing an interop point?
+
+[Slide: https://docs.google.com/presentation/d/1050DHlNOzcN-8LqJQ_6z8j-LryXgEqOcLfcVzkhJyEk/edit#slide=id.g3316773b416_0_5]
+
+SFC: I'm putting up this scope slide because what I would like Amount to become is this thing that also does these other things that are listed under "maybe" and "future". I removed those from the proposal for now in order to seek consensus, given there was skepticism about things like unit conversion being in there. It seems like removing unit conversion makes some delegates ask: why do we need this anyway? And it's an interesting point; I'm glad we're discussing it. There's a really good opportunity here to have things like serialization of these values; I think it's quite compelling to have equality of these values. It's not just a NumberFormat factory—it's all of Intl: any object that can operate on these, not just NumberFormat.
+
+USA: Quick point of order. Shane, there are around five minutes left.
+
+SFC: We got started about two minutes late. I would appreciate the extra two minutes if possible.
+
+USA: Okay.
+
+DLM: I'd just like to second JHD's comment. I agree that what's being proposed is a potential solution for problems with Intl.NumberFormat and messageformat, but I don't think it's the only solution. I would encourage us to investigate other options as well that might not be quite as heavy-handed.
+
+SFC: Are there any ideas that you have, any specific thing that would be less heavy-handed?
+
+DLM: I think, as JHD mentioned, an options bag would be one solution for the Intl.NumberFormat use case.
+
+EAO: This is a little bit more meta. Given that the proposal formally only has one champion, who is on leave, and it's being worked forward within a larger group, I was thinking it might help a bit with the shaping of this if at least Shane, and possibly myself, could be eventually recognized as champions of this proposal. My interest here is indeed the "opaque amount" level of defining this, and further work from there ought to follow on in separate proposals.
+ +NRO: I think it’s very—with BAN away for a while, we should have different champions. Like to have EOA and Shane and he will have more time. Have the proposal and talk with people about it. I would like it straight from you Shane. + +SFC: I think it would be great for—if Jesse had more time. I definitely see myself as an adviser and put together slides. Point of order. We need note takers. + +DE: This is good. I’m glad we’re getting more support for this proposal. Just to note in general, you don’t need to ask for anybody’s approval for add or remove and champion group can do this. Happy to have the people working on this. + +MF: So it seems like amount is supposed to be covering this really broad association of a unit with some numeric value. Decimal is—I’m supportive of that proposal for the scope it’s supposed to address, but it is not so general that all values with units would be representable in decimal. KG brought up yesterday that it is common for non decimal rational values to have units and be displayed in that way. And you simply cannot store a third as a decimal of 0.3 repeating and have that be the same thing. These should be pursued separately and motivated separately even if they would be asked to work nicely with each other. But I don’t think that we should be limiting Amount to just like Decimals in this way. + +USA: That was your queue. + +SFC: I just want to give anyone else the opportunity to jump on the queue. There were two questions I was asking and one was about decimal and the other on amount. And we focused on the amount that was the newer topic that makes sense if you spend a lot of time there. I want to give anyone else the opportunity to jump on the queue for that. + +SFC: Regarding while—if people are deciding to get on the queue, responding to the point about Rational, I hear you, I think a letter design is able to have Rational support here. I also think that the problem is not—doesn’t have a lot of prior art. And that’s not what I’m proposing at this time. We can talk more about that offline. + +JSL: The way motivation for amount is worded here in the problems to solve, I think decimal definitely feels very well motivated for language amount. To me right now is way more appropriate motivated for Intl and discussions there. Just kind of where I’m feeling right now based on the comment. + +SFC: This was useful discussion. I think we’re just about out of time. And the champions will definitely continue to, you know, explore can this work with the protocol only approach and then consider it. Okay. + +USA: Thanks Shane and everybody else for participating in the discussion. We cut it very close. That’s great. + +### Speaker's Summary of Key Points + +* Some concerns about Amount being motivated if its only use case is Intl +* Requests to explore a protocol-based approach +* Question involving the representation of precision +* No delegate raised concerns about Decimal motivation + +## Continuation: `Number.isSafeNumeric` + +Presenter: ZiJian Liu (LIU) + +* [proposal](https://github.com/Lxxyx/proposal-number-is-safe-numeric/issues/4) +* [slides](https://docs.google.com/presentation/d/1Noxi5L0jnikYce1h7X67FnjMUbkBQAcMDNafkM7bF4A/edit) + +LIU: Yes. I’m going to start. Here is the problem statement for `Number.isSafeNumeric`. Just before last presentation, I received a lot of feedback and thanks to everyone. Here is a progress statement. The first slide has changes from the last presentation. Here are five changes we made. 
+The first is to clarify the motivation and the real problem. The second is to remove the strict format rules by default and align with the ECMAScript StringNumericLiteral format. The third is to remove the `Number.MAX_SAFE_INTEGER` limit for value safety. The fourth is to add identification of unsafe numeric strings. And for more questions about the changes and feedback, see the GitHub issues.
+
+LIU: Here I will start with the motivation. We have now focused on the real motivation: string-to-number conversion may lose the string's original precision and integrity. Most developers are not aware of this problem, even though it shows up on Stack Overflow and can appear everywhere; I think this is a potential risk for apps. And third, there is no reliable method to detect precision loss: just comparing with the string value is affected by other problems. So we think we should provide a built-in method to help developers catch this problem earlier and choose the right parsing method. And these are the problems we are facing. The first is cross-system value mismatch. Alibaba built a mobile API gateway called MTOP; calling an HTTP API requires its JS SDK, and requests go through it to the back ends. We have 100,000+ APIs, 200,000 backend servers, and more than 1 billion calls per day. The problem is that Java has the Long type, whose numeric range may exceed what a JavaScript Number can represent. So we have to convert all numbers from the back end to strings in the gateway: the back end has a numeric value, but in the gateway we have to transform it into a string because of this. Then in JavaScript, every developer needs to do the string-to-number conversion manually, and bugs produced by precision loss happen every day. Every day I receive many new questions just about this—a wrong value, or an erroneous number sent to the back end, or whatever happens. So it's a problem we're facing every day.
+
+LIU: And the second is that sheets use decimal.js everywhere. DingTalk sheets—you can think of them like Google Sheets—allow users to create tables, with numeric values stored as strings. When they're displayed to the user, or used in subsequent operations like formula calculations, the engineering team needs to do string-to-number conversion. Because string-to-number conversion may lose precision, the DingTalk sheets engineering team has to use decimal.js everywhere. Just for viewing a table, decimal.js adds extra JavaScript bundle size and slows down first-screen performance, because we must load decimal.js first. So if a `Number.isSafeNumeric` method existed, then in many cases decimal.js would be optional and could be loaded dynamically.
+
+LIU: The definition of `Number.isSafeNumeric` is now updated to the ECMAScript StringNumericLiteral format: "123", and strings with leading decimal points or trailing decimal points, are accepted; null, undefined, and some other formats are invalid. So I think this makes it easier to not write duplicate validation code.
+
+LIU: The next is value safety. The update validates that the real-number value of the numeric string retains its original precision and integrity after being converted to a JavaScript Number. Here I just list some examples: "123", or numbers above or below the max safe integer, convert to a JavaScript Number keeping their original precision, as do some floating-point numbers. And there are examples where, when I try to convert the string to a Number, the numeric value is changed—these are the invalid cases.
+
+LIU: And I just updated the identification of how to define "unsafe".
+
+LIU: There is also something waiting for discussion: a better name for `Number.isSafeNumeric`? Currently we call it `Number.isSafeNumeric`, but we realize this behavior might be expressed with a better name, and some people may have a better new name for the same behavior, for example one referring to how the input survives a double parse. This is waiting for discussion. That’s all. Any questions?
+
+USA: There is a long queue, and unfortunately not a whole lot of time left. But then, I don’t know, maybe we can do a continuation of a continuation. Anyway, let’s start with the queue in the meantime. First there is NRO.
+
+NRO: Thanks for presenting this again. I find the motivation clear now. I believe the motivation is not actually about how numbers are represented as floats, but just about whether a string containing a number still round-trips and means the same thing to humans after going through a float, which I believe also probably answers KG’s question, which was specifically about how floats are internally represented. I have a question for you: do you think that if we had Decimal as a built-in in the language, this proposal would still be useful? To be clear, I don’t think we should block any proposal going for Stage 1 based on another Stage 1 proposal. But if Decimal proceeds, do you feel like this proposal would still be motivated?
+
+LIU: Yes. I think it is still useful even with Decimal, because `Number.isSafeNumeric` is for validation: when it returns false, it means you should choose a better parsing method, maybe BigInt, maybe decimal.js or the proposed Decimal. The Decimal proposal can solve almost all of the questions there, but a simple validation method that makes it easy to check whether a numeric string is safe is still necessary.
+
+NRO: My question was because I was thinking you would just not use floats; you would always use decimals if the numbers are not floats. I have a second topic in the queue, for the committee. I heard yesterday some interest from delegates in a presentation on how floats work. When we discuss this topic, we find we talk past each other because we have different understandings of how floats work. I’m not volunteering for this, but I strongly encourage someone to volunteer to prepare this presentation.
+
+KG: So in your examples, you didn’t include any strings which represent the exact decimal value of a float. For example, you have on the left 123.5678. The exact decimal value of that JavaScript number is some 40- or 50-digit abomination. If I wrote out that 50-digit number, 123.5678000079 or whatever it is, would that string be accepted?
+
+LIU: I understand your question. You mean if a numeric string contains more than 40 or 50 digits, can it be accepted?
+
+KG: Not just a general string, but if the string is in fact the exact decimal representation of a JavaScript number (not the representation that you would see when you call toString, but the exact decimal representation of the floating-point number that JavaScript actually has internally), should that string be accepted?
+
+LIU: I think –
+
+NRO: I think I have an answer, and I think the answer is that the string should be rejected, because the default conversion of the number back to a string does not give you something that a human would read and say, okay, this is exactly the specific float value that was represented in binary format.
+
+KG: So the point is not just that when you parse it to a number, it still represents the same value, but that when you parse it to a number and then serialize the number back to a string, the resulting string represents the same value?
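+
+To illustrate the case KG is raising (a hedged example; `123.5678` stands in for any short decimal literal):
+
+```js
+// toFixed rounds the exact binary value, so with enough digits it prints the
+// exact decimal expansion of the nearest double, padded with trailing zeros.
+const exact = (123.5678).toFixed(50);
+
+console.log(Number(exact) === 123.5678); // true: parses to the very same float
+console.log(String(Number(exact)));      // "123.5678": serializes to the short
+                                         // string, not back to `exact`
+```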
+
+SFC: Next on the queue. I discussed this a bit to pin down a little more exactly what the formula is. I made an issue with a suggestion on the proposal repo, https://github.com/Lxxyx/proposal-number-is-safe-numeric/issues/3, that you can look at offline.
+
+KG: Can you put a link in the Matrix?
+
+SFC: I can put it in the Matrix.
+
+PFC: Thank you very much for coming up with the problem statement and the illustrated use cases. I found that very clear. I support Stage 1. I had some suggestions, but you can skip the rest of my topic; I will post those on the issue tracker.
+
+SYG: So the spreadsheet use case seems strange to me, in particular the idea that if you can represent an input string as a float, then you use that as the representation. It seems to imply that if your initial input happens to be representable as a float64, then you are opting into IEEE arithmetic and operating on the Number, and if it is not representable, you are opting into decimal arithmetic. Those are different worlds, and it seems weird to me to make that decision based on the initial input string. Like, why do that?
+
+LIU: Because last year, when we proposed Decimal (that was my first time participating in TC39), the problem I brought from DingTalk Sheets was that I wanted to use Decimal to solve the decimal.js problem. After one year, they found something that can be solved more easily: when you have a numeric string, it may be handled as a decimal or as anything else, and if we can handle some simple operations by just using the Number type, plain JavaScript types, without relying on decimal.js, that’s a benefit for the whole system. And when the proposed Decimal arrives, we can write code using it and ship less JavaScript to the client side. I mean, it’s progressive: you always choose the technology that helps you run better code.
+
+SYG: What I’m saying is: if you know the representation is safe, so what? `Number.isSafeNumeric` doesn’t say anything about the operations, the arithmetic and other formulas you want to apply to the number later. You could still accumulate errors if you keep doing IEEE 754 arithmetic. It seems like a different kind of question: why did the engineering team decide that it’s not safe everywhere, right? It doesn’t give you the safety property it seems like you actually want.
+
+LIU: The engineering team said that if the `Number.isSafeNumeric` method exists, it solves their problem. For many cases, let’s say read-only state, just reading a table, or showing something static, decimal.js is optional; and for formula calculations or cases with precision loss, they can load decimal.js dynamically when they need it.
+
+SYG: How do they know, for particular operations, whether they need to load decimal.js or not?
+
+LIU: Like any spreadsheet, it has a definition of the precision loss problem: if a value needs more than 15 significant digits in a double, that is called precision loss. In those cases, with numbers of more than 15 significant digits, they choose to use decimal.js, and they use it even for simple division.
+
+SYG: I will drop it there. I find it unconvincing that the safety stops just at the parsing; there is no kind of transitive safety property. It seems like the wrong kind of architecture, and I would not recommend `Number.isSafeNumeric` for this use case. For storing and serializing JavaScript numbers I would recommend [?], but in this case I find it strange.
+
+LIU: This case is just for people who do not want to load an extra library, and who don’t want to display wrong values. So if this method exists, it helps a lot. But if you mean accumulated errors, those must be solved by the proposed Decimal or by some more complete library. This is the engineering feedback.
+
+KKL: Thank you to SYG for drilling in on that. In particular, I agree that if the range of expressible values can only be captured by decimal, then you do not really have a choice of whether to use decimal, and otherwise I don’t think it is sound engineering. From my experience, Uber has a JavaScript API gateway that receives traffic and disseminates it to Java, Go, and Python services, so I’m extremely sympathetic, from my experience, to the problem that you’re having. If I were to try to capture my own understanding of what the problem statement is in that domain, it would be that JavaScript, because it does not have numeric types identical to these other languages, makes it necessary in many cases to resort to using a string to capture, for example, a 64-bit integer, datetime stamps and nanosecond-resolution timestamps, and of course decimal as well. But I would say that a solution to this problem would be of the form of JavaScript APIs for recognizing whether a string can be safely captured in a corresponding JavaScript type, or returned to the string format for that type. I would expect a solution in that space to be not a single `Number.isSafeNumeric` method but a range of methods pertaining to specific numeric domains: strings that capture int64, strings that capture decimal, but not strings that capture float64, since JSON can handle that particular case fine. So my hypothesis is that the problem statement is that we need to find a way to improve JavaScript’s ability to recognize value ranges like int64 and decimal that don’t have native representations, so that applications have clearer APIs for interacting with those values without loss of precision when translating them in and out of local representations. And I submit to you that that might be something that we could make progress on in Stage 1.
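+
+A hypothetical sketch of one member of the API family KKL describes (the name `isInt64String` and its placement are invented here for illustration):
+
+```js
+// Validates that a string is a decimal integer that fits in a signed 64-bit
+// range: the kind of per-domain check KKL suggests instead of one catch-all.
+function isInt64String(s) {
+  if (!/^-?\d+$/.test(s)) return false; // integers only, no exponent/decimal
+  const n = BigInt(s);
+  return n >= -(2n ** 63n) && n < 2n ** 63n;
+}
+
+isInt64String("9223372036854775807"); // true:  max int64
+isInt64String("9223372036854775808"); // false: overflows int64
+```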
+
+LIU: Yes, thank you. I had already considered this for the proposal. I think maybe a group of methods could help, but I chose one method because I did not know whether a set of methods would be too much, or whether we should just use this one. So I brought just one method. Thank you for the feedback; I know what to do next.
+
+USA: Next we have a response from SYG, where he says he finds Kris’s serialization use case compelling. That was the entire queue. Would you like to ask for consensus again, more formally? Let’s give it a minute. I think we already heard a few folks mention that they were happy to support Stage 1. But just to be clear, let’s –
+
+GCL: I do not feel comfortable with Stage 1 at this time. It seems like there are still a lot of unanswered questions about what the motivation here is. I think what Kris suggested is interesting, but it is a fundamentally different proposal. And there’s also an active issue on this proposal about the motivation that is still going, and I would like to see that go somewhere before we move further with this.
+
+USA: I’m sorry, who is that? Could you add yourself to the queue?
+
+GCL: This is GCL.
+
+USA: Apologies. On the queue, we have Shane, then.
+
+SFC: I still think this is strongly motivated for Stage 1, and I think that the presentation illustrates some of those points. Being able to reason about what is safe to serialize back and forth between the different data types is definitely a problem I have experienced, and a problem that I have seen others struggle with. The fact that people don’t necessarily agree on what a safe number is, and ask “please explain more”, means it is a complicated problem. The problem space is fairly clear: maybe the language needs some mechanism available in order to make this determination, and the language currently doesn’t have such a mechanism; if it needs one, we can support Stage 1. I definitely think the problem space is motivated.
+
+USA: Next on the queue, we have a response from Kevin.
+
+KG: Shane, if you think it is motivated, can you state what the problem is? I am still struggling to understand what it is.
+
+SFC: Absolutely. You have a numeric-like thing that is represented as a string, and you want to take the Number representation of that numeric-like thing without losing any precision from the string, and to determine whether it is safe to do that, given that the space of values a Number can represent is smaller than the space a string can represent.
+
+KG: What does it mean to lose precision?
+
+SFC: The value, as projected in the decimal space, changes across the operation.
+
+KG: So, rejecting 0.1?
+
+SFC: 0.1 would be retained, because projecting the number back into the string space retains the original value of 0.1.
+
+KG: So it’s not whether representing it as the number loses precision, but whether representing the resulting number as a string loses precision?
+
+SFC: Mostly, yeah. I wrote in an issue about the exact, more formal definition of that, but yes.
+
+KG: Okay. I’m okay going forward with the problem statement of “I want to know if the string can preserve its mathematical value round-tripping through the number[?]”. That seems like a reasonable problem statement.
+
+MF: That description right there is the first time that I understood a problem statement for this proposal. It’s possible that that would make me okay with going to Stage 1 for it. I would still like to see why it’s useful to know that the mathematical value doesn’t change in a string representation of a float that’s derived from the string. But if we could do that, then I would be okay with Stage 1. Sorry that this was weird, but my opinion is changing on the spot.
+
+SYG: I’m unconvinced for Stage 1 at this time. I think, if we zoom out, we have moved pretty far from the initial motivation in LIU’s presentation yesterday, which was about validation. And I feel more unconvinced because one of the motivations here is this unsound architecture for choosing between doing arithmetic and numerical operations on decimals or on floats. So I want us to have a tighter formulation here. I think as a committee we have back-formed something that we can understand and that sounds reasonable enough to convince ourselves that we can go to Stage 1, but I’m not at all convinced that that is the problem LIU is trying to solve, and I don’t want us to explore a problem that they don’t want to solve and hand them a different thing to solve. That is what I’m worried about with the spreadsheet use case.
+
+MLS: I have similar concerns. The motivating example here was input validation, and it’s not going to help beyond that: you input it correctly, but it doesn’t help with further calculations. Kris did talk about being able to send data between applications, and I think there could be a use for that. I think we could go to Stage 1, but there needs to be a lot more work as to why this should be.
+
+NRO: I think the motivation is still not clear, and I don’t think it changed since yesterday; it’s exactly the example with mathematical values from the slides yesterday. Maybe it would help the committee to have more examples of how to use this: actual code examples from real software, showing what you would do if the check passes versus if it fails, so you leave knowing what each branch would do. That would help with communication on this proposal. But I’m finding it motivated enough for Stage 1.
+
+SFC: To add to the question of why you would want to do this: the simple answer is that f64s are, in general, more compact than strings. Often, when you’re storing into a database or sending over the wire, you want to send something as an f64 because it is a more compact representation. You want to be able to verify that the decimal value in your string is able to be round-tripped through the compact floating-point form. I think that is why you would run the operation; the operation is sound, some problems need it, and that’s why the operation matters.
+
+SFC: I think I’m next on the queue. Someone asked earlier, doesn’t Decimal solve this? I think Decimal does maybe solve it. Decimal still has limited precision, but it could be considered a better vehicle than f64 when you’re trying to serialize these strings to a numeric type; you might want to reach for Decimal instead. I think there’s still room to explore how this problem could be solved in a world with Decimal. I still think the problem space is motivated enough for an exploration phase, for Stage 1.
+
+MF: I’m now starting to understand that this proposal is more about the representation of a float as a string shown to a user. It seems like this proposal will then bring into scope some of the looseness we currently have about that representation. Assuming that we can fully define that space, that should be okay. It is also a bit weird that we sometimes represent floats not in decimal notation but in scientific notation. That’s an arbitrary decision, because we chose a certain number of digits that we thought would be okay 30 years ago or whatever, and that really has nothing to do with this. That’s kind of a bit weird, and I think it will be a stumbling point for this proposal. That would be stuff that could be investigated during Stage 1. At the moment I’m not opposed to Stage 1.
+
+USA: Oh, you aren’t. I believe there are still people who are opposed to Stage 1? Let’s clarify that, to see if there’s any path forward for Stage 1 in this meeting.
+
+USA: Yes, GCL, you’re on the queue.
+
+GCL: Sort of like MLS, I have heard some alternative problem statements that make a lot more sense to me than what has been presented so far. If the champions were to iterate on those before coming back, and make that what the Stage 1 consensus is asking for, I think I could see that being acceptable.
+
+MLS: I’m not going to block Stage 1, but I think the motivation here is fairly thin, and I think there are some issues with what this API wants to promise. The bar for Stage 2 is going to be much higher; unless there’s significant change, I don’t see it advancing to Stage 2.
+
+SYG: I’m still uncomfortable with Stage 1. Like I said previously, I find KKL’s formulation of the problem statement clear and compelling, and I’m happy to explore that. But I don’t want us to give Stage 1 to a proposal based on a problem statement that we came up with for the champion. If we independently reach that point, that feels like a different proposal to me; we should do that, and go through Stage 1 for that proposal, instead of saying, “Oh, actually this proposal’s problem statement is this other thing,” and then advancing this one to Stage 1.
+
+USA: I see. So to be clear, SYG, would you withhold consensus for Stage 1 at this moment?
+
+SYG: I would.
+
+USA: Okay. All right then, we don’t have consensus for stage advancement. For the next time this comes to committee, I would implore the champions to engage with everyone that participated today, and others in the committee. You heard a lot of statements of support, so I think this could go to Stage 1 at a later meeting. Thank you, LIU.
+
+CDA: I was on the queue with a quick reply. I just wanted to state, for LIU, that it sounds like there is a path: folks are uncomfortable because they’re not seeing a unified vision of what the problem statement is, so if you could nail that down between now and next plenary, there’s potentially a path forward for this to advance to Stage 1.
+
+LIU: Thank you, everyone.
+
+USA: Thank you. Would you like to go to the notes and add a summary of the discussion that happened earlier?
+
+LIU: Yes, thank you.
+
+RPR: Specifically, maybe GCL and MLS might be able to contribute to the consensus summary.
+
+### Speaker's Summary of Key Points
+
+* Clarified the motivation and real problem
+* Upgraded the value safety definition
+
+### Conclusion
+
+* Consensus for Stage 1 was withheld
+
+## Language design goal for consensus: Things should layer
+
+Presenter: Daniel Ehrenberg (DE)
+
+* [slides](https://docs.google.com/presentation/d/1Nj6E1h0SeyDGI3e8BQlATQeX-l6x4Jx7uGAM8XimfIM/edit#slide=id.g329dc435965_0_344)
+
+DE: I want to talk about a potential language design goal, which is that things should layer. Here on the slides is a beautiful layer cake to illustrate that concept. The idea is to bring language design goals explicitly to the committee for consensus, so that we can establish a kind of shared basis for doing design. This is something that YSV proposed we do some years ago, and I think it’s a great idea.
+
+DE: Concretely, the proposed idea here is that things should layer: you have sugar on top of core capabilities, and most features are syntactic sugar. Sugar is something that could be implemented by a transpiler or an npm module, layered on top; capabilities are the things that cannot be layered on top of something else.
+
+DE: I’m not talking about the JSSugar/JS0 proposal made in previous meetings; that’s a separate conversation. I’m just talking about the single language, JavaScript/ECMAScript, that we currently define in TC39, and saying it should have a layering within it, more like a logical, editorial layering rather than necessarily two languages. That’s a separate conversation. But still, it was good that that was raised, because it gets at some of the underlying design points that are important to discuss regardless of whether it’s one language or two.
+
+DE: So a question is: when should capabilities be added? I think the answer is, when the capability is really the goal of the proposal. An example is Temporal. Temporal adds two capabilities: it adds higher-precision access to `Temporal.Now`, the current datetime, and it adds access to the time zone database that the browser, that JavaScript, has. But most of Temporal could have been implemented as a library, as “sugar”, without any new capabilities. So Temporal has both sugar components and underlying capability components.
+
+DE: Another example of a capability that is maybe a little bit ambiguous is the temporal dead zone (TDZ) for `let` and `const`, where access to the variable throws before the definition is reached. This implies a new capability, which is, implicitly, to efficiently perform this check, which actually no one consistently succeeds at. When this feature was being designed, it was kind of assumed that it would be possible for engines to optimize the check out, and we previously heard a presentation by SYG about the possibility of eliminating these checks, at least in certain cases. Is this a core goal of the let and const features? I’m not really sure. I think the lexical scoping part might be more core, but the question of whether TDZ is core could go either way.
+
+DE: There are other cases where capabilities would be pretty accidental. One of these (again, I’m stretching the meaning of the term “capability”, but this is kind of core to the argument) is `Map.prototype.getOrInsertComputed`. In that proposal from KG, the operation is coupled with a check to make sure that nothing went wrong with the structure during the callback. Effectively, even though this could mostly be polyfilled, doing it faithfully requires kind of taking over things; it’s kind of a capability. We decided, no, this is not worth it: it’s not the core goal of the proposal, and it adds extra complexity.
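+
+A rough userland approximation of that idea (hypothetical sketch; the actual proposal’s semantics, including the structure check DE mentions, differ, and that check is the part that cannot be polyfilled faithfully):
+
+```js
+// Look up `key`, computing and caching a value on a miss.
+function getOrInsertComputed(map, key, compute) {
+  if (map.has(key)) return map.get(key);
+  const value = compute(key);
+  map.set(key, value);
+  return value;
+}
+
+const cache = new Map();
+getOrInsertComputed(cache, "answer", () => 42); // 42, now cached
+getOrInsertComputed(cache, "answer", () => 0);  // 42, from the cache
+```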
+
+DE: The other one I would call out is pattern matching, where the match proposal currently includes a caching mechanism to make sure that properties and iterators aren’t read multiple times across multiple match statements. That implies a new engine capability to magically make this efficient: some capability to not actually create the cache map, but still do the optimization.
+
+DE: So when someone expects the JIT to have a new ability to make things fast, that corresponds to a capability. But engines just aren’t magic. I’d actually say that build tools and bytecode interpreters have similar constraints and similar optimization capabilities in the general case. Sometimes they’re able to optimize things, but you really don’t want to have to rely on them.
+
+DE: They’re not magic. Both systems aim for spec conformance. As for the limits of build tools: although some build tools can operate on the whole program, many of them operate on a per-file basis, so they don’t have access to cross-module analysis; often they’re working just within a particular function, though not always. The semantics that build tools ascribe to JavaScript are simple and deterministic, at least if you want things that are supported across everything. When there are optimizations, they are mostly local and about preserving semantics, not about giving statements their meaning. Build tools are also poorly funded, so it would be difficult for them to maintain a higher degree of complexity. So they have to operate at this simpler, local level, and they have to conform to the semantics. Bytecode interpreters need to do the same thing.
+
+DE: JavaScript engines these days, at least the ones in web browsers, tend to be based on bytecode interpreters. There are some JavaScript implementations that are not of that form, but this is at least one environment that we have to make sure the language works well in. Generation of bytecode is file-by-file or function-by-function, so it’s somewhat fine-grained, and pre-parsing makes it finer-grained. It also cannot rely on broader analysis. Even within that unit of granularity, it has to be fast and simple: when you’re generating bytecode, you can’t do complex analysis; that only happens on further executions that trigger the JIT. And to get the semantics right, it has to be possible to do that locally. Bytecode interpreters need to support all of the language, and we don’t want to fall off to some complex tier just because some different language feature was used. Another reason simplicity is important is that more bytecodes mean more complexity downstream in the JIT, and just more things to implement. This implies, to me, that syntax features should, when possible, desugar easily into efficient JS, and not rely on intelligence from either build tools or bytecode interpreters.
+
+DE: And this leads to the two possible statements for consensus: encouraging that things should layer, and that when capabilities are not the primary goal, we should do things that can be implemented in terms of other things. For libraries, this is one possible wording (I haven’t wordsmithed it much and would be interested in your input): library features should by default be implementable accurately in JavaScript, given the assumption of an original built-in environment, unless the goal is a new capability; if a new capability is exposed, this should be deliberate and well understood. For syntax: new syntax should by default be expressible via desugaring into existing JavaScript syntax features, completely accurately; and where desugaring is not possible, we should understand that the benefit of this aspect of the semantics is worth its cost in terms of complexity for the developer mental model and for implementations. I think we’ve mostly been designing in alignment with these principles, but somehow it’s felt a little bit out of scope to argue for them directly.
+
+DE: Sometimes discussions in TC39 proceed with the understanding that we shouldn’t spend too much time thinking about the tooling implementations, because later there will be the native implementations; we’ve been using that argument for a while. In the JSSugar/JS0 presentation that was flipped on its head, where it was argued that we shouldn’t put things in native engines because tooling could potentially do much more complex and advanced things. I don’t think either of these is true. We should go for features that are as simple as possible, and simple would mean that they layer on top of other things. So, thank you.
+
+SYG: As a quibble on the previous slide, slide 6, about bytecode: you know my position on this, but I want to highlight for the room that, from my point of view, the limits of bytecode interpreters (I agree with your characterization that even the JITs, and certainly bytecode interpreters, are not magic) are externally imposed, basically by performance incentives. All the browsers want to be fast, in particular fast at loading, because web pages are these ephemeral things and not long-running applications (exceptions exist, of course). Because of that, any optimization we do has to pay for itself. If it doesn’t pay for itself end-to-end against naive parsing and execution, why would you optimize?
+
+SYG: That throws a whole class of optimizations and analyses out the window, because of the pressure to compete on loading performance or else we lose users, et cetera. Some of that, I agree, totally also applies to tools. A lot of tools compete on build speed, the actual running performance of the tools themselves; I hear people complain about some bundlers being slower than others, and that’s the reason they switch to another bundler. But it feels to me that the space is more open in the ahead-of-time tooling world.
+
+SYG: That constraint is not externally imposed there, and tools could have a different goal: trading off the running time of the tool itself against generating better code, smaller code, more optimized code. I understand that is not a dimension that a lot of the tools in the JS tooling space currently compete on, but it doesn’t seem like there’s the same kind of external pressure. If nothing else, this is what I see in every other AOT language space; there’s a reason why clang and GCC have O1, O2, and O3 and people use them for different use cases, not always competing on generating code the fastest. Sometimes you want to take the time to generate the most efficient code, whereas we never really have that full luxury in a browser engine.
+
+DE: It’s not always about execution time; there’s also the complexity of building and maintaining these systems, and the need to compose them. I think we should consult the authors and maintainers of these tools when trying to understand what they could do in the future, or if someone has a new tooling effort they’re going to spin up from different groups, we could work with them. As for these more advanced tools that you’re alluding to: until they start existing, we should maybe consider the existing things. Regardless, I argue we shouldn’t add features that only work given either of those two hypothetical cases. We could continue the queue. Is somebody running the queue, or am I supposed to be running through it?
+
+NRO: Every time I hear people mention how much JS tools could help: it is very difficult to do that, because of how dynamic JavaScript is. People working on browsers know this. The JIT can just assume that a function will take a number, but then it needs to bail out in some cases and go back to the original bytecode. And tools cannot just bail out: once the code is emitted, they cannot load some different version of the code. That is the reason there are no tools doing these advanced optimizations, even though people have tried; it’s not possible, unless you restrict what JavaScript your users can write to some subset of the language.
+
+DE: Maybe the Closure Compiler is another example of –
+
+NRO: Closure has a lot of restrictions on what JavaScript you can write.
+
+SYG: As a quick response to that: it is true that type-directed optimization is generally infeasible as an AOT optimization in JS, but we have seen innovation in the AOT optimization space that is impactful, like tree shaking. That is not something that engines can do.
+
+DE: Tree shaking is great, and it’s an example of an optimization that doesn’t change semantics. That’s what I have as my third point under the limits of build tools. If we add language features where, to get the right semantics, you need to do some advanced analysis, that’s completely different from an optimization that doesn’t change semantics, which you can do as some extra, optional analysis.
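+
+A small illustration of why tree shaking is the semantics-preserving kind of optimization being described (file names are hypothetical):
+
+```js
+// lib.js
+export function used() { return 1; }
+export function unused() { return 2; } // never imported anywhere
+
+// app.js
+import { used } from "./lib.js";
+console.log(used());
+// A bundler can statically prove `unused` is unreachable and drop it from the
+// output without changing what the program means; contrast this with a feature
+// whose *semantics* would depend on such whole-program analysis.
+```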
+
+ACE: I think tree shaking is great because it shows the power of the parts of the language that are static, where that really helps tooling. Yes, tools can also tree-shake CommonJS, but generally ES module authors enjoy tree shaking more, because there are more static guarantees, similar to when we were presenting records and tuples, where the syntax was providing static guarantees. Not to pick on a proposal, but to pick on one: when we talked about pattern matching, or anything that adds symbol protocols, symbol protocols are kind of the opposite of that. Even if you can see the class and see that it adds the symbol, you don’t know whether that method will be monkey-patched with a completely different implementation, unless you can be sure that the prototype is frozen; there’s nothing static, and it may be difficult to ensure that the prototype is frozen. So we’re not doing anything wrong in general, but there’s a big difference between the static parts of the language and the dynamic parts when it comes to tooling.
+
+DE: So: let’s have more statically analyzable things, when that works out for the design of the thing we’re working on. That’s another possible goal that we could document.
+
+JSC: Just a quick question on the last slide’s statements for consensus, for libraries. Scope-wise, by “library features”, you’re talking about standard built-ins?
+
+DE: Yeah, sorry. That’s referring to the built-in functions and classes in ECMA-262.
+
+KG: So for the syntax statement, I have, I guess, two quibbles. The first is that I’m not at all sure that by default syntax should be sugar. I think that actually exposing new capabilities is one of the best reasons to add syntax. I’d like the bar for adding new syntax to be pretty high, and sugar doesn’t usually meet it, whereas new capabilities are the most likely to meet the bar for being worth doing. So I’m not at all sure that I’d like to say that we should by default assume syntax features should be sugar. The second quibble is that even things that are desugarable ought to be understood to have a cost in terms of complexity for the developer mental model, and to some extent for implementations, and that is true whether or not a feature exposes a new capability, as with JSSugar.
+
+NRO: When we talk about syntax sugar, do we consider, for example, the using proposal or –
+
+DE: We consider that to be just –
+
+NRO: That’s an example of something that is very easy to transpile.
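+
+For reference, `using` is easy to transpile because it desugars roughly like this (a simplified sketch; the actual explicit-resource-management semantics also cover `null`/`undefined` bindings, multiple resources, and error aggregation, and `Symbol.dispose` requires a newer engine or a polyfill):
+
+```js
+// A resource exposing the Symbol.dispose hook the proposal expects.
+const open = () => ({
+  read() { return "data"; },
+  [Symbol.dispose]() { console.log("closed"); },
+});
+
+// `using file = open(); file.read();` behaves approximately like:
+{
+  const file = open();
+  try {
+    file.read();
+  } finally {
+    file[Symbol.dispose]();
+  }
+}
+```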
+
+DE: So I think there’s some kind of intersection of the statement that you’re making and the statement that I’m making that would be valid. Basically, when we add new features, we should not add features with random edge cases that make them harder to desugar when there isn’t a case for those edge cases. That’s what I’m trying to say, and we don’t need to affirmatively state either way the value of sugar features.
+
+KG: That sounds good to me.
+
+DE: And does `using` have some tiny new TDZ-like edge case that makes it better? I would say no.
+
+KG: It does. Have you been following the discussion about classes and switch statements?
+
+DE: I wasn’t sure how to treat that in this presentation, so I didn’t mention the details; that’s why I mentioned either past or future things. Let’s consider that a valid counterargument to those edge-case semantics. It shouldn’t be that we say, well, it can be done correctly in an engine, so it’s fine. It’s an advantage if it’s more easily desugarable.
+
+KG: Yes. I am definitely willing to sign on to a statement that, for a future feature whose primary end is not the introduction of a new capability, it is best if it is pure sugar instead of sugar with some edge cases.
+
+DE: Great. So I think this leads to a clean refactoring of both of the statements: when a feature is not motivated by adding a new capability, it should be expressible via desugaring into existing JavaScript syntax features, or otherwise the benefit needs to be understood to be worth the cost. I think that refactoring could be done for both of those statements.
+
+KG: That sounds good to me. And I think we were talking about composite keys, which, speaking of modifying semantics, would affect every Map object.
+
+DE: Right.
+
+KG: That is not a case of something clearly desugaring; it modifies existing things.
+
+DE: It’s a new capability. I would consider these new capabilities, you know, even in cases where you technically can express the thing in JavaScript, if you can’t express it well enough. For the first one, “well enough” means you would have to replace all of these existing things; for match, “well enough” has to do with having to instantiate this extra map. So it’s not just about whether, Turing-completeness-wise, you could express it, or whether it –
+
+KG: Okay. With the understanding that something like composite keys would be a new capability, evaluated on the basis of whether the cost of the new capability is worth it in terms of the developer mental model, and that things that are not intended to be new capabilities ought to be pure sugar.
+
+DE: Yeah.
+
+KG: I’m willing to sign on to such a statement.
+
+DE: Awesome.
+
+MM: So first of all, let me just mention that this conversation just now between KG and DE actually covered very well most of what I had to say, so I’m very much on board with all of that. I think the way to think about this is that everything is a trade-off. This is not making any hard and fast new rule; what it’s doing is making explicit a certain additional preference ordering to take into account in making these trade-offs. And in particular, what it’s saying is: substantially demote anything that is accidentally not desugarable. Anything that is other than sugar (I’m phrasing it in syntax terms, but it actually covers both), anything that is not decomposable into the existing language, should have good reasons for not being decomposable into the existing language. Now, I want to refine a bit the nature of the preference order. Desugaring can be more or less syntactically local, and I would add to the preference ordering that desugarings that are more syntactically local are preferred to ones that need a less local transformation of the syntax. I’ll give two examples. Generators, async functions, and async generators are local to the function they occur in: they are basically equivalent to a CPS transform of the function they occur in, but unlike cooperative concurrency with full stacks, they do not require a general CPS transformation (even thought of as a gedankenexperiment) of the program as a whole. A further, less local transformation is top-level await, which causes a transformation of the top level of the module as a whole. And in both cases, I don’t expect implementations to implement these via desugaring. That’s another dimension of the preference order: there are two motivations for not accidentally defining something that cannot be desugared. One is efficiency, and the other is not making the fundamental semantics of the language more complicated. Because async functions, generators, and top-level await can be desugared, even if for implementation-efficiency reasons nobody would implement them that way, the fact that they can be means that there’s a certain level of fundamental semantics of the language that is not being changed by those concepts.
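+
+A classic sketch of the local desugaring MM describes: an async function can be expressed, per-function, as a generator plus a small driver, with no whole-program transformation (the helper below mirrors the well-known transpiler pattern; the names are illustrative):
+
+```js
+// Drives a generator, resuming it whenever each yielded promise settles.
+function asyncToGenerator(genFn) {
+  return function (...args) {
+    const gen = genFn.apply(this, args);
+    return new Promise((resolve, reject) => {
+      function step(verb, input) {
+        let result;
+        try {
+          result = gen[verb](input);
+        } catch (err) {
+          return reject(err);
+        }
+        if (result.done) return resolve(result.value);
+        Promise.resolve(result.value).then(
+          (v) => step("next", v),
+          (e) => step("throw", e),
+        );
+      }
+      step("next", undefined);
+    });
+  };
+}
+
+// `async function f() { return (await g()) + 1; }` is roughly:
+const g = () => Promise.resolve(41);
+const f = asyncToGenerator(function* () { return (yield g()) + 1; });
+f().then(console.log); // 42
+```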
+
+JHD: I think I will say that top-level await can be desugared in that way.
+
+DE: I agree that top-level await can be read both ways; it becomes a little philosophical. Ultimately, this wording could have been used as an argument against top-level await if we were considering it again: the argument would go that most people only need it at the entry point, and putting it in a nested module (which was, for Bloomberg, the reason) is the new capability. It does really change how the module graph works –
+
+MM: You’re correct. I will make it a counterfactual example. I think you understand the nature of the example.
+
+DE: I agree with all of your points. If anyone is familiar with theoretical linguistics: optimality theory is about having ranked constraints and choosing the thing that is most optimal with respect to that order. I’m not sure we should structure our thoughts in committee that way, but it’s a great way to think about it.
+
+MM: I think it’s worth re-emphasizing something that I think you and KG agreed to, which is: having avoided accidental non-desugarability, the requirements, especially on syntax, for both sides of the dichotomy (things that can be desugared, and things that for good reasons cannot) should both have a very high bar, but for different reasons. Neither one is to be preferred over the other; both of them are to be preferred over something that can’t be desugared for accidental reasons.
+
+SYG: I wanted to call out that there is a tension here. Even though I’m very supportive of the direction (it would be nice to have standard libraries that are actually specified as literal JavaScript), that comes with a bunch of hooks, because that’s just how JS works, and that is often in tension with optimizability and with making things fast, because more hooks mean fewer guarantees, for all the usual reasons. So maybe, if we try to go in this design direction, that’s reason enough to motivate more language features for robust code, things like the motivation for getOriginals, even though getOriginals is problematic for different reasons. This is the same reason why, if you look at the browser engines, we all have weird little DSLs for writing built-ins, even when they are self-hosted in JavaScript. The minute you self-host, you will discover that it is not a good idea unless you basically make a different DSL that looks kind of like JS.
+
+DE: I agree it would be valuable if someone were able to solve that problem. I want to ask for conditional consensus on these statements, with the rewording to be resolved in more detail offline: rather than saying “by default”, we instead say that features that don’t add capabilities should desugar, and conversely, that when we add capabilities, it’s for a reason. For the group of people who want to participate in the wording details: you can raise your hand, or speak up later on the issue tracker, and we can develop this online. Would people be up for that conditional consensus?
+
+KG: A quick response to SYG: when I say desugarable to JavaScript, I basically mean desugarable to JavaScript assuming no one has messed with the built-ins. I think that’s how most users understand it, and it’s how we should understand it to mean here.
+
+DE: That’s what I wrote as well.
+
+KG: Where is the comment about not messing with built-ins? Ah, “given the assumption of an original built-in environment”.
+
+DE: SYG has a clarification question.
+
+SYG: One thing on the syntax side here, less so for the libraries: this does not say anything about whether something ought to be implemented in a native engine or in a tool. I don’t disagree with the general design statement for how we should design features, but are you implying, as part of consensus here, that by layering things this way, they also ought to meet the bar to be implemented natively in engines?
+
+DE: We don’t currently define multiple languages. At the point when we get consensus to have multiple languages standardized for JavaScript, I think we could consider such questions. This is pertaining to the single language that we standardize.
+
+SYG: I see.
+
+DE: If engines are not going to add something, it won’t become part of the language standard. That’s our current practice. And we could consider other presentations, other proposals, about changing that.
+
+SYG: So I want to be very clear that if I give consensus to that statement about syntax: if we design a syntax feature that is pure desugaring, that significantly lowers the likelihood that we would want to support it natively in engines. I don’t think that’s a bad design, but it lowers it, and that kind of anticipates my bringing up, again, two different languages.
+
+DE: Great, I look forward to that future discussion. Under our current process, if engines refuse to implement something, it will not become part of the language. So I look forward to that future discussion.
+
+DE: Could we have just five more minutes to go over the rest of the queue?
+
+CDA: There are only two items. One is end-of-message: MM says +1 on the general direction while holding back on the current wording.
+
+DE: I would like to ask: is this something where you want to iterate on the wording offline, or confirm it at a future plenary?
+
+MM: Iterating offline is fine.
+
+NRO: What does it mean for us to have consensus on these two things? When we advance a proposal, would we have a check saying that the advancement matches this, as part of the consensus process?
+
+DE: I don’t think we need more checklist items. Instead, this statement is an admissible argument the next time something comes up for discussion. If someone feels it’s a relevant point to bring up, they can say: remember, we agreed on this design goal. This is the reference point.
+
+NRO: So it is setting a precedent, in some way?
+
+DE: I don’t like that wording, but sure. Do we have conditional consensus, with the wording to be worked out offline? Who explicitly supports this? Chip and MM, thumbs up, and OMT and LCA. I think that’s consensus, unless there is another interpretation.
+
+CDA: Unless anybody speaks up in opposition.
+
+DE: Any non-blocking opposition or points of concern? Points to think about?
+
+CDA: Not seeing anything. All right. Thank you, Dan.
+
+### Summary
+
+* DE argued that:
+  * Most features should “layer” on top of existing features, and only some add new “capabilities”
+  * When a capability is added, it should be because that’s the actual point of the proposal, rather than just being an incidental choice
+  * When it comes to syntax features, DE asserts that bytecode interpreters are under similar constraints to transpilers. Both have faced expectations from people not involved directly in them that they could perform non-local optimizations reliably, but this is not the case. Instead, both benefit from simpler, locally analyzable/desugarable designs.
+  * Proposal: Most features should layer, and the ones that don’t should be adding a capability for a reason.
+* (Summary of main discussion points)
+
+### Conclusion
+
+* Conditional consensus on a modified version of the statement. Rather than asserting that there should be many sugar/library features, the design principle statement should focus more on the negative: when new features include new capabilities, this should be for a particular reason.
+* This design principle is not an entry on a checklist or a requirement for stage advancement, but rather a reference point for future discussions, a permissible argument in committee.
+* Delegates to collaborate on GitHub to finalize wording, including review from MM, SYG, KG, NRO to ensure that it resolves questions that they raised
+
+## Continuation: [Decision Making through Consensus - take 2]
+
+Presenter: Michael Saboff (MLS)
+
+RPR: I already clarified this with MM outside, but we already have precedent. In general, I agree with the point that we do operate with unanimity as the rule, and I think it’s also important that we have modified the process in the past to say that we can go forward even without unanimity in very narrow cases, such as the clarification we made about what is acceptable when blocking Stage 4.
+
+MM: Let’s not have that discussion until we come back to Michael’s topic. Yes.
+
+PFC: MM mentioned rules needing to fail safe. I would not say that the current situation is a rule that fails safe; I would say it is a rule that fails in an acceptable way for most of the people currently on the committee. That is one of the things I took away from MLS’s presentation. An example of that: this morning we talked about a number of examples of bad ideas where somebody could have vetoed but didn’t, or was persuaded to yield. Those are valid examples, and they stick out like a sore thumb in our memory, because regret is a very strong emotion. But we don’t talk about the vetoed ideas that would have worked out great, because that is just not something that we can know. You could call that failing safe, but I’m not sure I would agree with that. I just want to point out that the negative ideas for which we feel regret are not the only examples of the process not working as we intended.
+
+MM: I don’t know how to read the queue. This is not from CDA but a restoration of the previous queue, right?
+
+MLS: Regulations made in the comment.
+
+CDA: So I’m putting in the individual’s three-letter acronym (or two letters, for those grandfathered in), or else it looks like it’s me throughout the queue, which is not accurate. Next we have… did we skip ahead? Philip now.
+
+NRO: When we talk about requiring more than one person to block, we should consider whether “more than one person” means that two people from Bloomberg, for example, can get together and provide support to each other, or whether we require it to be two different organizations, so that companies that send one delegate are not at a disadvantage compared to companies that send four or five. If we want this to be at least two organizations, then we should also consider cases like Igalia being paid by Bloomberg to work on proposals: would they be able to decide together to block something if Bloomberg asked them to do so? This is not theoretical, as multiple companies here have financial relationships with other companies in the committee. So we need to be careful about this.
+
+MLS: I agree with you; I have had conversations about this. I agree that if we require multiple people to block something, they should probably not be from the same organization, but we do have financial relationships that are not clearly known at times.
+
+JSL: Agreeing. It does not work if they’re from the same company, or if they have that kind of financial relationship. Whatever new process we have here is going to have to account for where the second blocker comes from.
+
+PFC: I also want to suggest that when we think about changing the rules, we build into the norm that we revisit the rule and see if it needs to be changed after some number of years. I see the current rules around vetoing as appropriate to a different time in the committee’s history, and I suppose it’s appropriate that we revisit them now. We may come up with a rule that is not suited for purpose in five years, so maybe we should revisit it in five years. I would love to have a mechanism by which it just shows up on the calendar that we need to consider that, rather than somebody having to spend a lot of emotional and social labour to bring it to committee every so often.
+
+DLM: I wanted to bring up the change we made a couple of years ago to require vocal supporters for advancement rather than taking silence as consensus. I see this proposed change as being in alignment with that, so it kind of makes sense to me that we should then possibly require two people to vocally block consensus. I do think it should be two, though; I make the same point later in the queue, but I don’t think there’s a reason to make a 5% rule. I think 2 is probably sufficient. The other interesting thing is the point about financial relationships or delegates from the same company: if we’re going to require a clear separation between the two people needed to veto a proposal, then we should require the same rule for the two supporters needed for it to advance.
+
+SYG: On the point about having active support, I want to give a framing that I’ve been thinking about, especially with what has happened with decorators and ShadowRealms, and the contrast between TC39 and something like WHATWG. Our culture of vetoes and blocks (for many reasons, not the least of which is the social and emotional cost of being the lone blocker) means TC39 operates more on “can we live with it”. If you think of a spectrum between “we can live with something” and “we are really enthusiastically supporting something”, stage advancement in TC39 can run the whole gamut: a proposal can be workshopped and compromised on enough that everyone can live with it, while there is no strong active interest, especially from the browser vendors, in doing the thing. That kind of thing can still get stage advancement in TC39, whereas it has a much lower chance of getting advancement or agreement in a body like WHATWG. So it can happen that a proposal advances to a later stage while nobody is really actively interested in it and we all just grudgingly think we can live with it. If that is the state a proposal is in, I think that is bad, and I would like to try to fix it with better process. From where I’m sitting, it seems like the blocking culture of TC39 directly plays into us getting reluctant stage advancement. So if I can have input here, I would like to nudge us towards stage advancement meaning active interest.
+
+EAO: Just what I noted there: I generally support the idea, but I think having two people support a veto is enough. The 5% part of the rule just seems way too complicated; counting whether or not we happen to be over 40 people, or whatever the other limit is that would require three, seems unnecessarily complicated.
+
+MLS: I can live with that.
+
+JSC: There’s been talk about the social cost, the social pressure, of being the first to announce that you are blocking. This is not a formal thing, but we have TCQ here, where a lot of people sign in using their identity. An idea: we could have people report that they would like to block a proposal, and it would show a count of how many people are saying they would block. If at least two people appear (or however many meet the threshold for actual blocking), then their identities would become public, perhaps after the chair reviews, or something. So basically, the idea is that your identity would not be revealed if you’re the only one who signals in TCQ that you would block; if another person also signals that they would block, then you could reveal it together, and perhaps that would mitigate this social cost or pressure of being the only blocker, the first to announce that you would block. You would be able to see that someone else agrees with you on blocking. Basically, the idea is that perhaps we can leverage TCQ or something like that. In general, for signaling (what SYG mentioned earlier on signaling general sentiment), I think we could leverage TCQ to have people signal their general attitude or temperature towards proposals: “I could live with this” versus “I would like to block this, especially if someone else would also like to block it”. I know that would get complicated with delegates, organizations versus individual delegates, or whatever, but that’s just an idea. Am I proposing this instead of the two-person block change?, asks LCA. I don’t think this is an “instead”. If we have the rule that two people are needed to block, we could use TCQ to implement it: if there’s only one person who would like to block, their identity would remain hidden, but if at least two people would like to block, then TCQ would show it to everyone, and the chair would reveal their identities, because they’re blocking. Does that make sense, LCA?
+
+JSL: I would modify that just a bit, about the secret block vote: I don’t think the step of revealing the blockers is necessary. Using a tool like TCQ to at least take a temperature check of the room, on where we are at on something, makes perfect sense, but it can also remain completely anonymous. It’s just: “we’re not very enthusiastic about this”, or “this is something we for sure want”. If we are going to do that, you have to be able to pose a specific question: what is the temperature on this question?
+
+JSC: We have temperature checks using TCQ with emoji, from what I recall; exactly like that. But I think it’s important to have it shown with a specific question, like “would you block this?”, and have it remain completely anonymous. Instead of options like “positive”, “following”, and “confused”, I would like it to be clearly “would you block this?”, completely anonymous. I think that would be productive. Does that go with what you’re saying? It would mitigate the reluctance of people to block by giving them information, anonymously, about whether they would be the only one who would block or not.
+
+CDA: We have several replies, and I just want to note that we have a little over eight minutes left for this entire topic. So if people could try to be brief, so we can get to the following topics, that would be greatly appreciated. PFC.
+
+PFC: I would be very wary of any sort of anonymous voting, and would prefer that we only use it in situations where it’s absolutely necessary, like voting on the chair group, or maybe, say, where there are personal safety reasons to keep your vote anonymous. I don’t think it should be done in the case of voting on proposals. My goal here is not to remove all of the social cost of blocking. I think you need to bear some of that cost, and if you’re going to veto something, you have a responsibility to make clear why you’re vetoing it, and to do it in a convincing way. So it would not be my goal to remove that burden from somebody who wanted to block a proposal.
+
+CZW: What I’m saying is related to PFC’s point: what we have been doing is blocking, and we need to work out how to unblock a proposal from advancing, not just add a +1 to a block without leaving a path to work out how to unblock.
+
+JGT: Generally, anonymous blocks tend to have a poor track record in political science; they don’t bring out the best behaviors in people. Often that social cost is helpful. So I’m sort of with PFC: I definitely don’t want us to get into a position where some anonymous person is holding things up. That doesn’t sound good.
So I would not—I’m sort of with PFC, and definitely do not want us to get into a position where we enable some anonymous person to hold things up. That doesn’t sound good. + +JSL: I think it’s important there to understand that what we’re talking about with the anonymous temperature check is not an anonymous block. Blocking would still have to be explicit. They still have to raise their hand: “I want to block this.” And I think it’s important to speak to the social cost: it can’t just be “I don’t want this to move forward.” If the proposal has been adopted by the committee, which has said we think this is something we should work on, then whoever is blocking advancement of that does have a responsibility to figure out how to unblock it. It’s not just “I don’t want this to advance”, it is “I will work with the champions and figure out a path forward that does work, that does advance the work of the committee.” Otherwise, if that doesn’t happen, or if they get together and work on that and still can’t find a path, then it becomes a committee decision: does this advance, or do we park this? It’s not one person blocking it at that point. It’s the committee deciding, yes, there’s not a path forward. So, you know, we can’t define exactly what the social cost is, but part of it is that you have to work to advance it, within reason. + +JSC: Just adding on to what JSL said. The concerns about actual anonymous blocking make sense. I’m just talking about having some sort of temperature check showing people, telling them: if I blocked, would someone else also block too? If you already know the answer yourself and can predict what the others are going to say, or if you don’t want the proposal to advance, you don’t need this question. But it still could be useful, I guess, for other people who would want it. If you already know you don’t want the proposal to advance, then you can just block it anyway. That’s all. + +SYG: This is to respond narrowly to the idea that the blocker should be responsible for moving the proposal forward another way. That doesn’t make sense to me. The blocker has an obligation to explain and articulate a reason why they’re blocking, but sometimes the reason is that they don’t believe this problem is worth solving, or something like that. It doesn’t make sense to me that whoever blocks then takes responsibility for advancing the feature. + +JSL: Might be that I misspoke. That’s not quite the approach. It’s that the person that is blocking has a responsibility to try to find a path forward. A path forward does not necessarily mean advancing that proposal. It might mean you agree to disagree and this thing just needs to be parked, and there’s no way to move it forward. + +SYG: I see, okay. + +MM: I want to recount a conversation that RPR and I had in the hallway, which RPR mentioned a part of. It very much surprised me and moved my position towards MLS’s, which I did not expect. First I just want to mention the thing that came up here, which is, well, it shouldn’t just be two people, it should be two people from two different—we should represent two different orgs, and they should be two people that don’t have financial relationships. That’s a perfect example of a slippery slope mechanism that RPR and I did discuss before we came to the interesting insight that led us in your direction. But I think that’s also worth recounting. Any rule can be gamed. If the rule is just two people—if the rule is two people from two separate orgs, I know how to game that.
It would be harder, but I would do it if I needed to, for what I consider to be good-faith reasons. I would simply not do any of this if I didn’t consider my reasons to be good faith. And then if it were two people from two separate orgs with no financial relationship, I know how to game that too. The problem is that every step of escalation of the rule, to try to avoid some gaming problem, causes the person who needs to block it for what they consider to be good-faith reasons to escalate their political manipulation to keep it blocked, which creates bad feelings, which causes the rules to be changed to escalate further. What you’re doing at every step of this is weakening social norms in the attempt to replace them with formal rules. A lot of why we’re working is because of the general good-faith respecting of the social norms that we have. Some of them we have written down in How We Work. Many of the others are just things that evolve in the air as a shared ethic, and many we don’t know how to articulate. But we have good social norms, and rules can start killing social norms by replacing them with what looks like politics, which leaves a bad taste in people’s mouths. + +MM: Okay. Now, the two weakenings of my position, the ones that take us in your direction, that RPR and I came up with. Once something has reached Stage 3—Stage 3 is explicitly a signal, to browser makers in particular but to everybody, that you can now invest heavily in this thing because it will only be stopped for very extreme reasons. So weakening single veto between Stage 3 and 4, I’m open to considering it. Now, what the particular rule for weakening is, I don’t know. That would have to be part of the discussion. I’m not agreeing to any of the particular rules that were mentioned here. But I’m open to the idea of something weaker than single veto between 3 and 4, because of the magnitude of investment and therefore the magnitude of the cost if it’s blocked from 4. That’s one. RPR, please, after I finish—you know, correct me if I’m mischaracterizing anything from the conversation. + +MM: The other one is that rather than the objector having to get a second person to object, which I find unacceptable, what we came up with instead I think was very interesting. And I’m wondering, MLS in particular, about your reaction to it. It is that the objector has to get another person on the committee to agree that their reasons for objecting are good faith. The other person might disagree—might support the proposal, might hate the fact that there’s an objector—but they agree that the objector is holding their objection in good faith. That’s an adequate block. If they can’t get one other person to agree that the reason is good faith—we would have to word it carefully to not lead to politicking opportunities, but I would be willing to say that if you can’t get one other person to agree the objection is in good faith, maybe that is not an adequate situation for blocking. + +MLS: I consider that the social norms we have in place already include that, but explicitly specifying it I think is good. Going back to your Stage 3 to Stage 4 point: I think we already have the social norm that as you increase in stages, it should be more difficult to block. + +MM: But “more difficult” right now is just in terms of the norms, not in terms of the rules. And I’m willing to consider a strengthening of the rule against blocking. I don’t have a particular proposal that I’m prepared to agree to, but I’m open to considering a rule change that would weaken the ability of a lone objector.
+ +MLS: Okay. + +MM: Am I getting it? + +RPR: I think we have a few ideas, and definitely the ones that you were saying were part of that. Perhaps a slightly refined version of that, which I chatted about with CDA—and actually it came from CDA—was that this kind of needing to get a second supporter to block might be something that, because we only want to employ this in emergency situations and don’t want to change the general nature of the conversation, might come out after a cooling-off period. So we would do it perhaps one meeting later. At that point, we would then seek someone else to speak up in favor of the block, and consider the degree to which they speak up in favor of it, whether it is in good faith or something else. We could perhaps iterate on that. + +MM: I’m unwilling to agree that the second person has to actually object. + +RPR: Sure, yes. + +MM: Even for a cooling-off period. Except maybe during 3 to 4, where I’m open to other suggestions. + +DE: The comment is about this. We already established a rule for 3 to 4 by precedent during the class fields discussion, where we said you can’t object during 3 to 4 because you disagree about the design. It has to really be for, you know, implementation-based reasons. We had somebody saying “I object”—or actually a number of people saying that—and then we said, no, this doesn’t make sense. And we proceeded. So the thing is, with our current veto-based process, we end up on this path of needing, at great cost to us all, to invent these detailed legalistic explanations for why we can do things. If we had procedures that were, in extreme cases, based on super, super majorities with the extra pauses (?), I think we would be able to get past these things without nearly as much strife. These things cause actual problems for us. Anyway, in the particular case of 3 to 4, there’s really no action to take. We don’t – + +MM: I’m confused about the norm versus rule there. Blocking for certain enumerated reasons is considered to be legitimate, and somebody who wants to block can claim they’re blocking for the enumerated reasons in a non-good-faith manner. Are our current operating rules ones in which a claim to block for those reasons can be overruled? + +DE: So, you know, this is—like I was saying before about it, people have different interpretations of what is going on with respect to blocking and procedural things in committee. It is ambiguous. Previously, in class fields, people claimed “I’m blocking” and then advancement happened anyway. So whether this was the chairs making the determination that the block was ambiguous or meaningless, or that being an emergent property of the committee, is itself ambiguous. Maybe more the latter. At least I think that’s what the chairs might have wanted at the time. I’m not really sure. But we end up working on getting through these issues through a huge amount of extra mental effort and extra case-by-case decision making, with everyone worried about overstepping, and it’s a distraction from the language design work. + +MM: I don’t think it’s a distraction. I think that overruling, overcoming an objection should have a very, very high bar. + +DE: Yes, agreed. + +MM: And the nature of the process is that we need to always be talking both about the rules and the norms, and overreliance on rules can really be disruptive. + +DE: Yes. So I agree completely with what MLS stated at the beginning, which is that we don’t have shared norms here.
We have different people who have different practices in terms of what they feel is appropriate for blocks. And this gives disproportionate weight to some people. We have to really make sure that we can be open about all of the different concerns that everyone has and not overemphasize the concerns of people who feel more like blocking. + +MM: The other aspect that I think is very much worth being explicit about is that each browser maker de facto has a unitary veto: if the browser maker says we want to implement something, it doesn’t matter what the committee does. And in general, on the theme of rules: part of TC39’s character, something I love, is that TC39 itself has no enforcement power; the coupling between TC39 decisions and what anybody else outside of this room actually does is only through norms. + +DE: This conversation is about getting those shared, level-playing-field communication and consensus-determining practices. I think we agree on this very philosophical point. + +MLS: I need to get going. Please continue and put this in the notes. + +LCA: I want to second MM’s comment on rules and gaming of rules. I hundred percent agree with you: if we come up with stricter and stricter rules about what is a valid veto, whoever wants to veto will just game the rules. Ultimately this will always end up being a case-by-case decision that the committee has to take, just like it is right now. We may decide to do something about a veto and ignore it, or we may decide to agree with the veto and not ignore it. + +RPR: I’m saying thank you so much for presenting this topic and all your efforts. + +MLS: I enjoy the rich conversation that’s resulting from this. + +LCA: I don’t think it makes sense for us to continue to escalate in any way, because if we do that now, we have to do it again the next time somebody blocks. It won’t change the ultimate thing—the situation we’re currently in, where every time somebody blocks and maybe a majority of the committee does not agree with the block, it ends up being a case-by-case decision that ends up with some people being upset. + +REK: I wanted to make a comment regarding the notion of financial relationships between member orgs, because it seems to me the spirit of the comment is about disclosing conflicts of interest, should we adopt this rule to have at least two blockers or whatever the number is. I would caution against trying to define a particular notion of an outside relationship or a conflict of interest, because it seems like that would put the blockers in a position of potentially having to prove the non-existence of a financial relationship, and it also raises a lot of questions about what is a meaningful financial relationship or conflict of interest, because for some of the organizations that belong to this committee, you can imagine that there are some trivial contracts or financial relationships that exist between them that committee members aren’t even aware of. So I would just like to generally caution against the notion of encoding specific language like “financial relationships”, should we choose to adopt this process. + +JSL: Just two points. One, I just want to clarify, because I said it a few times and others did too: I don’t think anyone suggested that seconding a block—someone blocking and someone else saying, okay, I support this block—implies that that second person also wants to block. And to speak to your point, MM, you’re absolutely spot on.
It might just be that, yeah, I might disagree but I see where you’re coming from; yes, we can go through this more. + +MM: The standard of somebody else agreeing that the blockage is in good faith is consistent with that same person voting to advance. So once again, denotation and connotation. Phrasing it as “I second the block” gets the connotations wrong. + +JSL: Yes. One other comment, on what LCA was saying and other comments: adding new rules here should be the last resort. Adding new policies should be a last resort. Can we find a policy to make this better? Yes we can. Everyone will hate it, but yes we can. It should not be something that we reach for now. If we can come up with a better social norm that we all agree to, that’s by far the better approach than devising new policy. I’m happy to help on devising new policy if we need it. + +JHD: I put this on the queue when MM was speaking. I think three to four is not what people are concerned with, and I think that somebody gaming the rules in bad faith is something that we’re all concerned with. The social cost of objecting in good faith is incredibly high. The only reason this entire room doesn’t hate me is because they all understand I’m arguing in good faith and willing to discuss, and so on and so forth, because I have been an objector many times. And I have found that the same general state of affairs holds true with other objectors whose side discussions I have been present for: it’s generally understood and appreciated when frustrating lone objectors are still doing it from a good place. And that matters. That mitigates the social cost a lot. Therefore, for someone arguing in bad faith, it will not take very long before that is transparent, and the social cost of doing so becomes very, very high. And so far, I have not seen any nefarious throwing of bodies at this—you know, bringing new people in to burn all the bridges until they get their way. That’s the only failure mode I can think of for what I’m describing. So I think the assumption of good faith, and, as JSL and others have alluded to, making sure that if you’re an objector, whether you’re alone or not, you’re accessible and available for discussion of paths forward—I think that mitigates a lot of the social cost. It’s still not going to be conducive to every personality type. I feel like most of the time this is a process that has friction and is frustrating, but is nonetheless functional. + +CDA: We have +1s agreeing with REK’s comments from LCA and OMT. CM is next. + +CM: I am very sympathetic to the concerns that MLS articulated. But I am also observing that we have been, I think, reasonably successful with the current process for going on a couple of decades now. And I am nervous about the consequences of making major changes in that process being disruptive and destructive in ways that we cannot foresee. The discussion here—there have been a lot of ideas that people have put forward, which I think were well intentioned, but feel like a lot of rules lawyering, trying to capture nuances by making the rules more precise or making the rules more detailed. I think these notions are sort of missing the point. I think we might benefit from clarifying the norms, in the documents about how we work and all of that, being much more explicit or expansive about what the norms are, possibly evolving or articulating the norms in more detail, to address some of the issues which I think have legitimately been raised.
But I am very nervous about making a change that turns the whole process on its head. + +AKR: Yeah. Yeah. I also had the feelings that CM mentioned. + +PFC: The same thing I said before about rules that fail safe, and how that’s distinct from rules that fail in ways that are acceptable to people in this room. I think saying that we have a reasonable track record of success may be true, but it is also subject to survivorship bias. We have a reasonable track record in the things that matter to the people in the room. That’s a fair point. But I think we have not seen examples of things we know would have been good if they had gone forward, that were stopped by somebody digging in their heels at the right moment. The thing is, the loss that is prevented and the gains that are missed are all ultimately hypothetical and speculative. But it’s often easier to foresee short-term harm than it is to foresee long-term benefit. And I think that gives some validity to the survivorship bias argument. + +JHD: I had a comment there. I mean, opportunity loss is less bad than shipped badness. Another way to rephrase it would be… of course, I immediately forgot. Let’s say you have an idea that was shut down because of problems that this proposal hopes to resolve: bring it back. If you have enough energy to do so; if it’s a good idea, convince someone else. The only cases I know of where something was a good idea and it seems permanently killed are when the people who had the energy and time to bring it back stopped doing so. + +PFC: Yeah. + +JHD: That is a failure mode, but, like, it’s—you can bring it back. + +PFC: That’s exactly what I mean when I say that success is in the eyes of what is acceptable to the people in this room. + +JHD: Okay. + +PFC: Because, as MLS mentioned, people have run into roadblocks and then left. Those ideas aren’t coming back, but we don’t know about them. + +CDA: Sorry. We have SYG next. + +SYG: Yes. Similar point. I think, Jordan, what you say kind of reveals that you prefer a certain disposition of person to be participating here. Like, I don’t think it is—this is something that MLS called out explicitly, in being welcoming to new contributors and that kind of thing. Like, the failure mode ought not to be superhuman persistence. That doesn’t seem like a good thing to expect of people. + +DE: Yeah. Just agreeing with SYG. There are real, serious opportunity costs. We do lose proposals because people are made to not feel welcome. This is kind of the core of the diversity and inclusion work that we have talked about many times in committee, and that I think deserves continued emphasis. People can only be so persistent; people have limits. We should encourage good work to be done. + +CDA: All right. That is it for the queue. + +RPR: Yeah. I would be happy to give some summary notes on what we heard. I am not going to capture everything, but I think we have generally agreed there are problems to be solved here. Really appreciate all the different suggestions that have come in. We recognize this is a very delicate matter. And we really want to make sure that any suggestion, any proposal here, any cure, is not worse than the disease. And this is something that the chairs have spent time digging into, thinking about the past, and we are open to ideas. Not just here in plenary, but outside. And we are very happy to work with people who have energy and ideas for taking this forward. We are appreciative of the discussions we had here today that show some signs of light at the end of the tunnel.
Does anyone else want to provide any summary statements on what we have heard? + +CDA: I guess I would add that I think we have heard, you know—of course this started with MLS’s slide deck and statements of the perceived problems. But I think we have uncovered kind of a broader category of problems. And potentially with a broader number of solutions. So I think we are all interested in things that improve our processes. So looking forward to continuous improvement. + +CDA: All right. With that, that brings us, I think, semi—that’s through our scheduled topics anyway. How are we doing, DE, on the breakout topics? + +DE: We have 15 breakout topics proposed. I would encourage you to go to the breakout topics task; there’s a link to the Google form, where you can vote for which breakout topics you are interested in. I think we can leave a couple of minutes and have a short break, maybe, for that. For voting. And then maybe we have time for 2 sessions. Or maybe it should be one session, given there’s only an hour and a half left. + +RPR: So thank you, everyone, for participating this week. I think it’s been a meeting to remember. Thank our hosts: thank you, Michael and Kevin, for arranging an excellent venue; it was a superb social as well on the Tuesday night. These things take a lot of energy to organize, and so thank you to F5. diff --git a/meetings/2025-04/april-14.md b/meetings/2025-04/april-14.md new file mode 100644 index 00000000..c37dee15 --- /dev/null +++ b/meetings/2025-04/april-14.md @@ -0,0 +1,1070 @@ +# 107th TC39 Meeting + +Day One—14 April 2025 + +## Attendees + +| Name | Abbreviation | Organization | +|:-----------------------|:-------------|:-------------------| +| Waldemar Horwat | WH | Invited Expert | +| Daniel Ehrenberg | DE | Bloomberg | +| Ashley Claymore | ACE | Bloomberg | +| Jonathan Kuperman | JKP | Bloomberg | +| Ben Lickly | BLY | Google | +| Bradford C. Smith | BSH | Google | +| Chris de Almeida | CDA | IBM | +| Daniel Minor | DLM | Mozilla | +| Jesse Alama | JMN | Igalia | +| Chip Morningstar | CM | Consensys | +| Michael Saboff | MLS | Apple | +| Nicolò Ribaudo | NRO | Igalia | +| Erik Marks | REK | Consensys | +| Richard Gibson | RGN | Agoric | +| Josh Goldberg | JKG | Invited Expert | +| Luca Forstner | LFR | Sentry | +| Philip Chimento | PFC | Igalia | +| Christian Ulbrich | CHU | Zalari | +| Mikhail Barash | MBH | Univ. of Bergen | +| Eemeli Aro | EAO | Mozilla | +| Chengzhong Wu | CZW | Bloomberg | +| Dmitry Makhnev | DJM | JetBrains | +| J. S.
Choi | JSC | Invited Expert | +| Keith Miller | KM | Apple Inc | +| Aki Rose Braun | AKI | Ecma International | +| Luca Casonato | LCA | Deno Land Inc | +| Samina Husain | SHN | Ecma International | +| Istvan Sebestyen | IS | Ecma International | +| Duncan MacGregor | DMM | ServiceNow Inc | +| Mathieu Hofman | MAH | Agoric | +| Mark Miller | MM | Agoric | +| Ron Buckton | RBN | Microsoft | +| Andreas Woess | AWO | Oracle | +| Romulo Cintra | RCA | Igalia | +| Andreu Botella | ABO | Igalia | +| Ruben Bridgewater | RBR | Invited Expert | +| Michael Ficarra | MF | F5 | +| Ulises Gascon | UGN | Open JS | +| Kevin Gibbons | KG | F5 | +| Shu-yu Guo | SYG | Google | +| Jordan Harband | JHD | HeroDevs | +| John Hax | JHX | Invited Expert | +| Stephen Hicks | SHS | Google | +| Tom Kopp | TKP | Zalari GmbH | +| Veniamin Krol | | JetBrains | +| Rezvan Mahdavi Hezaveh | RMH | Google | +| Luis Pardo | LFP | Microsoft | +| Justin Ridgewell | JRL | Google | +| Ujjwal Sharma | USA | Igalia | +| James Snell | JSL | Cloudflare | +| Jack Works | JWK | Sujitech | + +## Opening & Welcome + +Presenter: Ujjwal Sharma (USA) + +USA: Perfect, great. Thank you. Then I will start with this and prompt folks as we go. Hello and welcome to the 107th meeting. It’s the 14th of April. This is a fully remote meeting in the New York timezone. I’d like to introduce you all to your chairs group, which you might remember from the last meeting—or if you missed the last meeting, here is some news for you. There’s me, RPR, and CDA, and the facilitators JRL, DLM and DRR. On behalf of all of us, I’d like to welcome you all and kick off this meeting. Make sure you’re signed in. If you’re here, I’m assuming you’re already signed in; if you haven’t yet, that’s perfectly fine, please just go back and sign in. The responses to this form are really helpful for us to track attendance. TC39, as you know, has a code of conduct; please be mindful of it and follow it at all times. It applies to this meeting, and since the meeting is online, its many mediums and chat rooms are governed by the code of conduct as well. The daily schedule is pretty straightforward for these daily meetings. We start now, which in this case is 10, New York time, and we finish in five hours: we have a two-hour session until the break, and then an hour break and another two hours, until three, New York time. + +USA: A quick rundown of our comms tools before we begin. There’s TCQ, which is by far one of the most unique and important tools that we use for communicating. You should have the link to TCQ already. And as you can see, there’s the entire agenda there. This is how any individual agenda item looks: there’s a queue and a bunch of things. This is how the view looks for a participant; if you’re a speaker, this is how the view looks for you. I’ll quickly discuss the different options you have. They go from right to left in order of reducing priority, so point of order is the highest priority; that’s why it’s colored red. But the important part here is: please use it sparingly, please use it for emergencies, such as if the notes don’t seem to update for you, or if there are some serious technical glitches, or if, in general, you believe that there’s something urgent enough that the meeting should halt for it to be resolved. Next you have the clarifying questions.
These jump to the top of the queue, apart from points of order, obviously, and in that case, you are basically interrupting the running flow of discussion to ask a clarifying question regarding the current point that’s being discussed. Next you have “discuss current topic”, where you add another item for discussing the current topic—so, you know, if there’s any topic that’s going on, you can add another point to that list, so it doesn’t go to the bottom, but it goes to the end of this particular discussion. And then you can introduce a new topic, which puts you at the bottom of the list, so you can start a new topic after the most recent one has been finished. So that’s all for adding yourself to the queue. There’s another button that is only visible if you’re already speaking, which says “I’m done speaking”. Please do not use it for the time being, because the problem with this button at the moment is that sometimes it can get double-clicked: so, for instance, if the Chairs are running the queue and you also press this button, you might just skip the person after you. So because of TCQ’s technical glitches at this moment, we do not recommend using this button. That’s all for TCQ. We also have Matrix. You might enjoy any of these channels. Now, of course, “delegates” is supposed to be for the most technical discussions; Temporal, quite the opposite. So all these channels are different and have their own sort of vibe, but overall, there’s a group of these channels that are dedicated to specific subjects, and you might want to be on them. So join the TC39 space on Matrix, and ask us for joining details if you don’t have them. Next is the IPR policy. Basically, this is a quick reminder of Ecma’s IPR policy. Everybody who is a part of this meeting at this moment is supposed to be either a delegate from an Ecma member, in which case your organization has collectively signed and approved the Ecma IPR policy, or an invited expert, in which case you have done it yourself. If you have not, please contact us, and, you know, be aware that your contributions in this meeting are going to be used as part of this royalty-free licensing. So, yeah, I’m not a lawyer myself, but make sure that you have reviewed this. Observers, on the other hand, by not contributing anything to the meeting themselves in terms of, obviously, you know, spoken contributions, are not subject to this. Notes are live. I believe we are being transcribed right now. And remember to summarize key points at the end of each topic. For instance, if you have a presentation and you think you have a pretty good idea what the conclusion or the summary is going to be, feel free to include it in the presentation itself, or take a few minutes at the end of your presentation to go over a quick summary. Actually, I’m supposed to read this out: a detailed transcript of the meeting is being prepared and eventually posted on GitHub. You may edit this at any time during the meeting in Google Docs for accuracy, including deleting comments which you do not wish to appear. You may request corrections or deletions after the fact by editing the Google Doc in the first two weeks after the TC39 meeting, or subsequently by making a PR in the notes repository or contacting the TC39 chairs. The next meeting, the 108th meeting, is from the 28th to 30th of May in A Coruña, hosted by Igalia, and in Central European Summer Time. Yay for that. And let’s move on to the rest of the agenda.
+ +USA: So first of all, let’s ask for note takers. Any volunteers? Let me switch. + +JMN: I can help out. This is Jesse Alama. + +USA: Thank you, Jesse. Would anyone else like to help out with the notes? The very first slot of the day. And if I may, this is probably one of the easiest ones, really, given how relaxed the topics seem to be, as opposed to later parts of the meeting where things can get quite complicated. + +ACE: I’ll take an easy slot. + +USA: Thank you, Ashley. So, yeah, let’s—noted down, yeah, perfect. And move on. Okay, so let’s approve the previous minutes. I’ll give a minute for—well, a few seconds for anyone to mention any thoughts on the previous minutes. A reminder that you can always edit them in the notes repo if you’d like. Anyone? + +CDA: Yeah, so the minutes are still not published. There’s a PR out, but there’s still a bunch of open, unresolved suggestions. We should direct those folks to just make those commits directly, because this commonly happens where somebody’s waiting for, I guess, the PR author to approve the suggestions, but they should just feel free to make them. But we should make a point to get this done as soon as possible. + +USA: Right. Yeah. Thank you, Chris. I guess in this case, then, the previous minutes are part of the PR. We should merge it soon, but since it’s still not merged, you have a great moment to go through it, approve it if you’d like, or just post any corrections. All right, then let’s say—with the previous minutes—that the previous minutes have been approved. Let’s make sure that we merge them in soon. Next let’s adopt the current agenda. So I’ll give a few seconds for folks to raise any concerns about the current agenda. Sounds like consensus, so we have adopted the current agenda. Next we have the secretary’s report. Hello, Samina. + +## Secretary's Report + +Presenter: Samina Husain (SHN) + +SHN: Thank you for the start of the meeting, and welcome to everybody. I have a relatively short slide deck, covering the activities that have taken place since our last meeting. The opt-out period is open for ECMA-262 and ECMA-402 ahead of the official approvals at the GA, and I’d like to give you a bit of an update on some new discussions we’re having, with some new topics and work for Ecma. Ecma has a code of conduct, and you can review the invited experts rules. As for the documents that have been recently published: if you want access to those documents, you just have to ask your chair. Dates for the next meetings are also noted. Ujjwal already mentioned that the very next plenary meeting is going to be in May; the next important date for us to be aware of is the June GA, which is the 25th of June this year. + +SHN: All right, so as I mentioned, very important for the June meeting that’s coming up, we have the 60-day opt-out period open, as per always. It does tend to run very smoothly and I anticipate the same, and there are two approvals, for both 262 and 402, so the 16th edition and the 12th edition. I think they’ve already been frozen for some time, so thank you very much for all of that work, and we will proceed to the approval in June. + +SHN: On the new work that’s going on, there has already been some good discussion on forming a new TC, TC57. There has been discussion of this in the ExeCom. I think we are moving forward well. We are on the second cycle of discussion; it will be excellent to have a new TC in the work items of Ecma.
+ +SHN: Just some other items, just as a reminder: there have been a number of invited experts that have recently joined TC39, not to mention other TCs; as per always, I will review them in the third quarter this year. Many of the new invited experts are part of organizations, and I look forward to seeing those organizations ready to make decisions on joining, or to assess how they want to approach their participation and activities with Ecma. I was reminded by W3C about the horizontal review. I’ve left a note that this is still an open discussion, so as TC39 deems fit, we would then come back to them on how to better be involved in the horizontal review. + +SHN: I’m going to pause there, because that is the extent of the report based on what we discussed at our last meeting, which was just six weeks ago, and I’ll stop here to ask if there’s anything I missed that you would have expected me to present, or if you have questions on what I have presented. + +DE: Once there is input from the committee, the new TC will give that feedback back to the open-source community so that they can digest it, make a new proposal, and everyone can agree on a common standard. I think this could be a really useful tool for unifying the whole community ecosystem. And I would encourage everyone here who is interested to participate. Please get in touch with me if you’re interested or if you have feedback on this idea. + +AKI: I don’t think I have any specific comments. I have been asked about sort of our process in collecting information from participants, how we utilize forms and handle that data. And that’s something I’m working on; I will have something for the near future, but it’s not anything slide-worthy at the moment. + +SHN: Thank you, AKI. I want to recognize and thank AKI for her work on looking at future tools, so we understand some of the requirements we’ve had. We’ve just had a meeting on it. So please just be a little patient. We’ll come back to you with some proposals on how we’re going to help improve that, and AKI is going to be involved in that. I also thank AKI in advance for the PDF versions of the documents once they are approved in June. Thank you. Ujjwal, thank you very much. + +AKI: Thanks to the 262 editors, by the way, for their help in the direction we’re going to go for the PDF. They’ve put a lot of work in as well. Thank you. + +SHN: Yes, thank you very much. Ujjwal, thank you. That’s the extent of my presentation. I will be online if there are any further questions. + +### Speaker's Summary of Key Points + +A brief overview of current activities and upcoming milestones was presented: + +* The opt-out period is open for ECMA-262 (16th Edition) and ECMA-402 (12th Edition), which are both scheduled for final approval at the June General Assembly. +* An update was shared regarding the progress of new technical work, specifically the ongoing discussions around the formation of a new TC. There is positive momentum within the ExeCom, and it was highlighted that this initiative represents a promising addition to Ecma’s future work program. +* Reminders were given about Ecma’s Code of Conduct, access to recently published documents, and upcoming meetings, including the next plenary in May and the June GA. Also mentioned were a number of newly added invited experts across various TCs, with a formal review of all IE status scheduled for Q3. +* AKI reported on ongoing work related to information collection for tools and confirmed upcoming contributions related to PDF document formatting.
+* AKI and the ECMA-262 editors were thanked for their continued support and collaboration. + +## TC39 Editors’ Update + +Presenter: Kevin Gibbons (KG) + +KG: There have been a fair handful of normative changes, partly because we are in the process of cutting ES2025 and we wanted to make sure we got as many of the outstanding things in as we could. So I’ll run through all of these very briefly just so everyone is aware. This first one is a fairly technical change. It makes it so there’s not a distinction between variables declared with `var` inside `eval` vs declarations without a `var`, so engines don’t have to do the work of keeping track of whether something is a var declaration or not, which is just useless work. The second thing was an oversight where, when you `for await` over a synchronous iterator and the synchronous iterator is yielding promises, if the synchronous iterator yields rejected promises, then the for-await treats that as an exception, and when iterators have exceptions, you don’t close them. But this isn’t an exception from the point of view of the synchronous iterator. It’s only an exception from the point of view of the async consumer. So the synchronous iterator should be closed in this case. We had consensus for this literally years ago and were waiting to merge it until there were implementations; the implementation landed in Safari a few months ago, which is why that landed. I’ll have more to say about that later. + +KG: We added `RegExp.escape`. We made another iterator closing tweak, where if you pass an invalid value to an iterator helper, that should close the underlying iterator. And we added Float16Array. And then #3559, this was a bugfix—in the process of updating the spec towards merging iterator helpers, we tweaked some of the machinery. In the process, we made an accidental change, an accidental normative change, to array and RegExp string iterators, where they became observably not reentrant, which was not our intention and not what engines implemented. So ABL, I believe, opened a PR to rewrite this so we restore the original behavior. I did want to mention this is a bug fix, and sometimes we backport bug fixes when we have very recently cut a new edition of the spec that’s still in the IPR opt-out. The editors intend to do this unless there’s some particular reason not to. I don’t believe this should affect the IPR opt-out, especially because the behavior that we are restoring was in fact already part of the specification as of a couple of years ago. So this was strictly a bug fix, but it is technically a normative change. So I just want to give a heads up that there will be one errata normative change to ES2025. + +KG: Okay. So that’s all the normative changes. There’s been a handful of editorial changes I want to call out. We now have dark mode, thanks, again, to Arai. So you’ll see that if you have your browser set to prefer dark mode. + +KG: And then #3353, I want to call out only because it’s a tweak to the module machinery, the async module machinery, which is extremely complicated stuff. If you work with that, I recommend taking a look at this change; although it’s a fairly small change, I expect you’ll consider it an improvement. If you don’t work with the machinery, you don’t care about this at all. And finally, as AKI already mentioned, there’s been a bunch of changes towards making the printable document less crap. So it’s looking much nicer now. Thank you AKI and also MF for work on that.
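+
+To make the `for await` fix KG described concrete, here is a hypothetical sketch (not from the meeting): with the change, the synchronous iterator’s `return()` is invoked when it yields a rejected promise, so the iterator gets a chance to release any resources it holds.
+
+```js
+// A synchronous iterator that yields a rejected promise. The iterator itself
+// never throws; the rejection is only an exception from the async consumer’s
+// point of view.
+const syncIterable = {
+  [Symbol.iterator]() {
+    return {
+      next() {
+        return { done: false, value: Promise.reject(new Error("boom")) };
+      },
+      return() {
+        // With the fix, this is now called before the rejection propagates.
+        console.log("sync iterator closed");
+        return { done: true };
+      },
+    };
+  },
+};
+
+async function demo() {
+  try {
+    for await (const v of syncIterable) {
+      // Never reached: the first yielded promise is already rejected.
+    }
+  } catch (e) {
+    console.log(e.message); // "boom"
+  }
+}
+demo(); // With the fix: logs "sync iterator closed", then "boom".
+```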
+ +KG: We have a fairly similar list of upcoming work, although I wanted to call out that we’ve actually gone through—well, mostly MF has gone through—and documented the editorial conventions that we follow. It’s currently just a wiki page, and there’s a link here if you’re interested in that. This is things like particular phrasing we use, or decisions that we make when editing the document, that can’t be captured by Ecmarkup. We try to codify as many as we can in code, but of course that’s not practical for everything. And the last thing, of course, is just to call out that ES2025 has been cut. Apart from the one minor tweak I mentioned, the link is on the reflector, and the IPR opt-out has begun towards the GA in June or whenever it is. If you or any of your lawyers have any objections, speak now or forever hold your peace. And that’s all I got. Thanks so much. + +## ECMA-402 Editors’ Update + +Presenter: Ujjwal Sharma (USA) + +USA: Anyway, all right, I’ll be very quick. Hello everyone again. This is a brief update from the ECMA-402 editors. As KG mentioned earlier for 262, the new edition is out—or, well, it is in opt-out. Please check it out. And let us know as soon as possible if you have any concerns regarding this; otherwise it’s good from our end. We have done a bunch of editorial improvements; this is the edition that includes duration format. + +USA: But here are three big editorial improvements (972, 983, 984). One restructures the unit and the style-and-display handling in NumberFormat: instead of having multiple slots for style and display, we have one slot for each unit with the options that correspond to it. So that’s a record that contains the style and the display, a bit more structured than it used to be, basically. Then we have cleaned up NumberFormat a bit. Some of this is still being discussed, so if you’re interested, please check out that PR, but most of those editorial improvements have been merged. And then we have abstracted away the locale resolution part of the constructors into a single AO. So all around, there are a few different editorial improvements. It should be a lot easier now to make sense of the spec, and, yeah, that’s it for 402. So thanks. + +## ECMA 404 + +Presenter: Chip Morningstar (CM) + +* no slides + +CM: Yeah, ECMA 404. Well, I looked. It’s still there. + +USA: That’s as good as it could be, right? + +CM: Yes, it’s excellent. + +## Test262 Status Update + +Presenter: Philip Chimento (PFC) + +PFC: We’ve continued to have many nice smaller contributions from many people. We’ve been chipping away at the large pull request for tests for the Explicit Resource Management proposal, with many thanks to a contributor from Firefox as well. And I think that’s all that there is to report this time. + +## TG3 Status Update + +Presenter: Chris de Almeida (CDA) + +CDA: Yes, TG3 continues to meet to discuss security impacts of proposals in various stages. Please join us if you are interested. + +## TG4 Status Update + +Presenter: Jonathan Kuperman (JKP) + +JKP: This is a pretty quick update. Just a reminder, the working mode that we’ve been using is seeking annual approval on things, so we’ve been meeting frequently in the meantime, working on our newer features as well as normative changes. Mostly, between the previous plenary and today, we’ve been working on editorial updates.
+ +JKP: The big one is we converted the TG4 source map spec from Bikeshed to Ecmarkup, and we’ve added formatting and linting for it, as well as improving the experience for dark mode users. + +We’ve made a few normative updates. A reminder that these slides and the links are linked in the agenda. We had a typo in the VLQ decoding algorithm, and another issue with the continuation bit when decoding VLQs. We also moved our algorithm examples to the ECMA “syntax-directed operations” grammar. + +As far as our proposals, we’ve just been continuing to work on range mappings and scopes. For range mappings, we have a few small changes, like allowing multi-line mappings, and for scopes, we have more work: we’ve got larger PRs discussing how to futureproof scope encoding and decoding, as well as where to use relative versus absolute indices. + +## TG5: Experiments in Programming Language Standardization + +Presenter: Mikhail Barash (MBH) + +* [slides](https://docs.google.com/presentation/d/1We23iI6oOg5jViJZOB4EtUWexoQTvKlGDR7csxSxsT4/edit?usp=sharing) + +MBH: We had a very successful workshop at the plenary in Seattle. We had 21 attendees, I think from 11 different organizations. We continued to hold monthly meetings. And we are currently arranging two TG5 workshops. The one which is confirmed as of now is in A Coruña, the day before the plenary starts, so the 27th of May. It will be hosted at the University of A Coruña, and they have prepared some presentations for us. I will also post later in the reflector and in the Matrix channel a call for presentations from the delegates, if you want to give a presentation at that workshop. And we are currently planning a TG5 workshop in Tokyo for the November meeting. + +MBH: One more thing, the outreach: there will be a workshop on programming language standardization and specification, which will be co-located with the European Conference on Object-Oriented Programming, which will be held in July in Bergen, and the keynote will be on Wasm SpecTec, the mechanized approach to the WebAssembly specification. I would like to bring your attention to this. We encourage you to submit proposals for talks. It’s a 300-word abstract, and the links will be shared in the reflector and also in the Matrix channel. So please consider submitting. That’s all. + +## Updates from the CoC + +Presenter: Chris de Almeida (CDA) + +CDA: There are no updates from the CoC committee. There is nothing new to report. As always, a reminder that we are always welcoming new members to the CoC committee, so if that’s something you’re interested in, please reach out. + +## Normative: add notation to PluralRules + +Presenter: Ujjwal Sharma (USA) + +* [proposal](https://github.com/tc39/ecma402/pull/989) +* [notes](https://notes.igalia.com/p/UpmK0K8eo) + +USA: This is my presentation about a small normative pull request that we made on ECMA-402. I’d like to quickly introduce it, and by the end of the presentation, hopefully you’ll have enough background and confidence about this that you would agree to putting it into ECMA-402. So the title says notation support for `PluralRules`. What does that mean? + +USA: Okay, so here was the problem, right? `Intl.PluralRules`—for the uninitiated, this is a constructor on the Intl object that is slightly different from all of the existing constructors.
While it there’s a bunch of these formatters, `DateTimeFormat`, `NumberFormat`, you know, we add formatters, we love formatters… this is actually an API that does selection, so it’s a bit more of an interesting building block. What it does is it exposes the locale specific pluralization rules to the user, so you could input a number and ask, you know, for any given locale, what the plural category is going to be for this. Now, for English speakers, this doesn’t sound super impressive given there’s only two. Languages like Spanish, for instance, have three, there’s a separate category for bigger numbers, for example; but there’s a lot more complex languages that can have up to five or six plural categories so it can be quite an involved process to build an application that takes all of these into account and in a way that works across locales. That’s what plurals does. + +USA: The problem is it doesn’t take the notation into account. Why are notations important? I give a quick history lesson on this. Notations weren’t originally in NumberFormat, but they were kind of one of the more frequently requested topics, so in May 2018, and I know that these kind of timings can be complicated, but I say May 2018 because of this issue, shutout to SFC by the way, for the heavy lifting. Spanish has a third category for “millones”, and every time you are in the millions, there’s a different plural category. + +USA: Fun fact, but, yeah, so in May 2018, unified NumberFormat added this notation support to NumberFormat. This means that NumberFormat can now format numbers in scientific notation or other sort of compact notation. This was nifty, right, and pretty much right away, or let’s say in two years, but, yeah, we wanted to support them in PluralRules too. It looks like it’s long time hases a pasted because unified NumberFormat took a while to happen, but as you can see, this unified NumberFormat was still not merged. The idea was once unified NumberFormat was merged and it would have notations for it, we would simultaneously start supporting number notations in groups. It somehow slipped through the cracks however and it doesn’t happen. But the idea was, you know, something as simple as this could be accepted, and given that notation was, you know, something that was already being supplied to number format, a similar options bag could be used for both. + +USA: So, yeah, not only should PluralRules support notations, but it should probably stick to the same options that NumberFormat does. And thinking of a solution, sort of more recently, I thought, well, if we have a notation slot on the PluralRules object, then we can just pass it to ResolvePlural, and given that this operation is not really specified, I mean, it’s implementation-defined, so to say, the final result is that, you know, we just need to start storing this information and passing it into the AO and that would pretty much be it. + +USA: Now, while the, you know, I call it a minimal solution, the PR, it is quite minimal by normative PR standards as well, which is why I don’t think that it deserves to be a proposal by any shot, but condensing it even further, you know, removing, for example, the part where is I add the new slot, I put it in the constructor, I put it in the list of slots, this is the change, right? Like, in the spec, you would perform said NumberFormat options in AO with these options in standard, and standard here is a notation. So this is an AO that is being shared between number format and plural rules. 
Now what we’re doing is we are getting the notation from the object—or from the options object, sorry, we are setting the internal slot that I talked about earlier, and then we call this. So we perform NumberFormat options with the notation instead of it being standard. Standard here being the standard notation as well. So, you know, the default is still standard. + +USA: There’s a few options here that I clicked out for readability, but, you know, the standard engineering, scientific compact, all of these options are available for notation. And in April 2025, which is, you know, slightly less than two weeks ago, we got approval from TG2. So here I am. I hope that this was, you know, informative enough and that you all feel confident. But I would like to ask now for consensus. + +DLM: Yeah, we support this normative change. + +DE: In change sounds good to me. I think we should treat this similar to staged proposals in terms of merging it once we have multiple implementations and test. We could track PRs like this. Anyway, this seems like a very good change to me. + +USA: Just FYI, we have tracking for everything, basically, sorry, for all normative PRs for ECMA 402, but noted. [tc39/ecma402/wiki/Proposal-and-PR-Progress-Tracking#ecma-402-prs](https://github.com/tc39/ecma402/wiki/Proposal-and-PR-Progress-Tracking#ecma-402-prs) + +DE: Okay, great. + +CDA: Awesome. That’s it for the queue, so it sounds like you have support. Are there any other voices of support for this normative change? + +USA: Awesome. I thank you. And I have a proposed conclusion for the notes, so the conclusion, normative pull request ECMA 402 was presented to the committee for consensus and this PR added support for a notation option in the plural rules constructor for handling different non-standard notations. + +DE: Do you want to say the part about how we had consensus? + +USA: And yeah, and with I guess a couple of supporting opinions, we achieved consensus for this pull request. + +### Speaker's Summary of Key Points + +Normative pull request [tc39/ecma402#989](https://github.com/tc39/ecma402/pull/989) on ECMA 402 was presented to the committee for consensus and this PR added support for a notation option in the plural rules constructor for handling different non-standard notations. + +### Conclusion + +The committee reached consensus on [the pull request](https://github.com/tc39/ecma402/pull/989), with explicit support from DE and DLM. + +## Normative: Mark sync module evaluation promise as handled + +Presenter: Nicolò Ribaudo (NRO) + +* [proposal](https://github.com/tc39/ecma262/pull/3535) +* [slides](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU) + +[Slide](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU/edit?slide=id.g34836646ca1_0_1#slide=id.g34836646ca1_0_1) + +NRO: I’m presenting a pull request, fixing a bug around module promise rejection handling. Just a little bit of background, how does Promise rejection track work? And what is the problem? Rejection tracking is basically the machinery that lets us fire some sort of event when you have a promise that gets rejected, and then it gets handled. For example, browsers do this through an unhandledRejection event. So how does this work in detail? 
+ +[Slide](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU/edit?slide=id.g34836646ca1_0_8#slide=id.g34836646ca1_0_8) + +NRO: Well, whenever you reject a promise—either through, like, calling the reject function from the constructor, or using `Promise.reject`, and also for promises created internally by the spec and rejected—if, when the promise gets rejected, it’s not handled yet, so if it does not have a callback registered through .then or .catch, then we call HostPromiseRejectionTracker. + +[Slide](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU/edit?slide=id.g34836646ca1_0_16#slide=id.g34836646ca1_0_16) + +NRO: And then later, when you actually handle the promise, so when you call .then or .catch, it will tell the host, “now this promise has been handled”, and the host can decide whether the event is going to fire, or do whatever it wants to track which promises are not being properly handled. + +[Slide](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU/edit?slide=id.g34836646ca1_0_25#slide=id.g34836646ca1_0_25) + +NRO: So that was promises; how does this interact with modules? + +[Slide](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU/edit?slide=id.g34836646ca1_0_44#slide=id.g34836646ca1_0_44) + +There are multiple types of modules in the spec—or, well, Module Records, which represent modules. There is a Module Record base class and two main types of actual Module Records: Cyclic Module Records and Synthetic Module Records. Cyclic Module Records are modules that support dependencies. That is some sort of abstract base class, and our spec provides Source Text Module Records, the variant for JavaScript; for example, the ESM integration proposal in WebAssembly is proposing a new type of Cyclic Module Record. As for Synthetic Module Records, those are just modules where you already know the exports, and you have to wrap them with some sort of module to make them importable. The way module evaluation works changed over the years. Originally there was this Evaluate method—it was on all module records, and it would trigger evaluation, and if there was an error it returned a throw completion, otherwise a normal completion. But then, when we introduced top-level await, we changed the method to return a promise, with the detail that only Cyclic Module Records can actually await. For any other type of module record, like any type of custom host module, the promise returned by the Evaluate method must already be settled. So the promise there is just to have a consistent API signature, and is not actually used as a promise. + +NRO: And given that this promise is going to be already settled, in the module evaluation machinery, whenever we have a module record that’s not a Cyclic Module Record, we just look at the internal slots of this promise to see if it’s rejected and extract the value that it’s rejected with. You can see from here, we only use `promise.[[PromiseResult]]` to get the value from the promise, and we look at its internal state. + +NRO: And this causes a problem.
Because given that we’re not reading this promise using the normal promise abstract operations, when this promise is created by the host, if this promise rejects, it will call HostPromiseRejectionTracker to tell the host: hey, this promise is rejected and not handled. And then we never tell the host that the promise has been handled, because we never call PerformPromiseThen, which is the AO responsible for calling the host hook. So the host doesn’t know that we actually took care of this completion here. So I have, for example, these three modules here. We have a module that does dynamic import for `a.js`, and it depends on some B module. This B module is not a JavaScript Module Record; it’s a Module Record managed by the host. It creates a promise, rejects it, and returns the promise as part of its evaluation, so when the promise is rejected, the HostPromiseRejectionTracker hook is called, telling the host that the promise has been rejected. + +NRO: Then during the evaluation of `a.js`, we perform the steps from the slide before, and we look at the error and we do not call HostPromiseRejectionTracker—oh, here it says "rejected", it should be "handled" in the promise hook. And then, in the meanwhile, dynamic import creates another promise for the evaluation, not just of B, but of the whole module graph for `a.js`, and in the module on the left, we handle this other promise. So the promise for the whole graph A is handled, and we never handled the promise for module B. + +NRO: So the fix here is to just change this InnerModuleEvaluation abstract operation to explicitly call the host hook that marks the promise as handled when we extract the rejection from the promise. And, well, editorially, I’m doing this as a new AO because it's also used by the import defer proposal; otherwise we would just have it inline in the module evaluation algorithm. + +NRO: Are there observable consequences to this? Yes and no. Technically this is a normative change: as in the example before, this is observable because it changes the way host hooks are called, and usually that affects how some events are fired. However, on the web, the only non-cyclic Module Records we have are Synthetic Module Records, and we already have the values; we’re just packaging them in a module after creating them, so that promise is never rejected, and this is not observable. Outside of the web, we have CommonJS: when you import from a .cjs file, it would be wrapped in its own Module Record, and we evaluate the particular CJS module in the `.Evaluate()` method of the module record. However, Node.js does not expose the promise for that internal module as rejected through their rejection event, because maybe they don’t actually create the promise; I don’t know how it’s implemented. So Node.js already implements the behavior that we will get by fixing this. Node does not implement the bug. So, yeah, to conclude, is there consensus on fixing this? There’s the pull request ([#3535](https://github.com/tc39/ecma262/pull/3535)) already reviewed in the 262 repository. + +MM: Great. So I’ll start with the easy question. You mentioned the situation where there exists a promise that when born is already settled, and I understand why, and it all makes sense. I just want to verify that it does not violate the constraint, the invariant, that user code cannot tell synchronously whether a promise is settled or not; that the only way user code can sense anything is asynchronously.
It finds out that a promise is settled. Is that correct? + +NRO: It’s correct, because the way you can check this is through dynamic import, and you get a promise anyway. And also this promise is not a promise that was provided by the user; it was just a promise that was provided by a spec to a spec. + +MM: Great. And the concept of internal promises, or promises which are spec fictions, leads me to the more interesting question, which is the one that MAH posted on the PR that you already responded to. Could you recount that, and then I’ll respond after that. + +NRO: Yes. So MAH was asking if internal spec promises are observable to the host hook. And I believe unfortunately the answer is yes, because if you reject a promise, it will call this host hook, and it’s just the host that will have to know, "oh, this is an internal promise, let’s not give it to the user", which I know is not the answer you’re hoping for. And it’s not just this specific module case, it’s about all internal spec promises. + +MM: You’re right, it’s not the answer I’m hoping for. It’s only being directly made observable to the host hook, and it’s only indirectly observable to JavaScript code according to the behavior of the host hook. The problem is that right now, the behavior of the existing host hooks for this does reflect it back to JavaScript code: these internal spec promises do get [INAUDIBLE], as do promises that can be observed by JavaScript code. And I’ll just say, we’re rather aghast at the idea of the spec causing what were spec fiction concepts to need to be reified as promises that become observable by user code. + +NRO: Yeah, I guess I agree. I don’t know if hosts actually expose any of these promises, though. I didn’t check, outside of this one use case. + +MM: Okay. + +MM: Could the promises that are spec fictions in the module machinery remain unobservable, not reported as either handled or not handled, just not recorded? + +NRO: Are you asking me, or in general to the committee? I feel like this is a larger discussion. + +MM: I am asking you first. I think it could be done; I think we should, I just don’t know in depth. If you think we should and could, I recommend that we do. + +SYG: I have a clarifying question: what is a spec fiction promise in this case? Is it something that is synchronously accounted? Like, you can write a loop that counts how many times you went back to the microtask queue, and that is observable: which tick something is or is not scheduled on becomes observable when you basically race it with, like, a `for await` loop that counts. + +MM: That is an interesting intermediate case; thanks for raising that, I was not thinking about that. So, what I think of as marking an object as a spec fiction versus not is whether the user code itself can get access to the object, get connected to the object. Does the object become reified? An object whose only behaviour observable by user code is additional ticks on the queue: those ticks could be explained by just advancing the queue by some other means, or they could be explained by promises that are spec fictions, and we can still call them spec fictions the same way that other objects are spec fictions. They have observable effects, which is why we have them at all, but your code can never get a hold of them.
And the original example that made me aware of the distinction is the sync-to-async adapter thing, which is only ever explained as an additional object, but there is no way for user code to get ahold of the object. + +SYG: I think I see the remedy in this case. The question you had asked NRO was whether we can get rid of them, and whether we can do so. To clarify, your preference would be: if we can get rid of them, by "getting rid of them" you mean removing even the construction of such promises from the spec, but keeping the observable behaviour the same with other explanatory means? + +MM: That would satisfy my constraints, and I am not suggesting that we need to do that. If there is some other means by which the spec fiction promises can be distinguished in this PR, so that their rejected-or-handled status is not reflected to JavaScript user code, that would be satisfying. And the mildest thing that would be satisfying (I am not sure that I am happy with this, but I will suggest it to put it on the table) is this: since it depends on the behaviour of the host hook whether to reflect the report back to JavaScript code, simply making the spec promise observable to the host is not yet in violation of the language invariant, and it leaves it to the host whether or not to violate it. The path of least resistance, if we just accept this PR with no normative note, is that hosts will reflect the spec fiction promises back to user code the way they reflect other promises back to user code. So if somehow we were able to make clear that we are advising hosts not to reflect these back to user code, and to provide in the host hook enough information for hosts to make that decision, that would likely satisfy the concern that I have. And— + +CDA: Okay, a quick note: we are about 8 minutes past time for this topic. + +NRO: In this specific case the promise is provided to us by the host in the `Evaluate()` method of the module record, and so we don’t know if the promise is fictional or a promise that is supposed to be used in some other way; it is not created by us in this instance. + +MM: I understand that technically, but in terms of what the practical status quo on the ground is, do we know of any host behaviors where these particular promises do get exposed to JavaScript user code, other than by the rejection tracking? + +NRO: I don’t know. + +MM: Okay. So let me say, I am in favor overall of the direction of this, but I do feel like, with us being out of time, I need to withhold consensus until we resolve this issue. + +NRO: Let me see if we can talk and come back to the meeting on the last day. + +MM: Okay. + +KG: MM, I cannot imagine any outcome here where the particular behaviour being changed isn’t part of whatever it is that you end up looking for. So it seems like we have consensus on this particular change, even though there are other changes that you would like as well; the change here is a change to a particular piece of behaviour. + +MM: Okay, that is a very good point, that is a very good point. The things I want to not be observable are already observable, just with the wrong tracking, and we are fixing how these inappropriately observable promises are tracked, rather than fixing whether they are observable. Is that correct? + +KG: This is causing them to be tracked in a way that results in them not being observable in practice, even though in some sense they are actually observable to the host.
+ +MM: I am not worried about a malicious host; the issue is existing hosts, and hosts that follow the path of least resistance in implementing this once it is in the spec, causing inadvertent observability of these promises. And so, yes, we might agree to consensus during this plenary, but I will withhold now for one final reason, which is simply that this objection was raised by MAH, who cannot be present at this moment but will be present later during the plenary. So, to KG, given your point: if MAH agrees, I am happy with consensus. + +CDA: We are at time and we need to move on, and SYG is on the queue. + +SYG: Thank you NRO for a very clear presentation on the problem; a lot of this machinery is messy, and this was extremely clear. Thank you. + +CDA: Yes, DE is noting a follow-up topic, and yes, we can schedule a continuation for this. + +### Summary + +* When using some types of non-JavaScript modules that throw during evaluation, the current spec does not call the HostPromiseRejectionTracker hook to mark the promise returned by .Evaluate() as handled. +* The normative PR fixes it by explicitly calling the host hook. + +### Conclusion + +Explicit support from multiple TC39 members including SYG. Blocked by MM due to a concern from MAH about spec-internal promises being exposed to user code through host hooks; a follow-on topic will continue this later. + +## Note about changed behavior of `Array.fromAsync` after landing #2600 + +Presenter: Kevin Gibbons (KG) + +* [proposal](https://github.com/tc39/proposal-array-from-async/issues/39#issuecomment-1526744932) +* (no slides) + +KG: Okay, let’s see, so, all right. As I mentioned during the updates, we have this very old PR ([#2600](https://github.com/tc39/ecma262/pull/2600)), and to recap what this PR does: When you have a `for await` loop that iterates over a synchronous iterator or iterable that will yield a promise that will reject, the original behaviour was that the `for await` loop would treat that as the async iterable throwing. Which is to say that the loop assumes the iterable has had a chance to do any cleanup that it needs to do before yielding such a promise. And this is not the case for sync iterables, so leaving them unclosed is a violation of the iterator protocol. And so, to ensure the synchronous iterator will have time to clean itself up, the change here was that now we close the iterator when it yields a rejected promise. The wrapper, which is the lifting of the sync iterator to an async iterator, checks if the sync iterator yields a rejected promise and closes the underlying iterator, on the assumption that the consuming `for await` loop would not close the iterator itself. + +KG: And that is a very good change. There is an invariant that we are supposed to close iterators 100% of the time when we are done with them, and this is a necessary change to achieve that. + +KG: So there is also an outstanding proposal, `Array.fromAsync`, which is Stage 3, although I do believe it has implementations in all browsers, which is basically a `for await` loop which will collect values. And it in fact uses the same spec machinery as `for await` loops. So when we made this change to the machinery for `for await` loops, it affected the behavior of `Array.fromAsync` when consuming a sync iterator which yields rejected promises. + +KG: So this PR had the consequence of the behavior of `Array.fromAsync` changing.
It's not obvious from looking at the PR, because `Array.fromAsync` is not in the specification, and it is not obvious if you are looking at `Array.fromAsync`, because nothing has changed in `Array.fromAsync`. But we changed a bit of the machinery `Array.fromAsync` was using, and the machinery was not in the same place as the thing that was using it, and so I wanted to put that on the agenda to call out the distinct change that happened so no one is surprised. + +KG: I believe the champions are in the process of getting tests written for this behavior (I don’t know if there was a test for the old behavior), and it hopefully should be a straightforward change; in some engines, they might have been using the same machinery internally as well, and it might have gotten fixed automatically. But this is a heads up about this weird case where we made a number of changes to the machinery that the proposal was using, and that changed the proposal, and I don’t know if that has come up before. Yeah, that is all I had to say. + +JSC: Like KG said, there is a test pull request for test262 already open. This is a testable observable change. V8 should already pass it, while other engines I tested do not yet pass it. Also, work on `Array.fromAsync` has resumed. Hopefully it will reach Stage 4 within the year. That is all. + +USA: That was it for the queue. KG, would you like to conclude? + +KG: There was no request for consensus, so that's all. + +USA: Yeah. All right, I guess that is it then. + +### Summary + +The committee is advised that landing [tc39/ecma262#2600](https://github.com/tc39/ecma262/pull/2600/) resulted in a change in the behavior of the widely implemented `Array.fromAsync` proposal despite no changes in its spec text. Test262 tests have been updated at https://github.com/tc39/test262/pull/4450. + +### Conclusion + +This was just a notification to the committee; no consensus was needed. + +## `AsyncContext` Stage 2 Update + +Presenter: Andreu Botella (ABO) + +* [proposal](https://github.com/tc39/proposal-async-context) +* [slides](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/) + +ABO: So this is an update on `AsyncContext`, focusing on the use cases and some updates on the web integration, after some negative feedback that we got from Mozilla. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_13#slide=id.g3484e1b5507_0_13) + +ABO: And first of all, on the use cases. When we were talking about the proposal previously, the things we were focusing on were: `AsyncContext` is a power user feature meant for library authors (such as OpenTelemetry maintainers) and not so much for the majority of web developers. And one use case is enabling tracing in the browser, which is currently only possible in Node.js through AsyncLocalStorage, or in other runtimes that implement it such as Deno or Bun. With AsyncContext, this would be possible on the web as well. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_19#slide=id.g3484e1b5507_0_19) + +ABO: And all of that is correct. However, there are two clarifications on the use cases that we have not made as strongly as we are making now: + +* `AsyncContext` would be used by library authors to improve the user experience of library users. +* And that `AsyncContext` is incredibly useful in many front-end frameworks, regardless of the tracing use case.
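+
+As a reminder of the core API shape (a rough sketch based on the proposal repository; names may still change):
+
+```js
+const requestId = new AsyncContext.Variable();
+
+async function handle() {
+  console.log(requestId.get()); // "req-1"
+  await new Promise((resolve) => setTimeout(resolve, 10));
+  console.log(requestId.get()); // still "req-1": the value flows across the await
+}
+
+// run() sets the variable's value for the duration of the call, and for any
+// async work flowing from it, without manual plumbing through parameters.
+requestId.run("req-1", handle);
+```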
+ +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_25#slide=id.g3484e1b5507_0_25) + +ABO: And so we have actually had some conversations with some frontend frameworks, and we are covering here some of the highlights. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_47#slide=id.g3484e1b5507_0_47) + +ABO: The current status in frameworks is that you have some things where you have confusing and hard-to-debug behaviour. For example, with React, if you have an async function as the transition callback and you have an `await` inside of it, anything after the `await` gets lost and is not marked as a transition (see the sketch after the Vue example below). And the React documentation says this is a limitation of JavaScript, and that it’s waiting on `AsyncContext` to be implemented to fix this. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_70#slide=id.g3484e1b5507_0_70) + +ABO: Another thing some frameworks do to avoid this is to transpile all async code. This can be as simple as wrapping `await` with this `withAsyncContext` function, in the case of Vue. And that will let them deal with things, but you need to transpile everything through the entire code base, possibly including third-party code. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g304d6459cbf_1_1#slide=id.g304d6459cbf_1_1) + +ABO: So about the use cases for certain frameworks: React has transitions and actions. If you have `async` inside one of those, React would need to understand that it is a series of state changes that should be coordinated together into a single UI transition. The alternative is having developers pass a context object through to every related API, which would be easy for them to forget, or transpiling everything, which for React would be invasive and a non-starter. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g304d6459cbf_1_16#slide=id.g304d6459cbf_1_16) + +ABO: In the case of Solid.js, they have a tracking scope and an ownership context. Since this is a signal-based framework, they use these to collect nested signals and handle automatic disposal of them. And if you have `await` in them, you will lose both contexts. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g304d6459cbf_1_32#slide=id.g304d6459cbf_1_32) + +ABO: For Svelte, on the server they have a `getRequestEvent` function to read information about the current request context. They’d also like to have a similar thing for client-side navigations, but that is currently impossible. Once again they could do this by transforming await expressions—again, transpilation—but they can only do that in certain contexts, which would lead to confusing discrepancies. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g304d6459cbf_1_44#slide=id.g304d6459cbf_1_44) + +ABO: In the case of Vue, there is an active component context which can be propagated with await, but it only works when you have a build step with Vue single-file components, and not in plain JS.
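+
+To make the React example above concrete, here is a sketch of the behaviour being described (hypothetical application code; `startTransition` is React's API, while `setTab` and `fetchTab` are made-up names):
+
+```js
+import { startTransition } from "react";
+
+startTransition(async () => {
+  setTab("loading");             // marked as part of the transition
+  const data = await fetchTab(); // the transition context is lost across the await…
+  setTab(data);                  // …so this update is no longer treated as a transition
+});
+```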
+ +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_79#slide=id.g3484e1b5507_0_79) + +ABO: If you have any cases that are relevant to front-end frameworks and would like to share them, please jump on the queue. It would be good to share them to convince implementers that this is really useful and would be worth the complexity. + +CZW: I would like to highlight Bloomberg’s internal use cases. We have an internal application framework called R+, and we actually use a mechanism to instrument the internal engine so that we don’t need to transpile user code and we can run multiple application bundles in a single JavaScript environment. We call this co-location, and it allows us to save resources and improve performance, given that we don’t have to create a bunch of new environments for each application bundle, and there is no RPC between them. + +CZW: In order to support co-location, we use this internal mechanism, which is similar to `AsyncContext`, to track callbacks and promises created by each application bundle, and we use this context information to associate app metadata. And this is crucial for us to improve our web application and developer experience, because developers don’t have to pass any of this application metadata around to support our co-location feature. So this feature is really important for Bloomberg’s use cases. + +SHS: Google uses a polyfill of this for interaction tracing and, secondarily, performance tracing. It's critical for us because we use frameworks with a lot of loose coupling, so that there aren't a lot of direct function calls where you could expand the parameters to pass additional tracer data explicitly. Examples of this kind of loose context would be event listeners, signal handlers / effects, and RPC middleware. In all these cases there is no way to pass tracer data explicitly. Beyond that, we are hoping that once the proposal is further along, we have a number of other possible use cases, like cancellation: an ambient AbortSignal would be a really useful thing to have, but that is lower priority, and so we're less interested in taking quite as big a risk by using it while it is still experimental. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_98#slide=id.g3484e1b5507_0_98) + +ABO: Thank you for sharing your use cases. Now I will give an update on the web integration. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_103#slide=id.g3484e1b5507_0_103) + +ABO: So the last time that we presented this in full was in Tokyo, and we gave a brief summary of the changes since then in December; but basically, one of the things that Mozilla highlighted for this proposal was that it increases the size of potential memory leaks. + +ABO: If you have this in the web, this code used to only keep alive the callback and any scopes it closes over. If there can still be a click event, the callback is not a leak, and for the scopes it closes over, it is only a leak if they keep alive things that are not used by the function. And I know that sometimes engines keep more things alive than they should for closed-over scopes, but that is a trade-off they make.
+ +ABO: In the proposal as we presented it in Tokyo, `addEventListener` implicitly captures an `AsyncContext.Snapshot`, and a lot of the values in that snapshot will not be used by the callback, even if the snapshot itself is used, so this could be a leak—or will be a leak in most cases. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_119#slide=id.g3484e1b5507_0_119) + +ABO: And so the proposal has moved towards a model where the context always propagates from the place where the callback is triggered. So here you have a `click()` method on `HTMLElement` which causes a click event to be dispatched synchronously. And as part of that click, the context propagates from the `click` call to inside the callback, and it only stays alive while the event listeners are being dispatched, and that is it. + +ABO: If you have events that are dispatched async, like on an `XMLHttpRequest` object, when you call `send()` that context will be stored for the duration of the HTTP request, and when it fires the final event it can be released. This is what we are calling the dispatch context. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_142#slide=id.g3484e1b5507_0_142) + +ABO: For some APIs there is no difference, and the callback is passed at the same time that the work starts which will eventually cause it to trigger. The simplest example is `setTimeout`: in the old mental model, you pass the callback into the web API and thus it captures the context. In the new mental model, `setTimeout` starts an async operation to wait and then call the callback, and it propagates the context through that async operation. The behavior is the same, and it’s just like that for any APIs that take a callback and schedule it to be called at a later point. They will have the same behavior, so we can think of them with the new mental model. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g304d6459cbf_0_0#slide=id.g304d6459cbf_0_0) + +ABO: And for all APIs, the new behaviour should be what you would get if the APIs were internally implemented in JavaScript using promises and no manual context management. You could have an implementation of `setTimeout` that does a sleep and then calls the callback, and this would have the same behaviour (see the sketch below). And if every API works like this—if we make all web platform APIs behave like most other APIs that developers will interact with—it will reduce the cognitive overhead of having to think of the context. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_161#slide=id.g3484e1b5507_0_161) + +ABO: Now, in some cases execution of JavaScript code is not caused by other JavaScript code, and then there is no context. So if you have a user click that will trigger the click listener, then there is no context, because the source of that event does not come from JavaScript but comes from the browser or the user. And this would be the case for events coming from outside the current agent. In this case, this JS code would run in the “root context”, with all variables set to their initial values: the same context that an agent has when it starts.
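+
+A sketch of the self-hosted `setTimeout` mental model described above, assuming the proposal's semantics where `await` carries the context across the suspension (`sleep` is a hypothetical promise-returning wait):
+
+```js
+// If setTimeout were written in JavaScript on top of promises, with no manual
+// context management, the callback would naturally run in the caller's context:
+async function setTimeoutSketch(callback, ms) {
+  await sleep(ms); // the context from the setTimeoutSketch caller is restored here
+  callback();      // …so the callback observes that same context
+}
+```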
+ +ABO: Now there are some cases where you have regions of code—for example, on the server side, to track a particular request—and you want to identify the different regions of code. And if you have something like one of these events that run in the root context, it would lose track of which region it’s in. So because of this we have a scoped fallback mechanism to provide fallback values, which are independent for each `AsyncContext.Variable`. You have an API that sets this for each `AsyncContext.Variable`, and it will store the value at that point, and it would be set for any event listeners that are registered within that region. And so the context would have all variables set to their initial values, except for the variables which have fallback values. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_174#slide=id.g3484e1b5507_0_174) + +ABO: And here you can read more details about the web integration or the memory aspects of the proposal. + +SYG: Clarifying question—I did not quite understand how the new mental model keeps working for `setTimeout`. Maybe it helps more if we go to the proposal slide (17). Could you explain: if the callback is no longer a thing that captures the `AsyncContext` at the point when `addEventListener` is called, something still has to propagate the original `AsyncContext`. How can the behaviour not be changed from the current mental model if the callback no longer captures the AsyncContext? + +ABO: Because in the Ecma-262 part of the proposal, `await` propagates the context from before the `await` to after the `await`. + +SYG: Oh I see. And my follow-up here—maybe just walk me through how this meaningfully reduces the time for which a context would be kept alive, given the leak concern? + +ABO: In the previous proposal for web integration, which we covered in Tokyo, calling `addEventListener` with the callback would store the context that was current when `addEventListener` was called, and that would stay alive forever unless you called `removeEventListener` with the same callback. + +SYG: Does this change `setTimeout`? The click thing I understand, because you changed it to propagate the root context instead of the captured context—you just removed the capturing—but does this behaviour change `setTimeout`? + +ABO: For `setTimeout` there is no difference, but we have this here because we are describing this with a new mental model, and with this mental model, the `setTimeout` behaviour is the same. + +SYG: That makes sense. + +DLM: First off I would like to thank the champions for the work they have done in putting together this presentation and reaching out to people that are involved in frameworks. With that being said, we still continue to have some concerns around web integration; our concern is that it’s going to be a large amount of work to implement. And I think the use cases are better stated now, but I don’t know if that has fully changed our calculations in terms of whether the use cases justify what we see as a very large implementation effort.
One thing I do kind of see in the frameworks use cases is that it does not appear that people are necessarily looking for web integration of APIs; this is kind of like the basic—not basic, but the more linguistic JavaScript functionality. With that being said, I think we have represented our point of view, and I would like to hear, not necessarily at this meeting, from other implementers whether they share the concerns about the amount of work that might be involved in the web integration. + +ABO: I think SHS’s was one example of a use case that did need the web integration, if I understood correctly? + +NRO: Yes, so, thank you. DLM, is there any suggestion of how to change the web integration? Knowing what the right way would be would make it easier to adapt it the right way. + +DLM: I don’t have a specific suggestion. There has been some work done to address our concerns about the memory leaks, but I think there is an issue on the queue as well. I feel like we are going to have to change a very large number of APIs, and with the two different potential contexts, this is a manual process; we are going to have to change a lot of it, yes. I don’t have a simple suggestion of how this would work. + +SYG: That sounds like confirmation of the answer on the queue, that a lot of the work is in the number of APIs in the implementation that need to be made context-aware. + +DLM: Yes, that is correct, and at least in our initial analysis, it feels like a case-by-case basis—it was not just one place to change things; it feels like we would have to do things not per individual API, but in a number of different places depending on the type of API. + +SHS: I don’t remember quite what you are thinking about in terms of web integration, but I will say that we do want to make sure that the context actually propagates coherently across both language built-ins and web APIs. + +DE: Definitely a goal for the web integration design was to be consistent and, hopefully, to be mostly inferred from WebIDL. All of the stuff about falling back to the root context is a simplification versus previous versions, and is towards the “doing nothing” direction, away from trying to solve all of the things. I hope that Igalia can show this in a generic way, rather than implying per-API work. That work on generic framing has not been completed yet, but that is the direction. We have a principle that everyone can intuit the behavior. In writing the spec, it could be centralized in, like, one or two places in WebIDL, and similarly for implementations. That is the goal, and I guess the presenters are being conservative now because it has not been totally proven out, but I understand it would be necessary to meet those criteria before this can be accepted. DLM, if the context were propagated in this regular way, would that make it acceptable? Is that the kind of thing you are looking for? + +DLM: I am not sure if it can be done that way, but yes, I think that would address a lot of our concerns. + +NRO: It is not possible to do in general: it is possible for `setTimeout` but not for events. It can be done semi-automatically in specs, but not through something you can auto-generate with WebIDL. + +DE: If events can be changed in one way, then that still meets the criteria that I am describing. So let’s think offline. This logical principle that developers can follow and that spec writers can follow is a positive step.
I am looking forward to seeing how it is proposed that you update the spec. I know this is something you have been working on, and I am looking forward to seeing it. + +SYG: DLM, to your earlier question about positions: in the beginning, we in Chrome shared a lot of the concerns, about memory and about complexity, not just leaks—you would need to keep a context per await, maybe like a tree of contexts—the usual implementation concerns. Currently we are positive despite those concerns. I don’t think those concerns have gone away, and we remain engaged despite them. It is true that each API is small, but the frameworks, libraries, and products that are eager to adopt this ASAP are adopting it explicitly for these capabilities, and they have a pretty wide reach among users of the web. Because of that alone, I think it is worth being positive on it, the amount of work granted; I cannot say I am happy about that either, but I am happy with the way it is going, provided it does not hurt the primary goal. Our position, to reiterate, is positive, and the payoff here has, I think, been demonstrated to be not speculative: there are multiple people on the record saying they will adopt this, which is relatively rare for the things that we are pushing. + +DLM: Thank you. + +CDA: That is it for the queue. + +ABO: Yup, so, this was basically it. This was the Stage 2 update, and you can read more details at these links. Thank you. + +### Speaker's Summary of Key Points + +This presentation focused on two main updates, addressing part of Mozilla's negative feedback about the complexity of the proposal and lack of use cases: + +* feedback from frameworks, about their use cases and about their need for `AsyncContext` to improve the DX for their users +* some changes to the web integration to reduce the number of snapshots that get captured and kept alive for too long + +### Conclusion + +Multiple frontend web frameworks are eagerly waiting for `AsyncContext` to ship in browsers, to enable async/await in developers’ codebases without breaking framework-level tracking. However, while the use cases have been found convincing, it's still not clear that they are worth the implementation cost required by the proposal’s web integration. Different browsers have opposite opinions about this tradeoff. + +## Temporal Update + +Presenter: Philip Chimento (PFC) + +* [proposal](https://github.com/tc39/proposal-temporal) +* [slides](https://ptomato.name/talks/tc39-2025-04/#1) + +PFC: One day early, which is something you can calculate with Temporal! My name is Philip Chimento, I work at the TC39 member Igalia, and we are doing this work in partnership with Bloomberg. I brought the news last time that Temporal is shipping in Firefox, and it is available in nightly builds now. There have been some open questions raised about how to coordinate, specifically, the behaviors that the spec calls "locale-defined". We are making sure that those are sufficiently coordinated between implementations, and TG2 is addressing those questions. We will continue to analyze the code coverage and answer any questions that implementations have. + +[Slide](https://ptomato.name/talks/tc39-2025-04/#3) + +PFC: I have this graph every time, showing the percentage of test conformance for each implementation that has implemented Temporal.
We added more tests since last time, so the baseline goes down slightly. But it looks like GraalJS and Boa—particularly Boa—have made specific gains in conformance to the spec. Some of the bars have gone down by imperceptible amounts, but the graph looks on the whole fuller than it did last time. Obligatory note that this is not percentage done, but percentage of tests passing. + +[Slide](https://ptomato.name/talks/tc39-2025-04/#4) + +PFC: I wanted to highlight some new information about the use of BigInts in the spec. Previously there were concerns about this, and I showed in a previous presentation that you do not need to use BigInts internally: you can use 75 integer bits and divide them however you like over 64- or 32-bit integers. I ran across an interesting paper recently, which is cited here, and did a quick proof of concept that represented epoch nanoseconds and time durations each as a pair of 64-bit floats. So you don’t have to deal with nonstandard-size integers. Just give me a shout if this is interesting to you for your implementation. There is a proof of concept in JavaScript using two JavaScript Numbers, and it does all the necessary calculations correctly, including the weird extra-precise floating-point division in `Temporal.Duration.prototype.total`. + +DLM: I just wanted to say that we're planning to ship this in Firefox 139. + +SYG: What is this locale dependence? + +PFC: This one specifically is about the era codes that CLDR provides. I can link the issue if you want to read up on it. + +SYG: I am wondering, given that this—like all of Intl—depends on locale data, what is special about this case? + +PFC: Let me pull up the issue. There are a couple of issues in the Intl Era and Month Code proposal, which is a separate proposal that we hope to present at the next meeting. One of the issues is where the year zero starts in the eras of various calendars. Another one is the constraining behaviour for nonexistent leap months, which is calendar-dependent. These are things that CLDR does not necessarily define currently, and it should. So the issue is agreeing on the behaviour that CLDR should have, so that it gets reflected in the various internationalization libraries that will get pulled in by the implementations. ([tc39/proposal-intl-era-monthcode#32](https://github.com/tc39/proposal-intl-era-monthcode/issues/32), [tc39/proposal-intl-era-monthcode#30](https://github.com/tc39/proposal-intl-era-monthcode/issues/30), [tc39/proposal-intl-era-monthcode#27](https://github.com/tc39/proposal-intl-era-monthcode/issues/27), plus various bikeshedding threads about updating the era codes provided by CLDR) + +SYG: I see, makes sense. + +PFC: If there are no more questions, I think we can conclude and I will put a summary in the notes. + +### Speaker's Summary of Key Points + +* Firefox 139 will ship Temporal. +* Boa and GraalJS have substantially increased their conformance with the test suite. +* There's a proof of concept available for doing all the BigInt or mathematical value calculations in the spec, using a pair of JS Numbers. +* TG2 is discussing some locale-specific behaviour in the Intl Era and Month Code proposal. + +### Conclusion + +Temporal is at Stage 3 and ready to ship. + +## Composite Keys for stage 1 + +Presenter: Ashley Claymore (ACE) + +* [proposal](https://github.com/tc39/proposal-composites) +* [slides](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/) + +ACE: So hi, I am Ashley.
I am one of the Bloomberg delegates, and I am excited to actually be proposing something today. I have presented, I think, three times on this design space, never proposing anything, just trying to share my current thoughts and to elicit feedback. And particularly, based on the feedback and the conversations we had in Seattle, I felt like the time had come for a proposal, and here we are. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g334e668a325_0_0#slide=id.g334e668a325_0_0) + +ACE: So this follows on very much from the previous presentation I gave in February, and the ones before that. So I don’t want to recap too much stuff from those. I will do my best to make this accessible to as wide a group as possible, but I would encourage people to look at the previous slides if they feel like they do need more context. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g34a6082a0da_0_0#slide=id.g34a6082a0da_0_0) + +ACE: So I will be asking for Stage 1, and some people might think, "a lot of this is very similar to records and tuples, and that’s a Stage 2 proposal, so what is going on?". Separately from this session, I put on the agenda a request to withdraw the Records & Tuples proposal, and this current agenda item is for a new proposal that I see as a reimagining of a very similar problem space. And I think it’s significant enough of a reimagining that it just makes sense, and it’s easier all around, to start from the start as Stage 0 and see if we want to do Stage 1. With a new kind of branding, even if we end up calling things records or tuples, this is the best way process-wise, not only for us in the committee but for the general JavaScript ecosystem, to help everyone follow what is happening. + +ACE: So I don’t want to focus too much on Records & Tuples being withdrawn; I have a separate item on the agenda for that, which is currently set for tomorrow (note: it ended up happening later the same day). + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g2af82517ce6_0_26#slide=id.g2af82517ce6_0_26) + +ACE: This problem space I keep referring to, it’s about this situation you may find yourself in. You have got objects that represent the same data. Two positions, both representing the same coordinates. But when you put them in a Set, you still have two things in that Set. I am using Sets here because it’s easier to talk about, but it’s the same with Maps. Sets and Maps work great with strings and numbers, but when you have an object it really only works if the thing you care about is the object’s identity, not the data it represents. This is unlike other languages, where it's common to be able to override that behavior for objects. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_7#slide=id.g3479f757b84_0_7) + +ACE: So what do JavaScript developers do today? There could be a library solving this, but I think what I see a lot of is: no need to reach out for a library when we have `JSON.stringify`. So this gives people a seemingly really quick fix for this problem. Because now, I add my two positions to the set and the set is size 1. But I now have so many other problems that I am perhaps not even aware of, because I am copying how I see other code handle this and am just falling into the same trap as everyone else.
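+
+A sketch of the quick fix being described, and why it only seems to work (plain JavaScript; the pitfalls are covered on the next slide):
+
+```js
+const seen = new Set();
+seen.add(JSON.stringify({ x: 1, y: 2 }));
+seen.add(JSON.stringify({ x: 1, y: 2 }));
+seen.size; // 1: appears to solve the problem…
+
+seen.add(JSON.stringify({ y: 2, x: 1 })); // same data, different key order
+seen.size; // 2: …but it is key-order sensitive, and the set now holds strings
+```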
+ +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g2af82517ce6_0_39#slide=id.g2af82517ce6_0_39) + +ACE: So `JSON.stringify` is impacted by key order. If you have two objects implementing the same interface but created in different areas of a codebase, with different key order, they stringify to different strings; it’s not safe. Also, some values will throw, BigInt for example. Other values can be lossy: `NaN` becomes `null`, and there are other examples of things losing information when they become a string. And also, not in all cases, but it’s easy to think of lots of cases where the string representation of something occupies a lot more memory. And at the end of the day, you have a string. So your sets and maps are now filled with strings. If you want to iterate over those and do something with them, then you want to go back the other way, turning the string back into an object. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g34a0f634362_0_0#slide=id.g34a0f634362_0_0) + +ACE: So this is all not great. And it’s a bit of a problem. CZW actually reached out to me after seeing these slides and said that they do exactly this in the OpenTelemetry package, and this is a snippet of it—they have this whole custom HashMap, but I am just showing part of that code here. It uses `JSON.stringify` and stores two maps so you can do the reverse mapping. And you can see here, they have taken into account one level of sorting the keys, because they know that these objects just have one level. So I am not just making this up. This is what people do today. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_17#slide=id.g3479f757b84_0_17) + +ACE: So what am I proposing? I am going to propose something that maybe looks more like a solution. And that’s maybe wrong: why am I proposing a solution when we should, at Stage 1, be focussing on a problem? The reason I am proposing something that looks like a solution is, one, we have been talking about this problem space for, like, at least four years while I’ve been in the committee, and I am sure it dates further back than that. So I think the thing that is really needed here is actually: what are we doing? Especially as there have been other proposals in this space, I think it’s important that this proposal is not only stating the problems, but how it’s intending to address them. Also, even with the things I am going to propose, there’s plenty of design space to talk about—this is by no means a complete solution. It’s just the core of the idea. The names and API can change. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g336fbd25823_0_0#slide=id.g336fbd25823_0_0) + +ACE: So what is that idea? The idea is a new thing in the language; I am calling them "composites" for now. When I put one into a Set, the Set sees that the things I am putting into the Set are composites, and it switches to the new behavior, where it sees these things are equal according to how composites are equal, which I will explain later. And now, I only have one in the Set. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g337bd48536b_3_0#slide=id.g337bd48536b_3_0) + +ACE: So what are these composites? These are objects, not new primitives being added to the language.
And parts of this proposal are driven by feedback we got on Records & Tuples: not only the implementation complexity—hopefully you can see how there is lower complexity in this implementation—but also the developer understanding of the language. Like, there was concern about introducing new primitives on both sides, the developer experience and the implementer experience. So these are not new primitives. They are objects. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_33#slide=id.g3479f757b84_0_33) + +ACE: And you always get back a new object from this thing. There’s no reliance on garbage collection and GC semantics to trick the sets into saying these things are equal. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_38#slide=id.g3479f757b84_0_38) + +ACE: And they don’t modify the object—it isn’t like `Object.freeze`. The argument I am passing in is, and MM gave me a useful word here, we can see this as *coercing* the input to a composite, in a way. Or it’s taking the argument as a "template" for what the composite should contain. It’s not modifying the input to become a composite itself. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_44#slide=id.g3479f757b84_0_44) + +ACE: Here I show that the function throws when called with `new`. Maybe this bit should change. But the way I was thinking of them is that they’re not classes with a prototype. Instead, this is like a factory function. Maybe this is something we should discuss, maybe during Stage 1. But that is what I was thinking; it’s not like a class hierarchy. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_50#slide=id.g3479f757b84_0_50) + +ACE: The argument you’d pass has to be an object. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_56#slide=id.g3479f757b84_0_56) + +ACE: And the composite is frozen from birth. So you can never observe a composite in a mutable state. A composite is always frozen. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_62#slide=id.g3479f757b84_0_62) + +ACE: And they’re not opaque. You can see the things that a composite holds as its constituents. So I have created a composite that has 'x' set to '1'. And then if I look at the keys on that composite, it has a key of 'x', and I can read that and get '1' back out. If you have a map or set with composites as keys, you can iterate over them and use them as data without having to do a reverse mapping. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_68#slide=id.g3479f757b84_0_68) + +ACE: They’re generic, and by generic, I mean they can store `T`; they can store any value. They’re not like records and tuples, which were primitives that can only contain more primitives. So here, you can put a Date object in, and then if I read that property back out I get the original reference to that object. It’s not deeply converting everything. It’s saying: here I have a property 'd', and that stores the reference to that date.
And I am also thinking you should be able to store negative zero, and maybe that’s another thing we should discuss, maybe during Stage 1. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_68#slide=id.g3479f757b84_0_68) + +ACE: So yes, there’s two things. One, it’s not doing a deep conversion. Two, you can store any value in here. So that means these things aren’t necessarily deeply immutable, but they could be if everything you put in them is deeply immutable. So they don’t give you that guarantee, but you certainly can construct deeply immutable data from them. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_242#slide=id.g3479f757b84_0_242) + +ACE: There will be a way that you can check if an object is one of these special composites. If you created a proxy of one of these, it would be false. It’s not like `Array.isArray`, where you can check the proxy's target. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_156#slide=id.g3479f757b84_0_156) + +ACE: So that is what they are on their own. I guess the thing that is more exciting about them is how they are equal to each other. So the simplest possible case… + +DLM: There’s a clarifying question in the queue. + +JLS: Just a question there. The properties passed in, are they also frozen deeply? So if I have an existing object, and it’s one of the properties I am passing in—I have an example there in the queue… in the question itself. + +ACE: There’s no deep conversion. In your example, if you create a composite with a property `foo` that is an object, that object is not modified or touched in any way. The composite only contains a reference to that original object. + +JLS: Okay. + +ACE: So the composite itself is frozen, but the things it references don't necessarily need to be. + +JLS: So the equality, then, that you spoke of using a composite in a set, it’s—is that equality a deep equality? Or… + +ACE: I will come on to that. + +JLS: Okay. Thank you. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_162#slide=id.g3479f757b84_0_162) + +ACE: So yes. A more interesting example is these two things. Both have an x and a y; the key order doesn't matter. And there is a choice in how that is achieved: does it just ignore the ordering when comparing, or does it kind of try to sort the keys when it creates them? That gets us into a bunch of questions about symbol keys. So at the moment, I am thinking it doesn’t sort the keys. So here I have two composites. If you ask the first one for its keys, it gives x and y, and the second one gives you y then x. But when you’re comparing them, that wouldn’t matter. There’s an issue on the proposal about whether we want to do something different. But in general, the goal, however we achieve it, is that you shouldn’t have to worry about key order. That’s one of the problems we are trying to solve. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_168#slide=id.g3479f757b84_0_168) + +ACE: So the equality is symmetric. Checking if A equals B is no different from asking if B equals A. It doesn’t matter if one is a subset of the other. These aren’t equal, because one has extra keys compared to the other.
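+
+In code, the equality behaviour described so far might look like this (assuming the `Composite` API sketched in these slides; names are not final):
+
+```js
+const a = Composite({ x: 1, y: 2 });
+const b = Composite({ y: 2, x: 1 }); // same data, different key order
+
+Composite.isComposite(a);                 // true (and composites are frozen from birth)
+Composite.equals(a, b);                   // true: key order does not matter
+Composite.equals(a, Composite({ x: 1 })); // false: different key sets
+new Set([a, b]).size;                     // 1: Sets compare composites by value
+```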
+ +ACE: So to JLS's question, "is it deep?". It is deep while the kind of backbone that it’s following is still a composite. So as it’s walking, every time it sees a composite, it keeps using recursion to check if they are equal. If you have two big trees made of composites, then it’s doing a deep comparison. But as soon as you have something that is a regular object, then you are back to pointer equality. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_180#slide=id.g3479f757b84_0_180) + +ACE: So here, these are not equal, because the composites are referencing two different objects. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_185#slide=id.g3479f757b84_0_185) + +ACE: Whereas this is equal, because they’re both referring to the same object. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_190#slide=id.g3479f757b84_0_190) + +ACE: So what does that look like in pseudo-JavaScript code? `Composite.equals` starts with this base case of SameValueZero. Though, again, maybe this is something we should discuss in Stage 1; maybe it shouldn’t be SameValueZero. The alternative here is that we have SameValue—one of those two operations. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g34c13560fa8_0_28#slide=id.g34c13560fa8_0_28) + +ACE: Then if either one of the arguments isn’t a composite, then they are not going to be equal. Otherwise, both arguments are composites, so let’s compare them using this secondary 'equalComposites' function. + +ACE: So we first get the keys of one and compare them to the keys of the other. They have to have the same set of keys; otherwise, we return false. + +ACE: And then we loop through the keys and recurse back to the beginning: are the values of the two keys equal? + +ACE: The main thing I want to show here is that when you are comparing composites, you have lots of opportunities to return false early. The worst-case comparison is when the two things are equal; that is when you have to get all the way to the end to be confident of that. Unless you’re literally comparing a composite to its literal pointer self, in which case that would be an immediate `return true`. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_80#slide=id.g3479f757b84_0_80) + +ACE: So the really good things about this equality are all of these. It’s guaranteed to have no side effects. These things can’t be proxies. They don’t have any traps; asking for the keys and reading those values is always safe. The words I was looking for earlier: symmetric, reflexive. All of the things required to be well-behaved map and set keys. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_200#slide=id.g3479f757b84_0_200) + +ACE: So where would this equality appear? It definitely appears if you do `Composite.equals`. And then the real key part of this idea is that it kind of works out of the box for Maps and Sets. And then also the other places that currently use SameValueZero, which would be `Array.prototype.includes`; and it feels wrong if we do `Array.prototype.includes` without also `indexOf` and `lastIndexOf`.
So we wouldn’t be changing those for existing values; they would still use strict equality unless the thing you are passing as the argument is a composite. + +ACE: So there’s no, like, web-compatibility-breaking change to any of these things. The semantics are identical to the current semantics; it’s when the argument is a composite that it uses the new semantics. I guess that asterisk applies to all of them. Mainly, I am trying to say, for `indexOf` we are definitely not changing from strict equality when the arguments are anything else. So `NaN`s are still not in arrays according to `indexOf`, but a composite containing `NaN` would be. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_208#slide=id.g3479f757b84_0_208) + +ACE: So they might also appear in future bits of spec which don’t exist yet, like MF’s proposed `Iterator.prototype.uniqueBy`. You can imagine that here, when you pass in the callback to say how things should be judged equal, the callback can return a composite. So under the hood it is using a set-like thing to then filter out the duplicate values from the iterator. So there’s opportunity for this to appear in more places in the future. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g347a4abe357_0_1#slide=id.g347a4abe357_0_1) + +ACE: So equality is linear, but in some negative cases it will be faster. Internally, the way people would need to implement these, and the way the example polyfill implements things, there is hashing under the hood. But it doesn’t expose that hash value in any way. When you are putting these things in a Map or a Set, it wouldn’t literally be scanning every composite and doing a fully linear scan. It would be doing, like, an initial hash lookup first, and then only needing to compare when there is a hash collision, for example when the composites are equal. And because these things are immutable from birth, there’s no way to create cycles in this equality. So you can traverse them safely without needing to keep track of where you have already been. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g2c6eebea946_0_97#slide=id.g2c6eebea946_0_97) + +ACE: So I have a bunch of bonus slides, but I will use them only if the topic comes up. I would like to go to the queue. So MM is up first. + +MM: Yeah. So the fact that `isComposite` as well as `Composite.equals` is operating on the argument instead of `this`, and that compositeness does not pierce proxies, looks like a dangerous precedent if it’s allowed to go by without examination. So I want to take a moment and explain why that’s okay in this case, and why it’s not symptomatic of a more general principle that leads to a wider precedent. + +MM: The reason why it might not be okay is the risk of ruining practical membrane transparency. Indeed, I imagine this does actually ruin practical membrane transparency for existing membrane code—membranes built out of proxies—faced with composites, where the membrane code did not know about the possibility of composites. + +MM: The reason why, going forward, this is repairable by the authors of the membrane code is that a viable way to restore practical transparency exists, due to the very passive nature of composites.
None of the operations on composites trigger user code. The contents of a composite are purely, you know, a simple object with frozen data-only properties. And, therefore, a membrane, when faced with a real target that is a composite, can simply produce another composite on the other side of the membrane, not a proxy on a composite; a proxy on a composite would have to go through all of the same issues as creating a shadow target. If the original composite referred to X, then the composite on the other side of the membrane generally has to refer to a proxy for X, and vice versa. But because there’s no user code involved, that will restore practical membrane transparency. I also want to—just remind everyone that there is the big issue—I see somebody mentioned that. Okay. That’s all I had to say. + +ACE: Yeah. Thanks. I agree. We shouldn’t naively see this as precedent for changing a general design constraint. This is an exception, not a new rule. + +MM: Good. + +LCA: Hey. About your slide on storing a Date object in a composite: many times those are objects that you want to compare by value, not by identity. So I was just wondering whether you had put any thought into how that could work? Like, do you think the approach here is that users should turn these values, like these Date objects, into strings before putting them into composites? Or do you imagine a `toComposite` method on these objects that would give you something to put in a composite? Or anything else? + +ACE: Yeah. So if—yeah. If we were starting Temporal in two years’ time and we already had composites, I think it would be really nice if Temporal objects were composite objects. Unfortunately, just the way that the language has been designed in a particular order means, yes, they wouldn’t be. + +ACE: And I think, at least for Temporal, the good thing there is that—in my understanding, please correct me if I am wrong—all Temporal values do have a canonical lossless string representation, especially now that we don’t have custom calendars. Yes, if you want to create a composite that has a start date and an end date, then to get the equality that you probably want, you would turn them into strings in that case. Or construct kind of your own—a different type of composite specifically for Temporal types, where the constituents of the composite are the parts of the Temporal type, so it's not flattened to a string. But yeah. Because we can’t make all Temporal things composites, it’s my understanding that this doesn’t just work out of the box, unfortunately. + +PFC: I agree, it won’t work out of the box. But there are probably ways to accommodate this use case with special cases in the composite factory function. It would be web compatible to have special cases there, because nothing has ever been passed to the composite function on the web before. You could, for example, say, if you pass a Temporal object to Composite, it will ignore expando properties and read the internal slots. I haven’t thought through the implications of the idea, but it's an example of something we could think of in the realm of special cases to make these use cases work. I do see that people want to use Temporal objects as hash keys. + +ACE: Yeah. I think this problem already exists. Like, it’s already the case that if I have a `Temporal.PlainMonthDay`, I can’t use it as a map key in a Temporal-specific domain.
So composites don’t introduce the issue, but perhaps only compound it, in that now, if I have two of them, I also can’t compose them together, because even on their own their equality in a Map is that of object identity. + +ACE: I would like to move on to WH. + +WH: I just wanted to double-check that, no matter what you pass to `Composite.equals`, it will not run user code? + +ACE: Correct. Assuming a well-behaved implementation that isn’t going to do something the spec says it shouldn’t do, then yes, there’s no user code. It checks if something is a composite, and it only reads the properties and interacts with the object if the object is a composite. And if the thing is a composite, then none of the operations used during the equality check can trigger user code. + +WH: Thank you. + +SYG: So we chatted about this with the V8 folks. The biggest piece of feedback we had was an alternative design which canonicalizes in the factory function. In that alternative, you de-duplicate in the factory function, using the equals semantics you have laid out, and return the canonical object that is the de-duplicated one. With that, the performance is different. You have a different bottleneck, where the canonicalization is slow. Because, for the same reason equals is linear in the worst case, in the worst case here you would have to check against this canonicalization table. And because this is canonical with respect to everything that you might create, the domain you are comparing against is possibly larger. On the other hand, you get other very nice benefits: you can continue to just use === everywhere, because it’s a canonical copy of the object, and as an object, it’s just pointer identity. Nothing else needs to change. The comparisons are fast. + +SYG: This tradeoff makes sense if indeed it’s for keys; it stands to reason you are creating keys less often than you are checking them for equality. So what are your thoughts on that alternative design instead of the current one? + +ACE: Yeah. I guess one thing about that design, whenever we have discussed it in the past, is that one of the constituents has to be an object. You can’t create a pair of two numbers, because if you return the canonical representation of that, its lifetime is infinite. What if you try to put that object in a WeakMap, a WeakRef, or a FinalizationRegistry? It has to live forever, because the canonicalization of two numbers has no expiry. I wouldn't want to say you can’t create a pair of two numbers; that doesn't feel great. It also sounds like it moves all of the work to the object creation, which was one of the concerns with Records & Tuples. Yes, the comparison is now cheaper, but if you are creating lots of these, you have to, like, eagerly do all the work up front. Whereas maybe you are just checking `Composite.equals`, and if the very first two keys are different, then you don't need to traverse the whole object and canonicalize it; you can see immediately they are not equal and stop working. + +ACE: I had assumed that this was off the table because of the discussions around Records & Tuples, because it has a lot of the same implementation complexity, minus introducing a new `typeof`. But if it’s on the table, I am certainly up for discussing it. + +SYG: I will respond to some of the points. The—remind me of the first point you made? + +ACE: …having an object constituent + +SYG: The WeakRef thing is true.
My response to that is: I think it would make sense, in the eager canonicalization alternative, that it would not be usable as a weak target, for the same reason as `Symbol.for` symbols. Even with composite keys as presented, that potential for surprise already exists. If people are using composite keys as, you know, a pair of two numbers, and they use that composite as a key in a WeakMap, it may surprise the user that that entry may be collected out from under them. + +SYG: Right? If the mental model is just a composite key of two numbers, my intuition there is that whether we do canonicalization or the current proposal, there is potential for confusion. I am not sure how successfully we can communicate that it’s an object that looks like any other object. That goes into a small point I see somewhere in the queue about having the `new` keyword, if I am leaning toward that. The part of the canonicalization feedback that was bad for Records & Tuples on the V8 embedder side is that if you have a different type, it’s not pay-as-you-go… if it’s a canonicalized object, it’s, complexity-wise, pay-as-you-go. It’s just an object. And I want to dig into that some time later, on the queue, if we have time. + +SYG: And the other issue that gave us pause was the use cases, which were a lot broader than just composite keys. And when I chatted with people about composite keys as a use case, it seemed to be about relatively few objects and shallow object graphs, which are very different performance-wise from many objects and arbitrarily complex object graphs. I don’t think people are keying things on arbitrarily complex object graphs. So if that is an assumption, a use case we’re designing for, it seems a lot less problematic to bottleneck everything, all the expensive work, in the constructor. Now, if you think it is still worthwhile solving for the many-objects, arbitrarily-complex-object-graph cases, then I have my doubts that this is the composite key proposal, but there’s a longer conversation we can pick up later. + +DE: So I guess I have two questions for SYG. One question is: do you see this proposal as pay-as-you-go? Because it’s only hit in kind of this extra branch to make a comparison. Or is that extra branch considered more expensive? And also, wondering, you know, how confident are you that people won’t want this to be cheap, the allocation of this? Is there a hope derived from the use case? + +SYG: We don’t see the first case as pay-as-you-go, because of the extra branch—the combination of the extra branch across multiple data structures. And this becomes a check that would be common in all data structures we design that check equality. We would need a protector here or something like that. For the second question— + +DE: In the case you never use the feature? + +SYG: You never— + +DE: Yes. + +SYG: Yeah. The—yeah. Okay. + +SYG: So for the second question, yes. It is—I have no idea. I don’t want to say I am confident or not confident; I have no idea. If we believe people are reaching for this for composite keys, there is less concern about the key creation being cheap as well as the lookups being cheap. + +ACE: Part of me doesn’t want to think of these things as only composite keys. That certainly is the primary reason for adding this to the language.
But what I wouldn’t love happening is if you have to completely separate your data from the composite key, because otherwise what ends up happening is every object has, like, a 'getComposite' or 'toComposite', which is annoying if the thing is a person, and the person has an inner company field, and the company is a composite. So it gets deep. It’s easier in that case to use the composites as your data, so you don’t have to keep converting things to composites when you do want to use them as a map key. So I do want the proposal to focus on the use case of keys for maps, and I would like them to have the potential for where the language could go over the next, you know, 10, 20 years. Maybe these things do become something that forms more of the way you actually model the application. I could see application development going there eventually, but that would then necessitate the creation being cheap, because you wouldn’t necessarily pay off the cost by saving on comparisons. So that’s where I am in thinking about it. Yeah. I really—I wasn’t expecting your comment. I will need some time to think about it. + +MM [on queue]: I am uncomfortable with canonicalization. + +KG: I support Stage 1. This gives me everything I wanted from Records & Tuples; this is the use case I had for Records & Tuples. I am worried that this is not the use case for, like, half of the people who wanted Records & Tuples. There’s a lot of people wanting Records & Tuples for reasons I didn’t fully understand, wanting immutable objects, but not liking `Object.freeze` or something like that. And I confess that I just never really figured out what people were excited about there. I am hesitant to completely dismiss all that. But like I said, it gives me everything I personally want. So I am supportive of this. + +ACE: Thanks, KG. + +CZW: Yeah. I want to echo that this also matches the OpenTelemetry use case: it not only provides equality comparison, but also allows iterating the keys inside a map without keeping a reverse map to the original key object. So this is really helpful to us as well. And I support this for Stage 1. + +SYG: We touched on this a little bit, but I want to reiterate here what KG was saying, given that so much of what we heard on Records & Tuples came from different camps: people who were really, really excited about immutable functional data, and people who were not clear on the performance implications, which is where a lot of the implementers were pushing back. I would be very uncomfortable if this API were designed to be flexible enough that people just use it for immutable functional data structures and then end up finding it’s not a good fit. So, thoughts on taking the use-case lane that this API says it’s going for and then sticking to it? + +ACE: Okay. Yeah. + +SYG: Your thoughts on that. You said both— + +ACE: Yeah. Like, the core thing I want is this kind of new capability. So I think the initial API can make that very clear: when you are constructing these things, it’s to create a composite key. And we can make sure—for example, in this proposal I am not, during Stage 1, going to sneak in “let’s add some syntax for this”, because when you have nice syntax, we are saying as a committee: use these as your immutable data. While the API is more verbose, maybe we should make it `Composite.create` to make it even more verbose. It’s not going to become the default data structure.
But I think if we want to explore adding more immutable data structures to the language in the future, these seem like the perfect base for that. I want the door to be open. It would feel wrong if, in the future, we added immutable data structures and they weren’t composites. As per my previous presentations, composite keys need to be immutable; so if you then add immutable data structures, they might as well work as keys. So I hear what you are saying. And it’s a difficult part of the language design, and small changes in this could impact where they are used. Also, it impacts where we might want to focus on how they work from a performance perspective. I want to discuss this more with you, but maybe not when we only have 10 minutes left. I definitely hear what you are saying. + +DE: Is the answer to your concern about this being a poor fit, SYG, pursuing the canonicalization alternative and making sure that is workable? Or are there other particular concerns you have if this gets overused? + +SYG: I think the canonicalization thing is a possible solution to the deeper performance tradeoff concerns. I can see just very different implementation strategies for dealing with the shallow and few—like, if you believe composites are shallow and composites are few, that’s very different than if you expect most of them are deep and have many composite pieces. + +SYG: I am not convinced that— + +DE: That’s the pattern I would expect. + +SYG: I would then just not use composites for the other cases, is my preference. + +DE: Right. But this—we are talking about the application code, not your code. I am not sure what we could use to prevent this. Do you think we should be making this not transparent, like you can’t access what’s in it? + +SYG: That’s one such strategy, where the cost is upfront during creation. Therefore, it favors one kind of pattern. We want that pattern to be fast and to be the happy path. That is— + +DE: My question was, are there other things besides canonicalization that come to mind for you? + +SYG: Not at this point. No. + +DE: Okay. + +ACE: Yeah. I see there’s a reply. One thing I was imagining engines would possibly do is calculate the hash value of these things. So the equality, in the cases where you don’t have a hash collision, is obviously still not as cheap as pointer equality, but it reduces the number of times you are falling into that kind of deeper comparison case. Yeah. JSC has a reply. + +JSC: It’s a reply to KG’s topic, which popped off the queue before I could finish my reply. Does anyone else have replies to SYG’s? If not, I want to say to KG, who was wondering about the people wanting Records & Tuples for immutable data structures but not finding `Object.freeze` acceptable: I was one of the people who was eagerly waiting for Records & Tuples. I am a huge fan of efficient persistent data structures: persistent immutable data structures like you see in Scala, Clojure, and Immutable.js, which was inspired by them; all of which can quickly create new versions of data structures, with fast deep changes to inner keys, without any copying. `Object.freeze` would not address this, because creating changed versions requires deep copying. However, I accept that the engines say that adding immutable persistent data structures to the core language is not practical. So I’m also fine with being able to just use composite keys in Maps and Sets.
That’s my view, as someone who was eagerly watching the old proposal for immutable data structures. Thanks. + +MF: You asked about key sorting during the presentation. I want to give some feedback on that. I think key sorting would be important if `Composite.equals` wasn’t doing its own equality comparison; otherwise, oh no, you could tell the difference between two composites even though they are equal. But because it’s `Composite.equals` and not `Object.is`, I don’t think key sorting is important. It’s not important that the keys also sort the same. + +ACE: Yeah. I agree. + +MF: Yeah. That’s my opinion on that. + +MF: The next one is on the, like, base case of `Composite.equals`. You had it in the slides as SameValueZero; I am of the opinion that SameValue would be better here. With the comparison that we have in Maps and Sets today, well, you can’t actually tell: they do a normalization of -0 to 0. When you put in a -0, you get a 0 out, so it doesn’t matter whether they do SameValue or SameValueZero. But it would matter with composites, because when you put a -0 in, you can observe it: you get a -0 back. Which means you should use SameValue to compare them. This would also make things way easier because—if you have a map that’s already doing, like, SameValueZero, it’s very hard to get a SameValue map. I have a library that does that. I don’t know if it would be significantly harder or just impossible to do that if composites were also doing SameValueZero. I would strongly support SameValue here. + +ACE: Thank you. A decent part of Stage 1 will unfortunately be talking about -0. I thought that -0 was a thing of my past; I can see it’s a thing of my future too. MM has a reply? + +MM: Yeah. The sorting of keys is a good open-and-shut case for why canonicalization is impossible if we admit anonymous symbols as property names. There is no possible canonical sorting of anonymous symbols, and if you canonicalize and simply go with whichever one became canonical first, then that’s history-dependent and opens a global communications channel. So I think you can’t have both. + +ACE: I was imagining allowing them to have symbol keys. If we do that, sorting is off the table. + +MM: Therefore, canonicalization is off the table? + +ACE: Yes, if we want them to have non-registered symbol keys. + +DLM: So there’s a point of order. We have 8 minutes left. We have heard pretty positive things so far, and I don’t think we have heard anything that would block asking for Stage 1, so it might be a good idea to do that shortly. + +ACE: Yeah. Yes. It’s a great suggestion. If we could do that now. And then, if we have any time left, we can pick a favorite topic from the queue. + +ACE: So I am asking for Stage 1 for this new composites proposal. + +WH: I support this. + +ACE: Thanks, WH. + +[In the queue] Explicit support from JH, CDA, SpiderMonkey team/DLM, MF, CZW, NRO, MM, SYG. + +ACE: Any objections? + +[silence] + +ACE: Thanks. That’s really great! I’ve been mulling over this for some time. I am really pleased. We have 3 minutes left, so we can still keep chatting a little bit more. But I am shaking with excitement right now! + +EAO: Could you speak briefly about why Composite is not a class? + +ACE: So it was only because—no. It was initially because I was imagining—as I have gradually evolved my mind from the Records & Tuples proposal, over and around to this, I was still thinking of records and tuples. And I was thinking that there would just be this one factory, and it would kind of switch its behavior based on what you passed in.
If you passed in, like, a plain object, you would get back something like a record. If you passed in an array, you would get back something like a tuple. That wouldn’t make sense as a class, where constructing a composite could change its prototype based on the input. But there have been a bunch of conversations since, about kind of this whole space: should there be tuple-like composites for when you do literally just have a list of things and giving them names doesn’t make sense? Or do you want a prototype, so you can have methods and things? That makes me think more about whether it should actually be `new Composite`. And this loops back to SYG’s point: if you can do `new Composite`, that means you can do, like, `class Position extends Composite`. And, you know, with `Reflect.construct` and NewTarget, the prototype is now my `Position.prototype`. And some people think that’s cool. Some people also think that’s not going to be good. And I think this is going to be one of the main things to talk about. One, -0. And two, how we should drive this API to encourage the behaviors we want as a committee in the way these are used. Like, should it really be thought about as this particular use case? Or should they be used as a general data model? And `new Composite` should be a part of that conversation. + +MAH: On the previous topic, I thought there was a comment from WH on the queue, and I would like to hear it. I don’t know why it disappeared in the shuffling. Or maybe it was removed? + +DLM: No, that was my fault, sorry. + +WH: `Composite.equals` is a replacement for SameValueZero. If we switched the semantics to SameValue, it would break Map and Set semantics. + +MAH: I think the idea here would be to use SameValue to compare the composites themselves, maybe not at the top level. If there is a concern that a plain -0 should still be SameValueZero-equal to 0, um, we can keep that; but once you put a -0 inside composites, you don’t need to keep that rule. + +WH: That would confuse users. We talked about this extensively in Records & Tuples. + +ACE: I felt this topic was behind us because of the 350-comment thread on Records & Tuples, but I think I will end up doing a slide deck on this particular topic, because it sounds like there is a variety of opinions amongst the committee. + +MM: So the reason why I think it needs to be called `Composite`, not `new Composite`, is that if I saw `new Composite`, I would expect it to give me something fresh, even if the input was already a composite. Whereas if it is not a constructor, the expectation is that it acts as a coercer: if you feed it the kind of thing that it produces, it will return that thing directly, without creating a fresh wrapper. + +ACE: Yeah, with `new` we would maybe lose the ability to do that kind of optimization. + +DLM: Okay, I think we will stop the conversation there, as we are almost out of time. Congratulations, ACE. + +### Speaker's Summary of Key Points + +* The problem of working with composite data in Maps and Sets was presented. +* A proposal was presented for adding a special object type that is compared structurally when used in Maps, Sets, and some other APIs.
+ +* There was discussion on whether this helps with existing types such as Temporal, which the initial proposal does not. +* There was discussion on an alternative design which eagerly interns the objects, instead of introducing new logic into existing equality APIs. +* There was discussion on SameValueZero vs SameValue. + +### Conclusion + +* Consensus for Stage 1 was achieved. +* Discussion about canonicalization and handling of negative zero will continue as part of Stage 1. + +## Immutable ArrayBuffer for Stage 3 + +Presenter: Mark Miller (MM), Peter Hoddie (PHE) + +* [proposal](https://github.com/tc39/proposal-immutable-arraybuffer) +* [keynote slides](https://github.com/tc39/proposal-immutable-arraybuffer/blob/main/immu-arraybuffer-talks/immu-arrayBuffers-stage3.key) +* [pdf slides](https://github.com/tc39/proposal-immutable-arraybuffer/blob/main/immu-arraybuffer-talks/immu-arrayBuffers-stage3.pdf) +* [recorded presentation](TODO: link) + +MM: I would like to ask everyone’s permission to do my normal thing and record the presentation, including questions asked during the presentation. And then I will turn the recording off when we get into the explicit Q&A section. + +USA: Let’s wait maybe a few seconds. Seems like nobody has objected. + +MM: Great. Thank you. + +MM: In the last several meetings, with a little bit of effort, we have proceeded quickly through Stage 1, Stage 2, and Stage 2.7, and today I would like to ask the committee for Stage 3. + +MM: I think it is a simple enough proposal that I don’t need to recap, but if anybody wants to ask questions about the content of the proposal, or ask for clarification or whatever, please do, so that people can understand where we are. This was the checklist for Stage 3, based on the Stage 3 criteria. We’ve written test262 tests and submitted them as a PR, but we have not yet gotten reviews on that, and therefore, of course, we have not yet merged it. As for implementer feedback: we would like implementer feedback from the high-speed engines. The XS engine has done an implementation, run against the test262 tests, and all of the feedback from XS on the proposal is good. We have not yet received feedback from other implementations. + +MM: So there were two things listed as normative issues. One was to document the permanent bidirectional stability of immutable ArrayBuffer content: immutable meaning it is not just read-only, but a bilateral guarantee that not only can you not mutate it, but what you are seeing will be permanently stable. + +MM: So RGN added this text after the feedback from last time, to document the permanent bidirectional stability. The remaining thing, which we have not checked off yet, is that we purposely did not declare how some questions of order of operations were resolved, because we wanted to find out whether there were any implementation concerns. For example, sometimes a difference in order of operations which is observable, but does not matter at all to the JavaScript programmer, would permit a faster implementation, and we have not received any such feedback. Per the explanation of the purpose of the stages, we don’t have to get that feedback before 2.7, if we are willing to accept this as the normative spec until we get feedback to the contrary.
And the spec that we have adopted was a reaction to the previous feedback that we got, which was that `sliceToImmutable`, which is the only case where this arises, should just be literally as close as possible to `.slice()`, so we added this exception over here to keep these things as close as possible, including both the order of operations and whether they throw or do nothing. So basically, this is how all the engines implement `slice` right now, and I have not heard any complaints about `slice` doing anything inefficiently, so I would like the committee to approve this as the one spec going into Stage 3, still of course subject to Stage 3 implementer feedback. + +MM: This is the PR against test262, written by PHE, co-champion and part of Moddable XS. So, this is the actual formal explanation of Stage 3, the standard we are trying to meet. So, any questions, and may I have Stage 3? + +MM: Now I will stop recording. + +NRO: So, I would prefer that we wait— + +MM: Can you repeat that? + +NRO: I would prefer if, before moving to Stage 3, we would wait for the tests to be merged. And the reason I am saying this is that there are two proposals in Stage 3 that had tests pending to be merged: the decorators proposal and the using declarations proposal. In both cases implementers were confident about the coverage because the proposal was at Stage 3, and in both cases there were bugs that would have been caught by tests that were written but not yet merged. It does not need to happen during plenary but if you [INDISCERNIBLE]. It will be done automatically when it is merged, and I would prefer to wait. And I am comfortable asking this because, until a few weeks ago, there has not been much material awaiting such review. + +MM: So can I ask people involved in test262, and those that have committer status in test262: please do review it; I am eager for your feedback. Is there anybody who thinks they might actually get to do that before this plenary is over? + +MM: Okay, so once we proceed on the test262 tests to the point where they’re merged, we’d come back and ask for Stage 3. And it is a valid objection; that is why I brought it up in the presentation. + +USA: There is a question about process by JLS? + +JLS: Yeah, the question is straightforward: can we agree to advance automatically once the tests are merged? If we have consensus, and the tests are the only reason to withhold, could that advancement be automatic? + +USA: I can help answer: we can get conditional consensus on Stage 3, so that the proposal advances to Stage 3 once the tests are merged. + +MM: I would certainly like to have that conditional approval, sure. + +SYG: The significant thing is that the testing for Stage 3 should be merged; whether it is merged on the trunk or in staging, it needs to be executable. And as for conditional Stage 3 on merging the tests, I guess that’s okay if this is the only thing. In general I would like to minimize the number of conditional advances, because that just increases the likelihood of things falling through the cracks, and so we can come back, since this is not in a particular hurry. + +MM: I am happy to come back as well. I don’t think postponing it for one meeting will materially affect anything. SYG, let me ask you: has there been any exploratory implementation work on this proposal at Google? + +SYG: No, and we will not look at it until it reaches Stage 3. + +MM: I see. That is the reason why it would be nice to get to Stage 3 earlier than next meeting, but not a big deal.
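+
+For orientation, a rough sketch of the API surface under discussion (method names as used in this session; the exact order of operations and error behavior are precisely what implementer feedback is being sought on):
+
+```js
+// Assumed shapes, based on the proposal; not normative.
+const buf = new ArrayBuffer(8);
+new Uint8Array(buf).set([1, 2, 3, 4, 5, 6, 7, 8]);
+
+// Like transfer(), but the result can never change again:
+// the source is detached, and the result is immutable.
+const immu = buf.transferToImmutable();
+
+// Kept as close as possible to slice(), per the feedback noted above,
+// but producing an immutable buffer for the given range.
+const part = immu.sliceToImmutable(2, 6);
+
+new Uint8Array(immu)[0]; // reads are fine
+// There is no way to write into, resize, detach, or re-transfer `immu`.
+```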
+ +SYG: Without the tests, even if it is conditional Stage 3, we cannot see if it works. + +USA: All right, we have a reply by MLS? + +MLS: JavaScriptCore won't start looking at it until Stage 3 as well. + +DE: How complete are the tests that are out for review? I think it’s important that we have some tests merged, but are they complete enough for Stage 3? + +MM: I cannot speak to that myself. I certainly recognize the importance of the question and I just don’t know. + +RGN: I can speak to it. I am a test262 maintainer, though not putting formal approval on these tests, and I reviewed them. I think they are complete enough for Stage 3. Follow-ups are expected for addressing the corner cases in order of operations with respect to error handling, but that is common even in mature tests. There is coverage for `transferToImmutable` but not yet for `sliceToImmutable`, though that will be largely analogous. We could push for inclusion of that in this pull request, but either way it will come during Stage 3. + +DE: I don’t have an opinion on whether those are in this pull request or in a separate pull request, but before getting to Stage 3, we should complete all of those follow-up items and not have any known gaps. Just because we have existing coverage gaps overall does not mean that proposals can reach Stage 3 with known coverage gaps. + +RGN: In this case I think it is fine, because those are the very things on which implementer feedback is requested. + +DE: Right, so the tests will be helpful to get that feedback. + +MM: So, I think all of these lines are pointing to bringing it back for Stage 3 next meeting, which I am happy to do. And what RGN says does raise a question for the committee: RGN is both a test262 committer and a co-champion of the proposal, and he did not write the tests but reviewed them; PHE wrote them. Is there any problem with RGN reviewing this as an official test262 committer despite the fact that he is a champion? I don’t know— + +DE: As a non-maintainer of test262, I think that’s fine, for anyone prepared to do an intellectually honest job. RGN just volunteered points for further work, which is a great demonstration of that honesty. + +MM: Good, awesome. + +DE: Unless anyone else has opinions here? + +PFC: In addition to the point I wanted to make, I will also answer the immediate question, which is: I think it's fine for RGN to review that. Having the specific test262 reviewer not be a champion is not a standard that we have required for anything else. + +PFC: I wanted to take the opportunity to make a point about how to facilitate test262 reviews, not just for this proposal specifically but in general. We have some documentation about testing plans that my colleague IOA wrote, which we will hopefully merge soon. I recommend to all proposal champions: before you write the tests, open an issue with a testing plan, because that will help us as reviewers to get a sense of how complete the coverage is without having to dive into every corner of every proposal, because that is the thing that really takes the most time when we are reviewing. And also, once you have a testing plan with a checklist, that will make it easier to open multiple smaller pull requests rather than one large one, and that helps us because currently we have a lot of maintainers that have limited time for reviews. So if the choice is between reviewing three small PRs or 20% of a large one, I think people will naturally want to review the smaller pull requests.
Having them be small, and marking them as done in the testing plan as they get merged, helps us get around to things faster and merge them faster. + +NRO: I think RGN should be allowed to review the request; it is better that champions of a proposal review the tests, and having an approval from RGN is better than just having an approval from PFC, because RGN has more context on the proposal. + +DMM: +1 on more smaller PRs for the tests rather than one giant one. The current PR is almost 1K lines and we will miss stuff in trying to review that. + +MM: Okay, I will communicate that to PHE offline. + +USA: That was the queue, MM. Would you like to ask for consensus? + +MM: There is no consensus to ask for; I think what we settled on is that I will come back next meeting and ask for Stage 3, assuming that we get the test262 tests merged. And I will suggest to PHE that the tests be divided into smaller PRs. + +### Speaker's Summary of Key Points + +* Immutable ArrayBuffer was presented for Stage 3. test262 tests have been written and submitted as a PR, but reviews and feedback about the tests are still pending, so Stage 3 was deferred. +* There was a discussion about facilitating the test262 review process by opening an issue with testing plans first and then submitting smaller pull requests to make reviews more manageable. + +### Conclusion + +The proposal will be brought for Stage 3 at a future meeting, once tests are landed and known coverage gaps are filled. For now, it remains at Stage 2.7. + +## Upsert for Stage 2.7 + +Presenter: Daniel Minor (DLM) + +* [proposal](https://github.com/tc39/proposal-upsert) +* [slides](https://docs.google.com/presentation/d/1Mfc7jl2Rbe8K8LCJWtjNZS94tQpzgvQIBfrq2e_iRcU/) + +DLM: So it has been a little while since I have talked about upsert, and today I am presenting it for Stage 2.7. + +DLM: … you are using a map and you want to do an update, but you are not sure if there is already a value associated with your key or not. What people do today is roughly along the lines of this snippet of example code: see if the map has the key; if it is there, you are going to do one thing, and if it is not there, then you are going to insert something first. + +DLM: The proposed solution is to add two methods to Map and WeakMap. One is `getOrInsert`, which takes a key and a value and searches for the key in the map; if it is found, it returns the value associated with the key, and otherwise it inserts the value into the map and returns that. And there is a computed variant, and this one takes a callback function. We discussed this last time, where we decided that we cannot prevent the callback from modifying the map, but we will insert the value and keep the modifications that it made. + +DLM: Last time there was one outstanding issue, issue #60, and there was a brief discussion about that: it proposed `getOrSet` instead of `getOrInsert`, on the idea that an insert is something you do once while a set can be done multiple times. We resolved that issue with the decision to continue to use `getOrInsert` and `getOrInsertComputed`. + +DLM: And that was the last issue. So I would like to ask for consensus for Stage 2.7. + +MF [on queue]: +1 support for 2.7 + +DMM [on queue]: +1 support for 2.7. EOM. + +USA: Anyone oppose? + +USA: Congratulations, you have Stage 2.7. + +DLM: Thank you very much. And I would like to thank everyone that helped out, especially my Stage 2.7 reviewers. + +USA: You are pretty swift. That was less than five minutes.
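+
+A minimal sketch of the two methods as described (illustrative; semantics per the presentation):
+
+```js
+// getOrInsert(key, value): return the value for `key` if present;
+// otherwise insert `value` under `key` and return it.
+const counts = new Map();
+for (const word of ["a", "b", "a"]) {
+  counts.set(word, counts.getOrInsert(word, 0) + 1);
+}
+// counts: Map(2) { "a" => 2, "b" => 1 }
+
+// getOrInsertComputed(key, callback): the same, but the default value
+// is computed lazily, only when `key` is absent.
+const groups = new Map();
+function addToGroup(name, item) {
+  groups.getOrInsertComputed(name, () => []).push(item);
+}
+```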
+ +### Speaker's Summary of Key Points + +The last remaining item prior to Stage 2.7 was the issue about what to call the two new methods on Map and WeakMap, which has been resolved to use `getOrInsert` and `getOrInsertComputed`. Consensus was asked for and reached for Stage 2.7. + +### Conclusion + +The upsert proposal advanced to Stage 2.7. + +## Withdrawing Records & Tuples + +Presenter: Ashley Claymore (ACE) + +* [proposal](https://github.com/tc39/proposal-record-tuple) +* [slides](https://docs.google.com/presentation/d/1afxyqJthBWsOpBvmPFP-VOhT8KyVF_AQlXLj0nkY6v4/) + +ACE: All right. So, as I mentioned earlier, the Records & Tuples proposal has this new reimagining, 'Composites', which is now Stage 1. While composites are looking at a similar design space, the real core of the Records & Tuples proposal was adding these new primitives, which is fundamentally different. And this slide is a nice quick little montage of the previous times we spent talking about Records & Tuples, and there were a lot of other talks outside of plenary too. But it is clear, at least to everyone in plenary (and to a decent amount of the community out in the ecosystem too), that Records & Tuples is not going to be progressing any time soon. Adding these new primitives did not find a way to move forwards, and we have composites as a new way of looking at this problem space. So, I am proposing that we withdraw the Records & Tuples proposal. + +NRO [on queue]: RIP R&T, you'll be missed. I support withdrawing. EOM + +MM: Just taking the opportunity: I think composites are ready for Stage 2. The question is whether anything is outstanding? + +JHD: There is no spec. + +MM: Oh. + +JHD: We cannot go for Stage 2 today even if they asked for it. + +MM: Oh, I did not notice that. Okay, thank you. + +USA: Okay, I suppose that is not on the table then? + +EAO: I like the composite approach. I don’t really like “composite” as a name for it. It does not really feel like it means anything, and in my head it is hard to remember if it’s “composite” or “composable” or something similar like that. And since we have cleared out the “record” space, that could be one direction in which to go here. But I would like to note that there is also another direction available here: leaning into the use case as presented, composite keys, which could be better than “composite”, and using “key” as the term here. The decision here ought to clarify whether we are going more for “this is the thing you use as a key” or “this is the thing you use as a generic immutable thing”. Being in this middle ground and using a weird word for it is, I think, awkward, and we need to pick one of these directions. And the primary way of doing this is by bikeshedding on what the name of this thing is. + +ACE: Yes, definitely. I can't remember exactly where the name composite came from. I think it reemerged when we were in Seattle, maybe from DE. The proposal can end up with a different name; the only name I think we should not use is “Record”, because I think it just has too much precedent in TypeScript and we cannot ignore that fact about the ecosystem. So I would not be keen on the word “Record”, but I would be keen to chat about other names. And just because it is called "proposal composites", the API name does not need to be composite. + +EAO: Do you have any initial thoughts on “key” as the name here? + +ACE: I think 'key' is part of that conversation of which way we want to push this.
Do we push it so that it firmly sits as something you use as keys, or as something you use as part of your data model? If we really want to push people in the key direction, then yes, calling them keys would be the way to do that. But first we need to decide which direction we want to push it in. + +EAO: Haven’t we kind of done that by agreeing to the use cases and needs that you have presented here, which are quite explicitly about doing this as a composite key, sort of a thing? And maybe this goes a little meta, but if we go for something way more generic like records and tuples, that should be something we explicitly agree on, because that would be effectively changing the use cases that the proposal is aiming for. + +ACE: Yeah, as I said, this is still something we should discuss. I think most of the value we get at the beginning comes from a proposal that is focused on the composite key case. But I really want us to think long-term as well, so that the committee that sits around a future version of this call isn’t annoyed at the decisions that we make now. And I want to really make sure that we are thinking about some what-if scenarios. Of course we cannot predict the future, unfortunately, and we can’t over-invest in coming up with something perfect that can fit all possible futures; it's not possible. But I want us to take a moment to pause and think a little bit about it, and not get too focused on just this one case and then end up missing something that we end up regretting. I think there is a conversation to be had there. And, as I said earlier, I think it would be a shame if people have to keep converting their data into these things, and one way of avoiding that is if you can use these things more generically. But I can see why some people think that we should not go in that direction. + +USA: So yeah, congratulations, so to say, to ACE on consensus, however bittersweet this might be, and we look forward to composites. + +### Speaker's Summary of Key Points + +Following on from the Composites proposal achieving Stage 1, and the Records and Tuples proposal not managing to gain consensus for adding new primitives, it was proposed that the Records and Tuples proposal be withdrawn. + +### Conclusion + +The Records and Tuples proposal has been withdrawn. diff --git a/meetings/2025-04/april-15.md b/meetings/2025-04/april-15.md new file mode 100644 index 00000000..e4c6cab2 --- /dev/null +++ b/meetings/2025-04/april-15.md @@ -0,0 +1,974 @@ +# 107th TC39 Meeting + +Day Two—15 April 2025 + +## Attendees + +| Name | Abbreviation | Organization | +|------------------------|--------------|--------------------| +| Waldemar Horwat | WH | Invited Expert | +| Daniel Ehrenberg | DE | Bloomberg | +| Samina Husain | SHN | Ecma International | +| Josh Goldberg | JKG | Invited Expert | +| Daniel Minor | DLM | Mozilla | +| Chris de Almeida | CDA | IBM | +| Jesse Alama | JMN | Igalia | +| Michael Saboff | MLS | Apple | +| Aki Rose Braun | AKI | Ecma International | +| Dmitry Makhnev | DJM | JetBrains | +| Bradford C. Smith | BSH | Google | +| Ron Buckton | RBN | Microsoft | +| Eemeli Aro | EAO | Mozilla | +| J. S. Choi | JSC | Invited Expert | +| Istvan Sebestyen | IS | Ecma International | +| Ben Lickly | BLY | Google | +| Philip Chimento | PFC | Igalia | +| Richard Gibson | RGN | Agoric | +| Jonathan Kuperman | JKP | Bloomberg | +| Mark Miller | MM | Agoric | +| Gus Caplan | GCL | Deno Land Inc | +| Zbigniew Tenerowicz | ZBV | Consensys | +| Mikhail Barash | MBH | Univ.
of Bergen | +| Ruben Bridgewater | | Invited Expert | +| Ashley Claymore | ACE | Bloomberg | +| Luca Forstner | LFR | Sentry.io | +| Ulises Gascon | UGN | Open JS | +| Matthew Gaudet | MAG | Mozilla | +| Kevin Gibbons | KG | F5 | +| Shu-yu Guo | SYG | Google | +| Jordan Harband | JHD | HeroDevs | +| John Hax | JHX | Invited Expert | +| Stephen Hicks | | Google | +| Peter Hoddie | PHE | Moddable Inc | +| Mathieu Hofman | MAH | Agoric | +| Peter Klecha | PKA | Bloomberg | +| Tom Kopp | TKP | Zalari GmbH | +| Kris Kowal | KKL | Agoric | +| Veniamin Krol | | JetBrains | +| Rezvan Mahdavi Hezaveh | RMH | Google | +| Erik Marks | REK | Consensys | +| Chip Morningstar | CM | Consensys | +| Justin Ridgewell | JRL | Google | +| Daniel Rosenwasser | DRR | Microsoft | +| Ujjwal Sharma | USA | Igalia | +| Jacob Smith | JSH | Open JS | +| Jack Works | JWK | Sujitech | +| Chengzhong Wu | CZW | Bloomberg | +| Andreas Woess | AWO | Oracle | +| Romulo Cintra | RCA | Igalia | + +## Don't Remember Panicking Stage 1 Update + +Presenter: Mark Miller (MM) + +* [proposal](https://github.com/tc39/proposal-oom-fails-fast/tree/master) +* [slides](https://github.com/tc39/proposal-oom-fails-fast/blob/master/panic-talks/dont-remember-panicking.pdf) + +MM: So last time we brought this to the committee, it was called “must fail fast”, and it got Stage 1, and then it got blocked from advancing from there for reasons I will explain. This is a Stage 1 update. Since then, we renamed the proposal “Don’t Remember Panicking.” So I’m going to linger on this slide for a little bit, because this is the code example I’m going to use throughout the entire talk, so it’s worth all of us getting oriented in this code example. This is a simple money system. Don’t worry if you spot bugs; it’s purposely a little bit buggy, in order to illustrate some of the points. + +MM: At the top here, this is just a validity check: `Nat` takes a number, checks that this number is a natural number, i.e. non-negative, and if so returns it. Instances of class Purse each represent a holder of money, such that money can be moved from one Purse to another. This money system has exchange rate conversion built into it, so for each Purse, there’s a number of units of some currency, which is the variable field, and then there’s a unit value, which is how much each unit of that currency is worth in some fine-grained quantum unit of currency. And then we use Nat on construction to ensure that both of these are not negative. And all of the action, everything interesting, is in the deposit method, because the deposit method is there to implement transactional totality: all the effects happen or none. The effects are moving myDelta units into this purse, the destination purse, and withdrawing the worth of those units from the source purse, and we’re trying to keep the total worth approximately conserved. + +MM: So this is implementing the transactional totality using the prepare-commit pattern, which I do recommend. The prepare-commit pattern has a prepare phase that provides the “none” of the “all or none”: it’s doing all the input validation, all the precondition checking, such that all possible throws or early returns happen here and, in particular, no effects happen here. So if you throw or early-return, no damage has been done.
And this particular prepare phase checks, with this dot-sharp here, that `this` is an instance of the Purse class; that the source is an instance of the Purse class; that myDelta, the number of units we’re transferring into this purse, the destination purse, is not negative; and, with this outer Nat, that `src` would not be overdrawn. So if any of those are reasons to bail out early, we bail out early, not doing any damage. Otherwise, we go past the commit point into the fragile phase. The fragile phase is the thing that implements the “all” of the “all or none”: once you start into it, you’re performing effects, and the correctness of the system depends on all of these effects happening, on there being no bailout in the middle here after you have started to perform some effects. + +MM: Okay. So the JavaScript spec does not admit the possibility of out-of-memory or out-of-stack. But if, for example, the numbers we’re computing with here are BigInts, then even a multiply allocates; and depending on how it is implemented, even a Number multiply might allocate, and it also might need a new C stack frame. Any time there’s an allocation, there’s always the possibility that there’s no more memory to allocate, or that you’re out of budget for the total stack space or total number of stack frames. And because this can happen anywhere, and the possibility is not part of the acknowledged semantics of JavaScript, it’s just unreasonable for the programmer writing something like this to have to think to be defensive against errors like this. And if it happens, then in this case the destination purse, this purse, has been incremented, but the source purse’s units were not decremented, because of this failure. + +MM: So you could think that, well, maybe the programmer should just be defensive in general in the fragile phase by putting it into a try/catch, which makes a lot of sense, except that the programmer has no idea why this block might have failed; maybe it failed in the plus-equals here rather than the multiply. If they don’t know why it failed, then without an extraordinary amount of bookkeeping to consult, they cannot know what to do to repair the damage. We have unknown corrupted state, and to proceed with execution with unknown corrupted state is to compute forward with corrupted state and continue to do damage; and since we don’t know what’s corrupted, we don’t know what further damage will happen. So this is not a tenable situation. + +MM: So the first version of this proposal was exclusively about out-of-memory and out-of-stack policies, and what we were advocating is that when such a problem state happens, by default the agent exits immediately, the agent immediately terminates, because any further execution of JavaScript code after this point is just too dangerous. However, when we presented this, we ran into the objection, from browser makers in particular, that browsers currently do throw on out-of-memory, and they were unwilling to change that as the default policy, because there’s too much code out there that counts on being able to continue after such things happen. + +MM: And in slides coming up I’ll explain why that’s actually quite a sensible policy, especially given the view of JavaScript when the browsers arrived at that policy.
So instead of proposing, as part of the JavaScript spec, that we immediately terminate, we’re instead proposing that the decision, the policy decision, is delegated to a host hook. And for the host hook, we’re generalizing from just out-of-memory and out-of-stack to a bunch of different faults, where we provide the host with a fault type and an argument that can provide additional information per fault type. So in order to make sense of this, we need a taxonomy of fault types. Oh, wrong taxonomy. Those are earthquake faults. Let’s go to software faults. + +MM: After unrepairable corruption has happened, the most important part of our taxonomy is that it’s not possible to continue with both availability and integrity. You can’t compute forward from unrepairable corruption preserving both availability and integrity. So one possibility, one possible choice for the host, is fail-stop, which is to sacrifice availability for integrity. That’s certainly what you want for something like a transaction-total money system, where user assets are at stake. But what the browsers expressed as their desired default policy is what we would instead call best efforts, which is to sacrifice integrity for availability, to remain responsive. + +MM: And so remember that the browsers arrived at this behavior during the ECMAScript 3 days or earlier (I arrived at ECMAScript 3), when the entire language was what we are now calling sloppy mode, and even on a failed assignment, you just continue past the failed assignment silently, going on to the next instruction. And the reason for that was that the engineering goal, in the view of JavaScript and browser behavior at the time, was that the most important thing is to preserve availability of the page, to preserve that the page stays interactive, even if the price of that was to compute forward with corrupt state. + +MM: So the next part of our taxonomy is these four levels of severity of a software fault. The first level of severity is that the host detects that its internal state is corrupted, for example, an internal assert is violated. (Here I’m being a little sloppy with the word “host”: I mean “host or JavaScript engine, but not JavaScript code”, that is, the code that implements what JavaScript code sees.) I’m just guessing here that the browsers have something like an internal assert as well; I’ve only studied the internal faults for XS, so I’m guessing about the browsers. But if the internal state of the host or JavaScript engine is corrupt, then I’m assuming that we all agree that those cases do call for fail-stop. In browsers you receive the blue tab of death; for XS, which is meant primarily for devices, there’s the quite sensible policy of just rebooting the device, not computing forward with corrupted state, and they’ve found that to be a much more robust way to continue. And for XS, out-of-memory and out-of-stack are in this severity category, because the XS machine is not built to guard its own integrity against these conditions. + +MM: The browser engines, we’re guessing based on the objection previously raised, are built so that following out-of-memory and out-of-stack, the JavaScript engine and host have not lost their internal consistency. Their internal invariants still hold, but they’re now in a position where they cannot continue executing JavaScript while upholding the semantics of the JavaScript spec.
+ +MM: The error that they throw is outside of the JavaScript spec, and because it’s outside of the JavaScript spec, it’s outside of what the JavaScript programmer thought they could count on, and therefore we should assume that the JavaScript code is now continuing to compute with its state corrupted, even though the C-level state, so to speak, is uncorrupted. + +MM: So for this severity level, it makes sense for the host to decide between best efforts and fail-stop, depending on whether integrity or availability is the overriding goal. But if a host chooses best efforts as the default policy, we’re advocating that it provide some API—we’re not proposing what the concrete API for this would be—such that the JavaScript code can opt into fail-stop to protect itself, such as for banking examples like this one. And in fact, we would propose that whatever this API for opting into fail-stop is, it become standardized as part of JavaScript. + +MM: Okay. The next level of severity is that the host is fine—the host can proceed within the JavaScript spec—but something happened such that we can think of the JavaScript code itself as likely being in trouble: some symptom we know about that the host can react to, on the assumption that it indicates the JavaScript code might be in trouble. Unhandled exception and unhandled rejection are the well-known ones; XS also has metering built in, such that code can be out of time. The browsers, in order to cope with an infinite loop happening within JavaScript code, might time it out and then have some strategy for continuing execution. The next lower level of severity I will come back to; at this point, to motivate that remaining level of severity, let’s go back to our example. + +MM: So now that we’re familiar enough with this code as a whole, let’s just scroll further, to where we see both the deposit method and some code for testing it. There is an obscure bug in this code—or maybe not so obscure to some of you—but the nature of this bug is such that it can survive zillions of test cases like this. It might survive development, review, and testing because it requires a weird data coincidence in order to reach the bug. So the data coincidence might happen first in deployment at a customer site, having survived development and testing. And the data coincidence is shown over here: it happens if we’re providing BigInts rather than Numbers, and there’s a zero here in the unit value of the source Purse—which is not something that the developer might have thought to try—and a zero in the amount we’re trying to increment the destination Purse by. If there’s a zero in both of these positions, then this divide operation will throw a RangeError (an illustration follows below), and if we continue computing past that throw, then because we never noticed the possibility of this bug during development, we now, again, have corrupted state in deployment, in user code at a customer site, which is really bad.
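+
+An illustration of the kind of data coincidence being described (the surrounding code is an assumption; the operator behavior is real JavaScript semantics):
+
+```ts
+// With Number operands, 0 / 0 is just NaN and execution continues silently.
+// With BigInt operands, division by zero throws a RangeError, so the throw
+// can first appear in deployment, when a zero shows up as the divisor.
+const ratio = (myDelta: bigint, srcUnits: bigint) => myDelta / srcUnits;
+ratio(0n, 0n); // RangeError: Division by zero
+```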
+ +MM: And we might, again, try to do something about this by putting a try/catch around it, but, again, if we don’t know what the problem is, we don’t know what to do to repair the state without an extraordinary amount of extra bookkeeping. So the most we can do is log it, or try to produce some kind of diagnostic that ultimately makes it back to the developers, so that at least we know why Zalgo is laughing at that point. + +MM: Now, surprisingly, there actually is a way in JavaScript today for this code to defend its own integrity—to sacrifice availability in order to preserve integrity, at least as far as the spec is concerned. It can go into an infinite loop at that point, and as far as the spec is concerned, that blocks all further execution in this agent. Zalgo never gets to observe the corrupted state, never gets to do damage by continuing to compute with the corrupted state, and we’re safe. + +MM: But there are two problems with this. + +MM: One is that the price of the safety is very expensive, and it’s expensive for the customer, since this happens first at a customer site. The other is that if the host already has a policy of engaging in some remedial action when the loop times out—like throwing, or aborting the current turn, as we believe the browsers do—and then continuing execution, then that leaves the semantics of the JavaScript spec and continues computing anyway with corrupted state. So what we’re proposing instead is that the JavaScript code have some way to say: somebody stop me! That is this `Reflect.panic` operation, a new API that we are proposing, so that it can become a practice, when engaging in the prepare/commit pattern, to do a try/catch and then abort the agent as soon as possible (a sketch follows below). Since the assumption at the front of the fragile block is that no early exits happen there, if an early exit does happen, that’s enough of a symptom to say: okay, we violated the basic assumption of the fragile block, we don’t know how to repair the damage, just terminate immediately. + +MM: Another thing to do with `Reflect.panic` is that the JavaScript code itself can do an assert-like operation. Where current asserts might throw an error when the assert condition is violated, more severe JavaScript code might say: well, if an assert gets violated, I have no idea how to continue, so just panic at that point. + +MM: So that brings us to the remaining, next lower severity level, which is that the JavaScript code notices some corruption—through a failed JavaScript-level assert or an early exit from a fragile block—and calls `Reflect.panic`, which in turn calls the host fault handler with the fault type user-panic, and whatever argument is provided here becomes the extra data provided there. + +MM: Now, throughout this talk to this point, I’ve been repeatedly saying “abort the agent”, but there’s been a conversation on WHATWG threads in the HTML repo going back to 2017, on three different bug threads, including a new bug thread as of a few days ago, about what the actual unit of computation is that needs to be aborted when we need to abort—what we’re calling in this talk the minimal abortable unit of computation—and what these are discussing is: do we have to abort an entire agent cluster?
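+
+A sketch of that try/catch-then-panic practice (`Reflect.panic` is the proposed API and does not exist in any engine today; the declaration is here only so the sketch stands alone):
+
+```ts
+declare namespace Reflect {
+  function panic(reason?: unknown): never; // proposed, not yet real
+}
+
+function commit(effects: () => void): void {
+  try {
+    effects(); // fragile phase: effects only; no early exit is expected
+  } catch (err) {
+    // The fragile block's basic assumption was violated and the damage is
+    // unrepairable, so ask the host to stop the minimal abortable unit.
+    Reflect.panic(err);
+  }
+}
+```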
+ +MM: So a way to visualize the dilemma is this: before the introduction of SharedArrayBuffers, the agent was indeed the minimal abortable unit of computation, because objects within an agent are synchronously coupled to each other—in general, computation within an agent is synchronously coupled—so you would have to abort at least the agent. But back then, agents were only asynchronously coupled to other agents, so I could abort an agent, and other agents were in a position to react to the sudden absence of an agent they had been talking to. + +MM: With the introduction of SharedArrayBuffers, agents could be synchronously coupled to other agents. So the agent cluster, which is what those HTML threads are about, is what we’re in this talk calling the static agent cluster: all of the agents that might be synchronously coupled to each other because they might share a SharedArrayBuffer. That is certainly a sound unit for jointly aborting, but it’s not really satisfying as the minimal unit, because there can be a tremendous number of agents within the agent cluster, and sacrificing all of them to preserve consistency seems unfortunate. + +MM: So fortunately, there is a smaller unit to abort, which is what we’re calling here the dynamic agent cluster. First of all, the processing of the fault is clearly something that can be on the slow path—nobody cares how long it takes to kill a bunch of tabs. So what we can do, if the fault happens in a given agent, is ask, at that moment: what is the transitive closure of agents synchronously coupled to the agent in which the fault happened? And then kill that transitive closure of agents, the ones in that dynamic cluster—this one would be killed, and that one would not be killed, even though it’s in the same static agent cluster. + +MM: So the assumption that we’re making in providing this host hook that can choose to abort this minimal abortable unit is that it is allowed for this new host hook to not return control to JavaScript. However, the actual text of the spec says something that JavaScript engines in fact generally violate, which is that the host hook must return either with a normal completion or a throw completion. Instead, execution today, depending on what goes wrong and on the host, might core dump or produce some other kind of diagnostic snapshot. On Node.js, `process.exit`—which granted is not actually a host hook, but just a host-provided built-in; `process` is a host object—does not return control to JavaScript. And of course, the browsers’ blue tab of death. + +MM: So we want to acknowledge that by allowing, in particular, the host fault handler not to resume JavaScript execution. But there’s actually another way to not resume, other than simply death before confusion. It’s certainly the case that you can’t resume by simply allowing computation to proceed forward, but one of the reasons why the browsers do the blue tab of death preserving the URL in the URL bar is to give the user the choice to just refresh the page. The host hook could conceivably decide to refresh the page on its own, although I don’t recommend it—I think giving the choice to the user is more sensible.
But in this case, the reason why refreshing the page—whether by user choice or browser choice—makes sense is that you’re falling back to a previous consistent state. You’re immediately forgetting—abandoning—all of the corrupted data state. You’re abandoning it immediately, no further damage happens, and you’re falling back to a previous consistent state. XS, in rebooting the device, does exactly that: the previous consistent state is the state in ROM. + +MM: A friend of mine who works on an extremely reliable operating system was one day talking to somebody who does software for pacemakers, which clearly need to be extremely reliable, and asked them: okay, how do you deal with various kinds of faults that might happen in your pacemaker software? And he said: you know, the heart is an extremely fault-tolerant device. It can miss a beat or two without much worry, so we just reboot the pacemaker, and that works. And by rebooting, we restart from exactly this previous consistent state that’s in ROM. But if it took ten beats to reboot the pacemaker, that would be a very different story—at that point, you might prefer best efforts. And then Agoric does a full transactional abort: between transactions, Agoric has stored enough snapshot and log information that we can completely abandon the corrupted state—the state of the aborted transaction—restore from a previous consistent state, and continue to compute forward from there. + +MM: So this kind of amnesia before confusion is a way to preserve both availability and integrity, and this policy of providing this kind of abort of the minimal abortable unit supports hosts that want to have some kind of fallback to a previous consistent state. + +MM: That brings us to the larger question of how one builds fault-tolerant systems, and what faults fault-tolerant systems are trying to survive. There is the conventional dichotomy of building fault-tolerant systems out of Byzantine components versus out of fail-stop components, and usually this is discussed in the context of hardware faults, where the hardware faults are assumed to be non-replicated, so that you can have multiple replicas running the system where the fault occurs in only a minority of the replicas. With Byzantine faults, the component is assumed not to fail-stop—it’s assumed to continue computing forward with corrupted state, and therefore to be unpredictable. And furthermore, more generally, a Byzantine fault means that that individual piece of hardware may indeed be malicious, which is the assumption behind Byzantine fault tolerance in blockchains, and, you know, this is hard, but there are zillions of systems now that do exactly that, and that copes with the supply-chain risk where some of the hardware running the computation might indeed be malicious. + +MM: And then there’s just the more common hardware assumption that you can build the hardware to act in a fail-stop manner, and then all you need is simple redundancy and voting. That is represented by systems like the Tandem NonStop, where the replica that loses the vote just drops out. Various failover schemes are essentially in the same category.
But there’s the more interesting category of failable applications: what if there’s a bug in the code, or a fault in the interaction of the user code—the application code—and the software that the application code is running on, where all of the replicas are running the same software? In that case, any of those faults are replicated, and if they’re replicated, hardware redundancy is of no help at all. We have to engage in other coping mechanisms. + +MM: What Agoric has certainly been focused on mostly is replicated faults that are Byzantine faults—for example, library supply-chain risks, where a library that is linked into your software might itself be malicious. We can’t mask those faults, but we can reduce their severity with the principle of least authority—providing the library with no more ability to cause effects than it needs to do its job, which is often tiny compared to the status quo of what’s provided to libraries today—with object capabilities, with defensive consistency, where individual components are programmed to maintain their own consistency in the face of malicious callers, and with compartmentalization, which is what the compartments proposal was about. + +MM: But today, what we’re raising is this other category of application faults, where the faults are not malicious, and we would like to make the faults into fail-stop faults so that we can do fault containment. This is what the Erlang philosophy is about: fail-only programming, meaning that the process—which is very much like our agent—terminates immediately if something goes wrong, and it leaves it to other agents, including an agent serving as the supervisor of the agent that failed, but in general other agents interacting with the failed agent, to react to its sudden absence. And that fits with the postmortem finalization philosophy that we followed when we introduced weak references, as opposed to the Java finalize method, which is a method on the object being garbage collected. If you’re being torn down, or if you’re confused, you’re the last one the system should ask to cope with the consequences of your going away—because you’re confused, you’re the least capable of coping with your own corruption. Rather, we should just kill you immediately without consulting you, and then let other code elsewhere deal with your sudden absence. We’re doing that for garbage collection with postmortem finalization within an agent, and we’re proposing, with this panic and fault handling, to apply that to the agent as a whole, with other agents reacting. + +MM: And that brings me to the end of this Stage 1 update, and now I will take questions, after turning recording off. + +DE: Hi. Interesting presentation. So, `Reflect.panic`—you make interesting arguments for it, but overall it seems like a pretty strong capability to be giving everyone, to make it extremely easy to halt the program. Given points that you’ve raised before about how people compose programs without thinking so much about giving their components less privilege, it makes me worry that libraries in the software ecosystem could call `Reflect.panic` when that isn’t what users of those libraries expect or intend to enable. What do you think?
+ +MM: I think that’s really the crucial question, and we went back and forth over this. And in fact, if you take a look at the Agoric software today, it’s all built on the opposite assumption. The Agoric operating system gives the start compartment—the compartment that’s able to hold privileges—a capability to terminate the agent immediately, and then we go through all the trouble of threading that exactly and only to the code that should be able to exercise that capability. And, yeah, that’s a lot of trouble, but we did it and it’s okay. The thing that got us thinking the other way is that any code has the ability to go into an infinite loop anyway. So the infinite loop is the [INAUDIBLE] that says we can’t stop code from going into an infinite loop, so why are we protecting specifically the ability to stop the agent—or the dynamic agent cluster, the minimal abortable unit—while indicating a fault? And it is specifically indicating a fault: the Agoric operating system actually provides two capabilities here. There’s stop indicating that something went wrong, and there’s stop indicating a normal termination. For the second one, stop indicating a normal termination, we still treat that as a protected capability, and we intend to keep treating it as a protected capability, because exiting indicating that you’re done is not equivalent to an infinite loop. So this panic is basically just a sort of cheaper form of the infinite loop that also provides diagnostic information. + +MM: And then I’ve got a PR that I will link to from the proposal repo—a PR on the Agoric software where I’ve unthreaded the panic capability and provided panic as an ambient thing, as ambient as `Reflect.panic`. In our case, it’s importable from a module right now, but if this proposal makes it to Stage 3, then wherever that API ends up, we’ll move it there. And it was very pleasant making it ambient, because we’ve got a lot of these fragile blocks of prepare/commit patterns—we’ve got over a dozen of them, I think—and the internal assert that says: if this assert fails, I have no idea what the problem is, just kill me now. It was certainly very convenient; it got rid of a lot of lines of code, and because of the equivalence with infinite loops, it didn’t actually create any danger that we did not already have. + +DLM: I’ll just be quick since we have limited time. I also have some concerns about `Reflect.panic`. + +PFC: I think there’s going to be a popular understanding of what `Reflect.panic` is for, if it becomes available, that I think will be pretty harmful for the web. I could just see it happening that somebody ships an assert library that panics when an assertion fails—which, as you pointed out in your presentation, has real use cases; there are times when you would want to use such a thing. But, you know, the library’s going to be available, and the narrative that people will take away from it is “oh, that’s more secure.” I can just see that panicking assert library being used in all sorts of situations where it’s not necessary, which I think would really degrade the experience for users of the web. + +MM: I agree with you. That’s a real danger. But the availability of the panic, and the need to kill the minimal abortable unit if certain asserts do fail, is an actual need.
You know, the other thing that such an assert library could do today, besides the infinite loop, is, if they know enough about the host to know what causes the host to panic—how to induce the blue tab of death—then they could do that. + +MM: But I agree with you, and this is why we went back and forth over this issue. One possibility going forward—and I’ll go ahead and ask for the committee’s reaction right now—is that we separate the user panic into a separate follow-on proposal, that the rest of this proposal explicitly makes room for. And this proposal—everything else that you heard today, which is generalizing it from just out-of-memory to a fault-handling taxonomy with the severity levels, and the host policy, and the ability of JavaScript code to opt into fail-stop if the host defaults to best efforts—I think all of that holds up well in the face of this criticism. So we could keep that, with room for panic, and then have panic itself be a follow-on proposal. What does the committee think of that? + +SYG: I don’t think `Reflect.panic` is a good idea. I think the UX is going to be different from an infinite loop. With an infinite loop, your page hangs for a bit, things don’t work, and a thing pops up that says: this page is not responsive, do you want to stop it? Now, for `Reflect.panic`, the only realistic way I can imagine implementing it, if you want to kill the entire agent cluster, is an actual process crash, and there is no world where a browser vendor is going to ship an API that user code can call that makes it look like the browser’s render process crashed. There’s no world where that is going to happen. There are other ways to implement it, I suppose, but those are very invasive. We’re talking about a way to communicate to all the other running threads, like workers and stuff like that, to basically stop at the next point, right? Like, if you don’t want to kill them right then and there, you have to communicate something that says: check this interrupt and stop. I’m not sure that’s what you want anyway, because, you know, if you have SharedArrayBuffers, you have shared memory, and if you don’t kill the process right there and then in whatever thread hit the bug, you don’t know how long the other workers are going to keep running until they receive that message. Maybe that’s not what you want, and that’s a very invasive implementation technique that is not likely to fly either. I don’t see how we can ever have a `Reflect.panic`. It comes down to: we’re not going to ship something that lets user code make it look like there’s a bug in our product, right? That’s not something we’re going to ship. + +MM: Okay. Noted. Thank you. I understand the nature of the objection. + +MAH: Really quick—and I think this is a reply not just to panic, but to others. We seem to focus a lot on what the behavior of the browser would be for the main thread. But this can also be called in any other agent that is part of the cluster, for example a worker, where the main thread could survive and act as the supervisor, in which case the application itself can still programmatically handle this. + +SYG: Sorry, was that a question or a comment? + +MAH: Yeah, the question is: is there a world where `Reflect.panic`, for example, would be acceptable in an agent that is not the main thread? + +SYG: I have no idea what the question is. I’m sorry.
+ +MAH: Is it reasonable to imagine that `Reflect.panic` would be—like— + +SYG: It doesn’t kill the agent cluster, it only kills the agent? + +MAH: `Reflect.panic` would only kill the dynamic agent cluster—the agents that are actually sharing SharedArrayBuffers—so it’s entirely possible that the main thread would not be affected and only workers would be affected, if they’re not sharing a SharedArrayBuffer? + +SYG: I see. Yeah, it sounds possible on paper. It still seems highly unlikely to be implemented. + +DE: Yeah, you were suggesting we have a host hook, and we’ve also discussed how the behavior from browsers is kind of complicated and varying. So how would you want that host hook to be defined in HTML? + +MM: So let’s start with this: we just take what browsers are doing right now, which violates the JavaScript spec, and instead, with this proposal, explain what browsers are doing right now as a host policy expressed by the behavior of the host hook. As we all understand, there doesn’t have to be a piece of software which is the host hook; the host hook is an explanatory device for dividing responsibility between JavaScript and the host. Essentially, different hosts can express different policies—different hosts in fact have different policies with regard to fault handling. Let’s bring the possibility of those different policies into the language by attributing them to the behavior of the host hook. + +MM: Was my answer clear? + +DE: No, because I don’t think there’s, like, a common enough or well-defined enough behavior. Like, I still don’t know what you would want to actually write. I guess you described a general approach. + +SYG: Yeah. Let me interrupt you there, Mark. We don’t have interop among the browsers for what happens for what kinds of out-of-memory. + +MM: Ah. + +SYG: There is no universal behavior, and I think it is inaccurate to say that we violate the spec, because this is extra-spec. It is just not specified behavior. + +MM: Well, JavaScript code continues outside of the semantics of JavaScript that the spec promised the JavaScript programmer they could count on. + +DE: Well, sure. So the spec doesn’t say that there are any resource limitations, and by not being an infinite, unbounded machine, it’s violating the spec. Is that what you mean, Mark? + +MM: That’s what I mean. The previous time I brought this to the committee, my thought was, you know, mostly there are two kinds of languages in the world: languages that cannot be implemented correctly, and languages in which it’s impossible to write a correct program. JavaScript is a language that cannot be implemented correctly except on an infinite-memory machine. Java, because of VirtualMachineError, is one that can be correctly implemented, but in which it’s impossible to write a correct program, because a VirtualMachineError might happen at any time. + +MM: So for those particular hosts—and maybe I am misusing the term host, since it’s not universal across browsers—each browser expresses its own way of coping with out-of-memory. We attribute that to the browser’s behavior, and make it something that is acknowledged by the language as being according to the host’s choice. Just make it explicit, so the JavaScript programmer knows: if out-of-memory happens, we ask the host what to do. That doesn’t seem like a big ask to me. Let’s go on with the queue. + +DLM: Sure. I will be quick.
It’s not clear to me, even with SharedArrayBuffer, that computation is guaranteed to continue on another worker in a corrupted state, just from a scheduling point of view. And—I don’t expect an answer—but it sounds like this is a building block for transactions, so why not just consider bringing a transactions proposal? That’s it for me. Thanks. + +MM: You’re exactly right. This is a low-level proposal that facilitates JavaScript code creating transactions and such things at a higher level, and there are many possible transactional semantics. If we did bring transactions directly to the committee—I mean, the kinds of things you need to support genuine transactions, including falling back to a previous consistent state, I just don’t see engines being willing to implement that in general, or hosts being willing to provide that in general. So I don’t see that that would be better able to advance. This is a much lower-level mechanism that enables a much wider variety of coping strategies. + +DLM: Yeah. That’s fair. I agree, transactions seem problematic in JavaScript’s future. But I wanted to see if you had considered that, given there are some concerns about this—especially with `Reflect.panic`. + +MM: Altogether, my sense is that this proposal without the user-level panic is still plausible, and that the user-level panic could be a follow-up proposal that, because of these objections, might not advance. Okay. Let’s go on. + +[out of timebox] + +### Summary + +See topic continuation for summary. + +### Conclusion + +No conclusion; we’ll discuss further in a continuation topic, including a temperature check on the viability of this proposal without the panic API. + +## Enums for Stage 1 + +Presenter: Ron Buckton (RBN) + +* [proposal](https://github.com/rbuckton/proposal-enum) +* [slides](https://1drv.ms/p/c/934f1675ed4c1638/EYypvengQohMlG52w1qseW8BCwCkSG0Y-2ip8Zq7pxoOFw?e=Aklyqu) + +RBN: Today I want to discuss enum declarations. I am Ron Buckton; I work on the TypeScript team. Enum declarations are essentially enumerated types: they provide a finite domain of constant values that are often used to indicate choices, discriminants, and bitwise flags. They are a staple of C-style languages—VB.NET, C#, Java, PHP, Rust, Python, the list goes on and on. The reasons we are discussing this are several. One, the ecosystem is rife with enumerated values. ECMAScript’s `typeof` is string-based. The DOM has `Node.nodeType`, which has its enumerated values on the Node constructor. Buffer encodings are string-based—essentially a string-based enumerated type. And Node.js has constants that are enumerated-type- or value-like functionality, but there’s no grouping. For users, there is really no standardized mechanism to define enumerated types, ones that can be used reliably by static type systems. We have talked about ObjectLiterals, but there’s a reason why that’s not really the best choice for this; I will go into that in a moment. + +RBN: Another reason to bring this up: Node.js shipped a feature that allows for type stripping in TS files, to allow both user code and third-party packages to potentially run TS files directly within their program. In those cases, the types are stripped off. However, enums are a TypeScript feature that is not erasable type functionality—they have runtime behavior.
So if Node.js developers wanted to use enums, they were forced to use something else that doesn’t work as well with TypeScript, because enums are not supported by type stripping, and we had developers as well as members of the Node.js committee reach out to the team to consider bringing this proposal to ECMAScript, to have some form of TypeScript enums potentially standardized. + +RBN: So I mentioned why we might not want to use an ObjectLiteral. An enum declaration has a number of advantages over a plain old ObjectLiteral. The goal is to have a closed domain by default: enum members would be non-configurable and non-writable, and the enum declaration would be non-extensible and have a null prototype. The null prototype is to avoid collisions, and non-extensibility is to keep somebody from making changes to the enum later, which also opens the door to runtime optimization. Another advantage of an enum declaration is that it is restricted to a specific domain of values, limited to a subset of primitives. Number and string are what is supported in TypeScript; we have also been considering BigInt and Boolean, as well as symbol-based values. + +RBN: One other capability of enum declarations—at least TypeScript enums—that you’re not able to get with ObjectLiterals is self-reference during the definition (see the sketch below). In an ObjectLiteral, you can’t reach out to other members of the ObjectLiteral while defining it, because it doesn’t exist yet. However, it’s fairly common within a declaration that works with numbers, like bit flags and bitmasks, to use bitwise combinators to create bitmasks by referencing prior members within the definition of that enum. + +RBN: And again, one of the other major advantages of enum declarations is that they can be specially recognized by tooling, such as a static type system like TypeScript. Not only is this something in value space—a JavaScript runtime value that can be accessed—but also a type that can be restricted in a static type system, used to discriminate in a union, provide documentation in hovers, et cetera. + +RBN: Other really interesting advantages of an actual enum declaration over an ObjectLiteral: we have ways to extend enum declarations that wouldn’t make sense for a normal ObjectLiteral. One of the big areas we have investigated is introducing algebraic data type, or ADT-style, enums: creating a structured object using a very concise syntax. These are often things like Option or Result types. They are frequently used in languages like Rust, but also in Python—it may not have ADT enums, but it uses something like Option. + +RBN: Some other areas we want to investigate might be decorators—for example, if you use ADT enums to specify data used as a wire format and translate it to something that is more usable. If you are using something like protocol buffers, or doing WASM interop, you want to say: I want these values to be stored in memory in this way, but serialized or deserialized differently. Decorators are a way that could be accomplished. I will go into more detail in later slides. + +RBN: And the potential for shared enums, which would have further restrictions so they can be used with shared-memory multithreading and shared structs. + +RBN: This is a Stage 1 proposal; however, in many cases we have some leeway in what we are considering, as far as both the syntax we’re supporting and the runtime semantics.
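+
+A self-referencing bit-flags enum of the kind RBN describes—this much is valid TypeScript today, and is not expressible with a plain ObjectLiteral:
+
+```ts
+enum FileAccess {
+  None = 0,
+  Read = 1 << 0,
+  Write = 1 << 1,
+  // Later members may reference prior members of the same enum:
+  ReadWrite = FileAccess.Read | FileAccess.Write,
+}
+```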
+ +RBN: Since one of the goals is to allow TypeScript developers to have some form of enum in native JavaScript code, there are some restrictions on what we’re looking for as far as syntax. In many cases, there are behaviors of enum that TypeScript looked at and said are not desirable, that we might be able to change, or that we might be able to build on top of the same functionality with a more restricted or MVP approach. So the syntax we are proposing is: an enum has an identifier as its name. Enum members have a name—an identifier or a StringLiteral—and an initializer. StringLiterals are not used as much as identifier names, but they are used; this is something we don’t have a strong preference on, and we may consider dropping the support because of some complications that arise when it comes to self-references. + +RBN: I will say, a brief GitHub search shows, among public projects on GitHub, 250,000 cases of enum declarations across numerous projects. It’s a popular and heavily used feature. And that’s why, again, the goal is for the syntax to be compatible with TypeScript, so we can co-evolve the syntax, as long as we avoid conflicts. One area that we are considering not supporting, because of discussions I have had with various committee members over the years, has been TypeScript’s default auto-numbering; we might have more specific or opt-in mechanisms for that. We will go into that more later as well. + +RBN: As far as the proposed runtime semantics: currently we’re looking at an enum declaration producing an ordinary object with a null prototype. That’s not what we do in TypeScript today, because in general in TypeScript we tend to lean on the type system to tell you when you’re doing something wrong—for example, when you access `valueOf`, which is an inherited property from Object and not a member of the enum domain, we don’t let you use that. But JavaScript doesn’t have a type system, so it’s likely more reliable to introduce some of these additional restrictions and semantics, to avoid having to depend on a type system for that kind of behavior. + +RBN: Another thing to consider is that enum declarations would have a `Symbol.iterator` method that yields the name/value pairs of the enum members (a sketch follows below). The reason why: some of the directions we are considering for ADT enums, as a potential future capability, would have the potential to introduce new static members on an enum that aren’t necessarily part of the enum domain. We don’t believe `Object.entries` would be reliable long-term for something like that, and it’d be better to have a more specific capability for yielding key/value pairs like this. This is a feature that is present in Python enums: you can loop over the members of a Python enum. + +RBN: Enum members are properties of the object for the enum declaration, and they are configurable: false and writable: false. This isn’t a behavior that TypeScript implements, but for an actual native runtime enum, it would be more reliable to make sure the members are fixed and unchangeable. + +RBN: Next, enum members have identifier or StringLiteral names. Again, we are considering the potential for removing the ability to have StringLiteral names. There are very few cases where that occurs in practice—it’s likely only where enums have values whose names don’t really work as identifiers; we only see things like dashes in the StringLiteral names from the GitHub searches we have done.
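+
+A sketch of that proposed iteration behavior (proposed semantics only—enums are not iterable in TypeScript or JavaScript today):
+
+```ts
+enum Color { Red = 0, Green = 1, Blue = 2 }
+
+// Proposed: the enum object itself would be iterable via Symbol.iterator.
+for (const [name, value] of Color) {
+  console.log(name, value); // "Red" 0, then "Green" 1, then "Blue" 2
+}
+```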
+ +RBN: Generally we want to avoid things like NumericLiteral names or computed property names. TypeScript currently emits a reverse mapping for numeric enums that, while problematic in and of itself, means that having NumericLiteral names increases the potential for collisions with those reverse mappings, which is why we don’t support them. You could still have a string name that happens to look like an integer, for example; that’s not something we run into often, but it’s been a concern. The reverse-mapping behavior is something we are rethinking due to the limitations it has; I will discuss that more in a later slide. + +RBN: As mentioned before, enum initializer values are limited to String, Number, BigInt, Boolean and Symbol. We don’t allow all JavaScript types, for various reasons. For example, we want to forbid function values: as we consider a potential future of ADT enums, which are structured, those would be enum members that are constructor functions, and we want to avoid confusion as to whether a function value in the enum domain is a constructor function for an ADT value or just some function value that doesn’t make sense there. So in general, we try to limit it to a subset of primitive types that would be allowed in those places. + +RBN: The other interesting semantics would be that enum initializers may refer to the enum by name, or to prior enum members. The most important case is bitmasks, but also, if I need to alias an enum member to a different name so that I can do refactoring without breaking old code, it’s useful to reference those values. Referencing the enum declaration itself is useful for enum members that can’t be referenced bare because their name is not an identifier or is a reserved word. If you create an enum member named `default`, whatever its value is, a later member cannot reference it bare as `default`, because `default` is a reserved word. There are some cases where we would need to reference the enum declaration itself, which is why we would want to support that. + +RBN: The desugaring is not final. We are considering something as simple as this desugaring—the ObjectLiteral approach, where you just define enum members A and B as 1 and 2 and freeze the result. That’s one possibility, but then you cannot do something like enum member C, which references those in a bitmask. + +RBN: So instead, what we do is define these one at a time (a sketch follows below). That’s helpful when you look to a future where we support decorators and need to handle each member’s evaluation independently. + +RBN: One other way to consider doing this desugaring would be to predefine all of the properties with a value of undefined, but still configurable: true, and after evaluating each enum value, assign it, and at the end mark them non-configurable—or just freeze the object, essentially. This roughly emulates some of the behavior we want for the structs proposal, where the shape is fixed. The early fixed layout would allow for runtime optimization in engines—the kinds of things we are looking at in the structs proposal, which we might want to leverage here to avoid costly lookups. We know these objects are unchangeable, that the members themselves can’t change; therefore it’s possible for the engine to do inlining. We are not depending on that behavior, but some of these advantages we are looking at with the structs proposal come from having a fixed layout.
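+
+A sketch of that one-member-at-a-time desugaring (illustrative only; as RBN says, the desugaring is not final), for `enum E { A = 1, B = 2, C = E.A | E.B }`:
+
+```ts
+const E = Object.create(null); // null prototype: avoids inherited names
+Object.defineProperty(E, 'A', { value: 1, enumerable: true });
+Object.defineProperty(E, 'B', { value: 2, enumerable: true });
+// Members may reference the enum by name, and thus prior members:
+Object.defineProperty(E, 'C', { value: E.A | E.B, enumerable: true });
+// defineProperty defaults already make each member non-writable and
+// non-configurable; freezing adds non-extensibility.
+Object.freeze(E);
+```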
+ +RBN: Some other considerations are not currently in the proposal, but we’re willing to consider them and what the value might be. TypeScript doesn’t support enum expressions, and it’s fairly common in JavaScript for a declaration form to have an expression form. In something like the structs proposal—specifically for shared structs—it’s not possible to have an expression form, with the type of correlation mechanisms we are considering. + +RBN: And since TypeScript doesn’t support this, we are not that strongly motivated to add support for it. Enums are generally one-time operations, defining constants that are application-wide, or at the very least frequently used within a single file. Therefore, enum expressions aren’t that important, and if you do need them, you could still write an enum declaration and use its enum object as the expression. But if there’s sufficient motivation, we could consider rolling that into the proposal. + +RBN: TypeScript does support `export` for enums; it doesn’t support `export default`. We would consider adding support for `export default` like there is for class today. + +RBN: Shared structs: primitives like number and string can be passed to shared structs. However, we have had discussions in the past about what the default behavior should be for enum members that don’t have initializers—what should the value be? TypeScript’s current support for enums is numeric enums, with auto-incrementing of the enum values as the simplest approach to uniqueness. We had discussions about whether to use Symbol. Symbol is not reliable when working with shared structs, as evaluating the declaration twice—once in the main thread and once in a worker thread—would result in different values for those enum members when the Symbol is evaluated. So you would have to use things like `Symbol.for` or some such mechanism. We generally discourage using symbols as enum values, but we are not opposed to having symbol values themselves. + +RBN: There are some differences in the proposal from what we have today in TypeScript. These differences are accepted—we’ve been discussing them with the team—and we’re even willing to consider further differences, and eventually adapt TypeScript to support that, making changes and deprecating certain functionality if necessary. But one thing that is very heavily used in TypeScript is auto-initializers. In TypeScript, if you write enum members that don’t have an initializer, we choose a number, and those values are auto-incremented (see the sketch below). We do this because that’s generally the practice in every language that does enums. With few exceptions—C# enums produce a value that does something like this auto-numbering capability, even though the enum type information is stored along with the value, at least when you box an enum value, so it’s more complex than just numbers—it’s generally the practice that numbers are used in these cases. And it’s common for users not to want to write values for the initializers, because those values often aren’t of consequence. When you are writing high-performance code, like a compiler, as in TypeScript, having numeric values is extremely useful for writing high-performance conditions to filter out certain values.
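+
+TypeScript’s default auto-numbering today, which the proposal would not carry over as a default:
+
+```ts
+enum SyntaxKind {
+  Unknown,            // 0 — counting starts at 0
+  Identifier,         // 1
+  StringLiteral,      // 2
+  NumericLiteral = 10,
+  BigIntLiteral,      // 11 — resumes from the last explicit value
+}
+```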
+ +RBN: So the approaches discussed, if we don’t have numeric auto-initializers, are using string- or symbol-based values and mapping them through a function to get the number, but that is not efficient or performant. It’s likely that a native implementation might not support auto-initializing, as some delegates expressed concern that it’s a footgun they don’t want to make easy to reach for. TypeScript would likely continue to support auto-initializers on enums written in TypeScript, and would down-level them to a native enum that has explicit initializers. + +RBN: Another TypeScript feature we are considering deprecating for the 6.0 release later this year is declaration merging. TypeScript lets you declare the same enum twice, and the new members are merged into the old declaration. This is not a desirable feature, and it’s something we are considering deprecating. We have looked at nearly the top 1,000 TypeScript projects that have sources available on GitHub, and we ran into, I think, one major case, and it was a declaration file that was the result of a bug in how the declaration was produced. So this is really not a practice that is commonly used in applications today, so it’s not really a concern. + +RBN: Another thing TypeScript has that we are considering deprecating is reverse mapping. However, reverse mapping is actually very valuable: when you map an enum member to a numeric value, you also get a mapping back from that numeric value to the enum member name (see the sketch below). This is used for debugging, diagnostics, serialization, and formatting. But it’s unreliable, because it only works for number-based enums; in other cases, the reverse mapping could produce a collision, so we don’t support it in those cases. Since it’s unreliable, we are looking at things like `Symbol.iterator` to produce the entire domain of the enum, and you can use your own functions to filter through that to compute the reverse mapping and do filtering, formatting, diagnostics, and the like. We considered—and an early version of this proposal had—a global enum object that could have used this data to provide simple mechanisms to get this information. We have removed that from the proposal to have a more MVP approach, so it’s something we can consider, but are not considering at the moment. + +RBN: Another major difference from TypeScript enums: TypeScript has `const enum`, something we added to do inlining of enum values using whole-program optimization and knowledge. We don’t intend to bring that to TC39, as it’s not really necessary: any type of optimization that could be done with `const enum` could be done by the runtimes themselves. `const enum` is more of a mechanism TypeScript uses to do this type of inlining for performance reasons, with the caveat that if you change a dependency without rebuilding, the values are not updated, because they were inlined at compile time. If engines are able to, or have the interest in, optimizing enum members in a way that can do this inlining, such a feature would not be necessary. + +RBN: And as mentioned, TypeScript doesn’t support Symbol, BigInt, or Boolean values. Symbols have been discussed offline with a number of delegates as a potential option; Boolean is something that has been discussed within the TypeScript team. BigInt is one that is useful because it has been the case that, working with bitmasks, you can run out of space in a 32-bit integer.
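+
+TypeScript’s reverse mapping today, which, as noted, exists only for number-based enums:
+
+```ts
+enum Direction { Up = 1, Down = 2 }
+Direction[Direction.Up]; // "Up" — the numeric value maps back to the name
+
+enum Mode { On = "on" }  // string-based: no reverse mapping is emitted
+```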
+ +RBN: BigInt is an option, but it has pretty poor performance because of how it’s implemented; plus, it’s a variable-length integer—you are not fixed to something like a 64-bit int. So these are areas where we’re considering potentially adding support, but we are again weighing what the specific motivations are, and whether these are things we want to bring to TypeScript or to enums. + +RBN: There are areas for future enhancement: opt-in auto-initializers; ADT enums; and how this interacts with pattern matching and decorators. + +RBN: Opt-in auto-initializers. It’s been argued, as I’ve talked about this proposal with various delegates over the years, that such behavior could be a footgun as a default. It can cause issues with package versioning: as with TypeScript enums today, if someone inlines a value for performance reasons, because they knew what the value was in version 1 of a package, and then upgrades to version 2, it might no longer match if members were introduced in the middle of the existing members. + +RBN: However, even with that footgun, this is still a highly desirable feature for users. A large number of the enums I looked at in public projects on GitHub were using this auto-initializer capability. One way we have considered bringing that capability back would be, instead of an implicit behavior, an explicit syntax: a function or an object with a built-in symbol that could produce values based on the current position and related information. + +RBN: So `enum ... of number` would auto-number with values of 1, 2, 3 and so on, and `enum ... of string` would auto-initialize each member to the name of the enum member itself. The other way we might consider doing this would be an `auto` keyword as an opt-in. It’s a fairly small opt-in—a small hurdle to get over to get back to the TypeScript-style enum approach—but in that case it’s numbers only; if you want more complexity in specifying what type of auto-initializer you get, you should consider things like `of number`, `of string`, et cetera. + +RBN: One of the reasons we don’t have this in the core proposal is that there’s been some discussion about things like the `of` clause. If it is dynamic, then it can’t really be optimized by a native implementation. If it is to be optimized by a native implementation and not dynamic, you don’t need the symbol for it, but it still runs into things like aliases and shadowing, where something could be declared with the name `number` and it’s still not necessarily statically analyzable. + +RBN: But again, a lot of these things are capabilities we might consider during the Stage 1 phase of the proposal, as we continue to discuss what features we want to see advance to Stage 2. + +RBN: Another big area of interest, especially amongst a number of JavaScript developers that I have talked to in various forums, especially Twitter, has been ADT enums: the ability to specify something like an Option type—an Option enum with a Some that carries a value and a None with no value—or a Result type, with an Ok/success that carries a value and an error that carries a reason (a hypothetical sketch follows below). This is something that came up recently in an offline discussion about a possible try expression: what would the result of that be? If we had something like ADT enums, we would want something like a Result type as the way to define those values. + +RBN: In addition, these types of behaviors would fit in nicely with extractors and pattern matching.
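+
+A hypothetical sketch of the ADT-enum direction described here (a possible follow-on, not part of the MVP, and not valid TypeScript or JavaScript today):
+
+```ts
+// Hypothetical syntax: structured members carrying payloads.
+enum Option {
+  Some(value),
+  None,
+}
+
+const found = Option.Some(42); // would construct a structured Option value
+```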
+ +RBN: For example, as in the extractors proposal slides before, you could match on the enum members and extract values using extractors. So it’s definitely an area we have been investigating alongside extractors and pattern matching for a while. + +RBN: One of the reasons ADT enums aren’t part of this proposal is that we want them to interact with structs and shared structs. There are dependencies and tie-ins to consider, and we want to pursue this as a follow-on proposal to this proposal. + +RBN: As mentioned, there are some strong tie-ins to future pattern matching, but tied to normal enums and normal enum members as well, where we might specify a `Symbol.customMatcher` as part of the enum declaration itself. That would allow us to use `is`, or whatever we might end up with as the infix operator, to ask: is this part of the domain of `Color`? It’s only 0, 1 or 2. There are some additional advancements of pattern matching to consider with that approach. + +RBN: Decorators—I don’t want to spend too much time on this. Once we get decorators in the ecosystem, beyond their use in TypeScript code or Babel, there are other avenues to consider in the future for using decorators on other declarations. For example, having decorators on enum members, with the distinction of whether this is an enum member or a field, so a decorator knows what metadata to look at—for example, to control serialization/deserialization, formatting, marshalling, etcetera, when defining the declarations. This is a feature used in C# when doing JSON serialization, as well as serializing to other wire formats; it’s fairly commonly used in those cases. But it’s not a critical-path capability we are looking for now, and it’s something we might consider once we see implementations of decorators in runtimes and have some time to see what the user uptake of that capability is. + +RBN: So in conclusion: one of the main things we are looking at here is to eventually standardize some form of TypeScript enum, so that developers using TS with type stripping could use enums; to have some of the advantages of enum syntax over something like an ObjectLiteral; and to have the flexibility going forward to introduce ADT enums. + +RBN: What we are looking for today is potential advancement to Stage 1, to investigate enumerated types: to determine which of these semantics we might want to adopt, what we want to consider, whether this still feels like a good fit for ECMAScript, what direction we need to go, and the type of changes we might need to make to TypeScript to make these things possible. + +MM: Could you go back to the slide with the desugaring? I understand you are not committed to this desugaring. In this desugaring, there are several things that I think are problematic. One is that if you swapped the line where you’re initializing B with the line initializing C—just swap the order of them—then C would be doing a property get on `E.B`, and with this desugaring, that would not produce a TDZ error; it would produce undefined. And then `E.A | undefined`, whatever that evaluates to, becomes the value that C gets initialized to. Along the same lines, if, for example, the right-hand side called a function, providing E as a value—`F(E).B`—then it would be making E available to other code while it is in an uninitialized state. These are both the same problem, which is the visibility of E before it is fully initialized.
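+
+An illustration of the ordering hazard MM raises, against the hypothetical desugaring sketched earlier:
+
+```ts
+const E = Object.create(null);
+Object.defineProperty(E, 'A', { value: 1, enumerable: true });
+// B and C swapped: C's initializer reads E.B before it exists.
+Object.defineProperty(E, 'C', { value: E.A | E.B, enumerable: true });
+Object.defineProperty(E, 'B', { value: 2, enumerable: true });
+
+E.C; // 1 — not a TDZ error: E.B was undefined, and (1 | undefined) is 1
+```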
+ +MM: Now, with all that said, could you now bring up issue #25? Which some of you have already looked at. + +RBN: I am not currently set up to bring that up, because I am sharing just the presentation. + +MM: Okay. + +RBN: But I would like to talk with you more about this issue offline. One of the main reasons we currently support, in TypeScript, referencing the enum declaration by name is so that you can reference member values whose names aren’t identifiers. We have been looking at it again—one of the examples I found in a simple code search… I was looking at a code search of enums with dashes in the names, mostly where the enum was referencing the same thing; you can’t reference those as identifiers. One thing we considered in TypeScript is deprecating support for StringLiteral names, but there are still reserved words that couldn’t be used on the right-hand side. So if you used “return” or “default”, lowercase, as your enum member name, you couldn’t reference it in an enum value. That’s why we were considering it. This is something we want to look into more as we go. + +MM: You agree that the desugaring you are showing doesn’t deal with enum member names that are not usable as identifiers, as well? + +RBN: Yes, the desugaring doesn’t handle that case, and you could call a function passing in the enum declaration. One way we could address this, at least in part, is to predeclare the shape of the enum, so it has a fixed shape even if the members themselves are not yet marked configurable: false; as you initialize them, we mark them configurable: false. So there’s potential that you get an error at declaration time, as opposed to getting an error later on when using the enum. These are things to consider. Also, we have been talking about the same thing with the structs proposal, around being able to pass a shared struct to something while it might be uninitialized, and discussing whether there’s a possibility of having read-only fields inside a shared struct. We will continue to discuss. + +MM: Because all of the issues I am raising you are acknowledging as open and to be revisited, altogether let me say: I like this proposal, I like this direction. In particular, I appreciate that you are not trying to reproduce exactly the existing TypeScript semantics—that you are willing to propose here a more principled, better-behaved semantics. And TypeScript code that stays within the semantics that works both ways—that works with this proposal and works with existing TypeScript—would also be erasable TypeScript, given that this proposal is accepted into JavaScript. And I am very excited about erasable TypeScript. Enums were the biggest hole among the valuable things that had to be removed from TypeScript, and this would address that. I will leave it there. + +SYG: So RBN, could you walk me through how you would adopt new enums, if they don’t have the exact same semantics as the TS enums of today, in the erasable mode? + +RBN: In most cases where the syntax or semantics differ from TypeScript, we are considering making changes to TypeScript. The one case where we are not is in how auto-numbering works; that is too much of a breaking change.
+
+RBN: TypeScript has this mode called erasableSyntaxOnly, designed to only allow the syntax in your TypeScript code that is also allowed with type stripping in Node.js. If we were to support native enums—because they would then be standardized and available in Node.js applications—we would instead only restrict the parts of an enum declaration that are still TypeScript-only, which would be the default auto-initialization behavior. If we introduce an opt-in auto-initialization capability, we would guide users toward it, including an error with a quick fix that lets you adopt it. We have a story going forward for how to adopt these changes: some of the other changes to runtime semantics, such as enum declaration merging, are things we were already considering deprecating for TypeScript 6.0 later this year, and we are also looking into other potential semantic changes we might want to later forbid. That gives people time to transition to new things, like moving from the old reverse-mapping approach to using `Symbol.iterator`, if we decide to advance. We have leeway there to make some of these changes. The number of people that use things like reverse mapping is relatively small, but it is an important capability in those use cases. In general, though, we are going to match syntax and semantics as much as possible and preserve auto-numbering.
+
+SYG: Two follow-up questions. One, is it then a fair characterization to say there is a constraint that if we standardize a piece of syntax that is exactly the same as a piece of TypeScript syntax, the semantics we standardize must also be exactly the same as what TS currently exposes? Like, is the auto-numbering versus non-auto-numbering semantics currently syntactically distinguished?
+
+RBN: It is—in TypeScript, by not having an initializer on the enum member. That's how it's been for a decade.
+
+SYG: Does that mean then, if we standardize—just as a hypothetical, no value judgment here—if we standardize a JS enum without auto-numbering, must that be syntactically distinguished from the default auto-numbering enum that TS has today?
+
+RBN: If you are saying that it has no auto-initialization at all, then no, because you can write a native enum with no auto-initialization in TypeScript today by putting an initializer on every member. So that is the syntactic distinction there. If the concern is that we chose auto-initialization with a different default—which was discussed several years ago, say with symbols as the default—that is something we would have to explicitly not support, since we want to maintain backward compatibility with TypeScript. There are a lot of declaration files that exist across numerous applications, and right now, if a declaration file is handwritten and uses enum, we have an assumption of what the results are. We wouldn't want to break that. Which is why we instead say: if we wanted auto-initialization behavior that doesn't match TypeScript, it would be better to do it through an opt-in approach that is syntactically distinct—let me jump to… something like having an `auto` or `of` clause to syntactically distinguish the auto-initialization behavior. That's the approach we would recommend.
+
+SYG: So then the adoption story is: if we need to make changes, we will add new syntactic ways to distinguish the changes, but that will obviously require the TypeScript libraries and apps that want to be erasable to also update their code. That's an accurate assessment, right?
+
+RBN: That would be an accurate assessment, yes. It isn't a concern when it comes to referencing an enum that someone else is publishing as part of their declarations; we assume that's a property access on an import. The only case where it differs is a `const enum`, and that doesn't make sense in plain JavaScript. But yes, if we want to do something that differs, then we would want to use a syntactic mechanism, and developers would have to adopt that mechanism to support it. Yes.
+
+SYG: Okay. Thanks.
+
+PFC: Hi. I think this is a great proposal and I would like to see it advance. Did you have any thoughts on whether enum declarations should be able to be decorated? Not the individual members, but the whole declaration?
+
+RBN: Yes, they would be. Maybe that's not clear—I don't show an example of a decorator on the declaration here, but in the second bullet point I say we would want to distinguish between kind `enum` and kind `class` when decorating the declaration itself. Yes, that's something to consider. Again, most likely, if decorators are a feature we add to enums, it would be a follow-on proposal that comes after decorators reach Stage 4, as we consider some of the other decorator proposals. And there's hesitation to advance other decorator proposals until the logjam around decorator support in runtimes is addressed, and we get some feedback on implementations in the wild and people starting to use them. So decorators likely wouldn't come to enums until a follow-on that depends on decorators existing, but we definitely want to support it. Yes.
+
+PFC: Yes. Sorry, I missed the second bullet point here. Thanks.
+
+DE: Hi. So I am wondering about the pros and cons of this feature versus a purely TypeScript-level feature, where we focus on making sure that you can have an ObjectLiteral, as it were, with its types declared. You mentioned a few advantages, such as being able to do self-references, the way it could be frozen afterwards, the way it could be a host for other possible features. And I guess there's obviously the aesthetics of sticking with what developers have found to be intuitive. So yeah, could you speak to this?
+
+RBN: Yeah. So the approach that's outlined in the pull request you mention is the `as enum`—
+
+DE: Yeah. Now it's called … (?)
+
+RBN: So for all the reasons I have listed on this slide, we were generally less than enthusiastic about ObjectLiteral enums on the TypeScript team. There are a number of limitations that make them not usable for a lot of enum cases today. It doesn't give us an avenue for advancing potential future directions like ADT enums, which is a very popular capability that I've been discussing with a number of folks. And unless you are using a static type system, it doesn't give you the ability to do self-referencing, which is extremely useful and necessary for anything that works with bit flags and bitmasks—which, if Node.js were to adopt this for specifying flags for file opens, those are all bitmasks. You'd want something that works in those cases and is easy to define. ObjectLiterals don't give you that capability.
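+For illustration, a sketch of the kind of self-referencing bit-flag enum being described (the syntax is illustrative and not settled; `FileMode` is a hypothetical name):
+
+```js
+enum FileMode {
+  Read = 1 << 0,
+  Write = 1 << 1,
+  // Self-reference to earlier members — something an ObjectLiteral
+  // cannot express within the same literal:
+  ReadWrite = Read | Write,
+}
+```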
+
+RBN: ObjectLiterals would also need additional work after the fact, like freezing the object. And it wouldn't give you the opportunity for things like inlining and runtime capabilities, where the runtime might be able to look at the object shape—if it knows this variable can't change because it's an import, and it knows this property can't change because it's frozen, you might be able to inline in native code. We are not depending on that, but it's something we want to see in the future. `const enums` have shown they can be significantly faster when you can do that inlining. And there are some performance enhancements that runtimes are looking at as we have been working on the structs proposal, and we won't get those with ObjectLiterals, I don't think.
+
+DE: So the restricted domain of values part is what I am having trouble understanding. Why is that a benefit?
+
+RBN: So I mentioned before that we may eventually want to support things like ADTs, which I think we really should investigate—I think they're an alternative to things like records and tuples, and they slot in well with extractors, pattern matching and the like, as well as other proposals that we have been considering or discussing for a little while now. Having those Option and Result types and the stronger capabilities means that if we don't limit it now, and tooling is built up that does something with enum members while the enum value could be anything, then it becomes really hard to later say: now we support ADT enums. Is this an ADT enum member because it has typeof function, or is it some other function? It makes it harder for runtime tools to make those kinds of distinctions, because there's no way to indicate them—just as, other than looking at toString, there's no way to tell the difference between a function and a class short of trying to construct it. This is simplified from earlier versions, but limiting the surface area of enums gives us flexibility for advancing in the future; the more surface area we leave open, the more we paint ourselves into a corner with other capabilities later.
+
+DE: My other question was about the transition from TypeScript's current enums to this. What we saw with the set -> define transition was that there wasn't a syntactic difference, but there was a semantic difference. It made it pretty difficult, because locally, you couldn't switch between the two; it had to be a global flag. Are we falling into the same issue here?
+
+RBN: I don't think we will be. There were two problems that came up with set versus define. One was how we did the downlevelling to be compliant—there are certain things, like trying to override a method with a field, where how that works versus how set semantics works had to deal with inheritance. The other was around producing the right error messages—how do you know it's doing the right thing? We had to have ways of knowing that you are trying to override a getter/setter, and we did introduce syntactic differences to help with that: previously, in declaration files, we would emit a get/set as a field, whereas we introduced ambient getters and setters to distinguish them and produce the right results.
+
+RBN: The difference there, also, is that we needed the flag to control emit behavior, because people had a dependency on how these fields were declared.
+
+RBN: It's still an issue today, because people are using, like, legacy decorators that have expectations about how fields are handled. You could put an access modifier on a constructor parameter and it becomes a field of the object, avoiding the boilerplate of doing the assignment and adding the field declaration. All those things ran into issues with set versus define semantics. None of these are issues with enums. People don't look at whether an enum is configurable in TypeScript, because we don't allow you to overwrite enum members when using the type system. As for the break between when we support this and when we don't: right now, if you wanted to use enums with erasableSyntaxOnly, it fails. It will continue to fail until there is a version of TypeScript that supports emitting native enums, which won't be until the proposal has advanced to Stage 2.7 or 3—in which case, you would be using a version of the compiler that knows how to emit that. In general, when referencing an enum from another file, all you care about is that it's an `identifier.propertyName` access. There are no different emit semantics for access; it's always going to be self-contained within your own code. No subclassing concerns there. So I think all of those concerns we had for set versus define are not relevant to enums. Any of those behaviors that need to change within TypeScript for runtime semantics are things we have a deprecation plan for. We release changes over iterations so folks have time to make the changes. If someone says, "I need a way to emit old enums", we could provide a flag giving them that capability and controlling what their code emits, because consumers use `name.member` property access to reach those enum members. We don't think there's going to be a concern there.
+
+DE: So I didn't quite get it. Are you saying nobody uses enum introspection from outside of the module where the enum is defined?
+
+RBN: The only times we've seen that happen were really around reverse mapping. And reverse mapping, again, is not fully reliable, because it only works for number enums and not StringLiteral enums. We are willing to change that behavior and give developers some time to address it.
+
+RBN: And that's again why we introduce things like `Symbol.iterator` as a way to give you the guaranteed set of the domain of the enum, regardless of static methods or future ADT enum capabilities. So you always have the fixed set of elements we support. And if this is something we decide to advance, we'll get that into TypeScript as early as we can so people can start transitioning.
+
+DLM: You're at time. Shu is on the queue. If you could be quick, we could do a call for consensus before we stop here—general support for Stage 1.
+
+SYG: A reminder to RBN and other folks in the committee. This is not a Stage 1 blocker, but as I have talked about before, I will certainly be assessing, from the browser point of view, any new proposal on how much of it is a pure DX thing that can be desugared, versus whether there are concrete benefits for the browser at runtime. So just keep that in mind. You have alluded to some runtime benefits, such as the fixed layout thing that can be taken advantage of. That is definitely something I am looking at closely.
+
+RBN: Fixed layout, and the other one is type stripping. Node type stripping right now—
+
+SYG: That's not a runtime benefit.
+
+RBN: It is, in that Node.js is not going to do a downlevelling of enums. So this is a feature that cannot come to Node.js type stripping support unless there is a runtime capability. Even if it is a desugaring–
+
+SYG: We can take this offline.
+
+RBN: I would like to ask the committee for Stage 1.
+
+DLM: Support from MM, PFC, JHX, CDA, NRO.
+
+DLM: Okay. I think that's it then. Thank you very much. And yes, congratulations on Stage 1, RBN.
+
+RBN: If anyone has any other feedback, add it to the enum proposal. I will have that migrated over shortly. Thank you.
+
+### Summary
+
+Proposed adoption of enums into ECMAScript, roughly based on TypeScript's `enum` declaration. Like TypeScript's `enum`, a native `enum` would support enum members with initializers limited to a subset of primitive types. Unlike a TypeScript `enum`, a native `enum` would not support auto-numbering by default. So long as backwards compatibility is not affected with respect to auto-numbering, TypeScript has expressed a willingness to adopt a number of semantic changes to align with native support. Some concerns were raised regarding some of the proposed self-referencing behavior, which will be further explored during Stage 1.
+
+### Conclusion
+
+Advanced to Stage 1
+
+## `Object.propertyCount` for stage 1 or 2
+
+Presenter: Ruben Bridgewater (RBR)
+
+* [proposal](https://github.com/ljharb/object-property-count)
+* [slides](https://github.com/tc39/agendas/blob/main/2025/2025.04%20-%20Object.propertyCount%20slides.pdf)
+
+JHD: Hi, everyone. RBR just became an Invited Expert. He and I are co-championing this proposal. The problem `Object.propertyCount` is solving, which RBR is going to talk about, is something I run into frequently, so I was very excited to walk through this when he approached me with the idea. RBR is a member of the [Node.js Technical Steering Committee](https://github.com/nodejs/TSC) and a core collaborator. I will hand it over to him to present better than I would have been able to. Go for it.
+
+RBR: Thank you very much for having me here. It's the first time for me to be on the call, so it's very nice that I am able to present. I am pretty certain every one of you has heard multiple times that JavaScript is a slow language. Thanks to JITs, this is mostly no longer true in most situations. One thing that has bothered me, however, is that the language doesn't provide any way to implement a lot of algorithms in a very performant way, and one of those relates to counting the properties of an object in different ways. It's a very common JavaScript performance bottleneck I have run into.
+
+RBR: The motivation is pretty much that any library or framework you look at is going to use at least `Object.keys(obj).length` directly. So what we are doing is allocating an array for all the different properties, even if the input is an array, for example—and yes, arrays are also passed to `Object.keys()`, for multiple reasons: an array can also contain additional string keys that are not indices, and you want to know whether they exist or not, so you have to do this.
+
+RBR: So that is something very frequently called. We allocate that array and then need the garbage collector to clean it up, just to get the number of properties. Something similar is done with `Object.getOwnPropertySymbols` and `Object.getOwnPropertyNames`; especially with the symbols, most of the algorithms I looked at are filtering out non-enumerable ones. So that is something very frequently happening.
+
+RBR: And it could be a fast path, depending on the data structure these implementations are using, to check whether there are any non-enumerable symbols or enumerable ones. So generally, it's mostly used as a fast path for things. Now let's think about where the performance cost actually lies. What do we actually have to do when calling such an API, or when we do something like `Object.keys`? Performance measurement is hard because we have a just-in-time compiler and a garbage collector; we have C++-to-JavaScript boundaries to cross—or maybe it's not C++ but cross-platform assembler. All of these different aspects, in different runtimes, can have a huge impact on the actual performance.
+
+RBR: Still, there are a couple of things we can determine for each part in each runtime. We have an initial API call cost, which is mostly CPU time; we would not be able to overcome that cost with a new API—there is still going to be the new API's call cost. One interesting thing: `Object.getOwnPropertySymbols`, for example, is currently implemented in V8, if I am not mistaken, in C++ and not in cross-platform assembler. What happens there is that the initial call is already very expensive in itself. It would definitely be possible to optimize that further. That is a reason for me to propose a single API, which I will come back to later, instead of multiple ones—because then we overcome these implementation difficulties once. Then there's the cost of traversing the keys, which is again mostly CPU bound. That could theoretically be improved if the compiler, for example, tracks how many keys have been added since creation and then just returns that number instead of actually iterating over all keys. That is a theoretical optimization inside the compiler; there are other things, like proxy methods and so on, that it would have to be checked against, and I am curious about discussions around this point with implementers for potential ways of dealing with it. What we can be pretty certain is possible to optimize away is the cost of allocating the array, which is both CPU and memory bound, and which is completely wasted afterwards—we don't need the array, and the garbage collector won't be necessary for it anymore. Depending on the concrete implementation and the runtime, there might also be a cost involving, for example, index keys that are internally represented as just a length: the engine optimizes something internally as "I have index keys from 0 to 100", so there is no concrete key anymore, and each time it has to actually create the string for that specific number. So that's an additional cost, depending on implementation details.
+
+RBR: Yeah—effective performance. I thought about whether to show numbers, and I decided against it, because this is so dependent on what the algorithm in use really looks like, and also on the concrete implementation in the runtime. From what I have seen, this starts at two-digit percentages on average and can become very, very costly, depending on the algorithm and the object or array being processed.
+
+RBR: What use cases do we have? Definitely something like input validation: I want to know whether an object has a specific number of keys before I even continue looking into it.
+
+RBR: This is a common need for a lot of APIs—let's say, on the backend side, for HTTP calls: if I want to know whether the mandatory number of properties is there, I don't have to check each one; I can bail out immediately and be done, which is great. In general, guarding against unexpected input in many APIs is something we could have. Object comparison is a very, very frequent case; a lot of different algorithms do object comparison, and we always have to do the comparison from both sides. From one side we still have to get the keys out, but from the other side that is actually not necessary—you just have to compare the counts, and that can be optimized.
+
+RBR: Sparse array detection is something I would definitely like to be able to do in the language, and that is one option I am going to propose, because I want to be able to count index keys, non-index string keys, or symbol keys and compare against a length. Right now, most APIs with these use cases just don't do that, because it is very costly to determine; or they accept that they might have a performance overhead for sparse arrays by iterating over all the holes; or they just return something like undefined for the holes. It really depends on the concrete implementation and what guarantees they want to give for that API.
+
+RBR: It's also good for detecting extra properties on array-like objects—for example, an array could carry additional properties, and now it's easy to determine whether those exist or not. Telemetry data could be another case, because again you want to gather it fast; also testing utilities, and in general a lot of fast paths are where it would be used.
+
+RBR: `propertyCount`—the name was deliberately kept relatively open, and it does not contain "own", for example, for a reason. The reason is that, theoretically, it would be possible to add another option at a later point to cover inherited properties. I don't believe that's a common use case, but it at least keeps the door open. Otherwise, it is pretty clear about what it is doing: it's counting the properties on any kind of object. We have the target we apply it to; in case that is not an object, a TypeError would be thrown. Something similar applies to the optional options object. If that is passed, it is possible to differentiate between a couple of different things. First of all, the key types. I know of three different key types: there are index keys—which are different for arrays and typed arrays, so that's a specialty we would still have to look into—and also non-index string keys and symbol keys. V8, for example, already differentiates exactly these three types of keys internally; at least to my knowledge—I worked on this part a few years back—it hasn't changed since then. The default should align with `Object.keys`, so it would be a combination of index and non-index string keys, to reflect the most common use case without providing any options. Then there's an additional option to check for enumerable properties, with three different values: `true`, `false` and `'all'`. From what I have seen in the wild, `true` is common, `'all'` is also used sometimes, and `false` I didn't find—which is an interesting aspect, and something we might look into. The default is `true`.
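+For illustration, a rough usage sketch of the API shape as described (the option names and string values here are hypothetical, not settled):
+
+```js
+const obj = { a: 1, b: 2, [Symbol('s')]: 3 };
+
+// Default: count enumerable index and non-index string keys —
+// what Object.keys(obj).length gives, without allocating an array.
+Object.propertyCount(obj); // 2
+
+// With hypothetical options, counting symbol keys as well:
+Object.propertyCount(obj, {
+  keyTypes: ['index', 'nonIndexString', 'symbol'],
+  enumerable: 'all', // true (default) | false | 'all'
+}); // 3
+```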
+
+RBR: This is to reflect the same behavior as `Object.keys`, which was the target. If any option is invalid, a TypeError should be thrown—it doesn't matter whether it is a wrong property key or a wrong value.
+
+RBR: I also considered alternatives here. For example, instead of an array of key types, we could have a flat object with index keys, non-index string keys and symbol keys as direct properties, each a boolean—no nested array in there. Otherwise, it's identical to the former. So I'll continue.
+
+RBR: I already spoke briefly about options—did I? I am not sure. Options versus multiple methods is something I thought about as an implementation question. The API ergonomics are very important to me personally: I want anyone to be able to use the API without having to think about it for the default use case, which matches `Object.keys`. So it is very simple and straightforward to use, and an expert could then provide additional options to gain even more benefit in a couple of more complex algorithms.
+
+RBR: It is also my experience, speaking from V8 in this case, that implementing fast paths is actually harder when we have multiple APIs, because there is additional implementation overhead to provide for all of them—for something like cross-platform assembler, for example. Having a single API means far less implementation overhead, because the work is done once, and as soon as we overcome the JavaScript-to-C++ boundary-crossing difficulty, that is something positive for performance.
+
+RBR: Why only own properties? I couldn't think of any inherited-property use cases so far; I did not check for these. But if someone believes there is a need at some point, it could still be implemented in the way this proposal is shaped. That was mainly my thought, so I kept `own` out of the name for that purpose.
+
+RBR: In the repository I actually provided a lot of different examples, and Daniel (Ehrenberg?) asked what the different variations of the algorithms, or of the options, are based on. Angular uses the regular `Object.keys` one, but they also do `Object.getOwnPropertySymbols` and filter out non-enumerable ones. React has an implementation with `Object.keys`; they also use `Object.getOwnPropertyNames` as-is, and they use `getOwnPropertySymbols` both with and without filtering. As far as I have seen, those are the different use cases for React. Lodash has it too. And Next.js—all these different use cases cover all possible combinations besides enumerable `false`. Node, for example, also has a specific index check; it's similar to `Object.keys` plus filtering, but we actually use a C++ API from V8 instead, because it allows a couple of APIs to be significantly faster by skipping non-index string keys in those cases—or the other way around, actually, as well.
+
+RBR: And index plus non-index string, as before—I believe Node has the biggest variety of different options that I found, which I also know about because I mainly work on Node, and I knew most of these use cases before.
+
+RBR: Also—yeah. All these examples are taken only from production code; I tried to exclude any test code, because while tests are also important to run fast, they would not be as crucial as other situations.
+
+RBR: So this is actually all production code. Real-world examples that set enumerable to `false`—I don't know whether those exist or not; I cannot tell. The index-properties option, I am certain, would be used as soon as it exists. The reason I haven't found it so far is probably just that, when you work out how much overhead it takes to do something like that in an algorithm today, it's so expensive that people decide they are not going to support that case. So it becomes possible in the future.
+
+RBR: A couple of edge cases. Index properties are difficult to determine because array indices versus typed array indices have different limits, and I didn't check again on indices on any other kind of object—I believe those also have specific behavior, but I am not sure about this one. For example, null prototypes are something that might have to be looked into; in that case, I believe it's relatively natural to just work as with any other object, so only a property that is actually there is going to be counted.
+
+RBR: The API suggestion is meant to be backwards compatible and performant, simple to use, and flexible. Someone brought up that one could use a `Map` instead of an object. First of all, Maps normally address a different need than objects. For example, when you have just one configuration object, you normally don't want to use a Map; you want to use an object. So this is something where the fast path would benefit from this API. On top of that, for a Map you always have to compute the hash value, which is, if I am not mistaken, a little more difficult to calculate for a Map than for object keys, because object keys are only strings and symbols. So the algorithm behind it may be simpler—I don't say it is, I say it may be simpler—than for Maps, because for Maps you also have to differentiate between other key types to accept them; for objects it would all just be coerced to a string.
+
+RBR: Next steps would pretty much be getting your feedback and input, addressing the comments, and deciding whether this could become a Stage 1 or 2 proposal. All requirements for Stage 2 are, as far as I know, already addressed. So I am thankful for your input now.
+
+JHD: Point of order in the queue—just to jump in. We only have 5 minutes left in the timebox, and based on some of the discussion in Matrix, it would be great if we could first focus on queue items that might block Stage 1, and save the Stage 2 stuff for later or another time.
+
+CDA: Just a quick note on that: we do have some time available in this afternoon's session, so we can run over to hopefully get through the entire queue.
+
+USA: Yeah. So based on this suggestion, I am going to go through the queue one by one and ask if there are any Stage 1 concerns. KG, is yours—okay. So, MF, what about yours?
+
+MF: I think mine might be a Stage 1 concern. I am coming at this from the viewpoint that this proposal is entirely performance motivated; it doesn't seem to be the case that there's additional power here. With that in mind: 20 years ago, when you wanted to loop over an array—because that was the only way to enumerate; there was no for-of or forEach—we would write a for loop with a variable that increments until it reaches `array.length`. Right?
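+For reference, the two variants of the loop being described (illustrative):
+
+```js
+// The plain loop, re-reading the length each iteration:
+for (var i = 0; i < array.length; i++) { /* ... */ }
+
+// With the length cached ahead of the loop:
+for (var i = 0, n = array.length; i < n; i++) { /* ... */ }
+```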
+
+MF: It was common for a lot of developers to take that `array.length` and cache it ahead of the loop, to hint—or guarantee—to the engine that the number of iterations of the for loop is not going to change, that you are not modifying the array during the loop. That was a commonly known and used optimization. But there were a lot of JavaScript developers who weren't doing that; they were still just looping until `array.length`. And engines ended up just detecting that the array wasn't modified in the loop and optimizing it. It's no longer the case that, if you write a for loop and don't use the modern facilities, you would need to do this length caching. I think it's the same case with this. I think this is a fairly simple pattern to detect, where you don't have to actually realize these intermediate data structures and the engine can provide the result efficiently. And since it's performance motivated, I would rather put my eggs in that basket—having the engine optimize it, if it's truly common in the ecosystem and common in places where performance matters a lot. If engines would like to speak to the difficulty of that, I would love to hear about it; but I have confidence there. Unless we hear negative feedback from engines, I don't think it's worth pursuing this proposal at all.
+
+RBR: Since I'm not an engine implementor, I'm not the best person to answer that.
+
+SYG: I will just jump in. Your particular example, MF, is something that sounds like a hot loop—it's a loop, and the optimization opportunity for hot loops like that is pretty different, because the intuition there is that it will eventually hit the optimizing tier. The examples for this proposal seem to be kind of all over the place, not necessarily in hot code, and the optimization opportunities for non-hot code are a lot fewer. I don't think it's basically possible, or worth it, to ever optimize this in the non-optimizing tiers. And I am convinced by this proposal's performance case, based on the intuition that this is a popular thing that people do all over the place, where the cost adds up in aggregate. If it were only ever used in hot loops or hot code, I would agree with your argument, Michael—we could lean on the optimizing tiers to do the fancy optimization to get rid of the allocation—but that's not the sense I am getting of where this pattern is being used.
+
+RBR: One additional part to that: a couple of the algorithms are not doing `Object.keys(obj).length` directly. Sometimes they have calls in between, or they just have a different algorithm implementation, because nothing like this API exists so far. So that is definitely also something that could never be addressed by any engine, because it would just not be detected.
+
+USA: Okay. Let's move on with the queue. If there's no more after that… next is MM. MM, is your topic—
+
+MM: My topic is not a Stage 1 blocker. It is a Stage 2 blocker.
+
+USA: Okay. Shu, what about your topic later?
+
+SYG: Similar. Stage 2, not Stage 1.
+
+USA: I think this is it for Stage 1 discussions. What do you folks propose?
+
+JHD: DLM, was your point something that needed to be addressed? I think we wanted to hear you.
+
+USA: DLM's point is no longer on the queue.
+
+JHD: If it's not relevant for Stage 1, at this point—
+
+DLM: I will just jump in. We do optimize for this, as Shu pointed out, but only in hot code. So the optimization in the engine won't apply in non-hot paths. Shu provided a good answer as to why my point wasn't really relevant for this proposal.
+
+JHD: Thank you, DLM.
+
+JHD: So then, based on the time, let's ask for Stage 1, and we will defer to a later time or a future meeting to discuss the rest of the queue items and potential Stage 2.
+
+USA: All right. The champions are requesting Stage 1. We have one statement of support, by DE, for Stage 1.
+
+KG: I also support.
+
+DE [on queue]: +1 for stage 1
+
+CDA [on queue]: +1 stage 1
+
+DJM [on queue]: +1 for Stage 1
+
+USA: Sounds like congratulations. You have Stage 1.
+
+RBR: Thank you.
+
+JHD: Thank you very much.
+
+JHS [on queue]: +1 for Stage 1
+
+[returning to non-stage 1 blockers]
+
+KG: I was skeptical of the proposal, but I was convinced the basic use case comes up frequently enough that it makes sense to be in the language. I also searched our own codebases and got, you know, more than a dozen hits, so it's not something that only random amateurs are doing. It's a common thing, even among people who are familiar with the language. So I am very happy to support the basic use case. I am extremely skeptical of everything that is not the basic use case. The key types—looking at index versus non-index versus symbol—that's an explosion in complexity in the API, and it's nowhere near as motivated. I couldn't find any cases where I have needed any of those patterns, or where my codebases have needed any of those patterns. You had a couple of examples on the slides, but I think they are much, much more obscure. And in particular, some of the things you mentioned I would like to actively discourage: having a fast path for sparse arrays—I specifically don't want people doing that. Looking for non-index keys on arrays—I specifically don't want people doing that. I think code should—with very, very few exceptions—be agnostic about sparse arrays, and should not put non-integer keys on random arrays. That this is something added to support those use cases makes me want it less, not more.
+
+RBR: May I address that part directly? Actually, I am 100% aligned with what you were saying—this should never be a use case, there is no doubt about it. The motivation is actually to guard against that usage. For most APIs, you want to detect these as outliers and probably just reject them during input validation, for example.
+
+KG: Well, no. What I want is for you to not do that. I want your code to not be aware that anyone might do such a thing. If they do it, that's on them. Libraries should generally be written so that if someone passes in a sparse array, they treat it like a non-sparse array. If the library is slower then, that is the problem of the person using the API.
+
+USA: On the queue we have JHD.
+
+JHD: Yeah, that's about the expando properties on arrays. RegExp match results and such create those—the language itself does it. I am in complete agreement that good code doesn't ever have sparse arrays in it, doesn't create sparse arrays or make arrays sparse, doesn't attach named properties to arrays, and treats arrays differently than objects—lists are different than property bags. I think we are actually largely aligned on what people should do.
+
+JHD: The use cases here are for any code that actually cares about the real world instead of the idealized world we all want. That code needs to do the checks, and that often makes things slower even for the good people who are not doing the bad thing, because we still have to check for the bad thing. Making those checks faster allows code to be faster for the people doing the good thing. It's still going to be slow for the bad people, because we have to do the slow thing if they are doing the bad thing.
+
+KG: I disagree. I don't think you need to accommodate people who are doing weird things. You can just not. It's fine.
+
+SYG: (Index vs non-index) I don't know about that. Having a mode to count index versus non-index properties does not seem like a good idea to me. It is true there are optimizations within V8 that distinguish the use of index properties versus non-index properties, but the only concept we can align on, if we have a language feature, is the spec notion—and the spec notion is not how these are represented in the runtime, but "here is a string that happens to round-trip to an integer value within this range". If that's a filter you want built in, that has complexity, and I'd need to be convinced by the use cases that we ought to add it. Concretely, you may have seen there's a field called is-integer-index on our Name object; if it's set, we have parsed the name already into an integer—a size_t, it's 32 bits. The integer index concept in the spec goes up to 2^53 - 1. So that is not going to work directly. There's more code that supporting that mode will require, and I don't think that particular mode is well-motivated.
+
+RBR: I mean, in this case, it's more that currently, when we compare objects—like, Kevin, you said it's ??? which is not detected—some algorithms do check for extra properties on an array: for printing them, for object validation, for equality checks and similar, and this is where I know them from. That's why the index versus non-index one is an optimization for these cases. I do understand that it's probably not as frequently used; that's completely understandable. Theoretically, what could be done is to remove that specific option initially and consider a different mode at a later point, potentially as another proposal, if more use cases come together—or look into it further to see where more uses in the wild exist for this particular…
+
+SYG: To KG's point: because this is an optimization proposal, not a new-capability proposal, if you choose to not handle a use case, that does not mean the use case becomes impossible—it just remains slower. On net, how much of a problem is it to have that one use case remain slower? My hunch is that dropping the index versus non-index case is not going to be that big a harm to the proposal, given the relative popularity of the base use case.
+
+KG: And to be clear, I am open to being convinced that this comes up enough in cases where the performance matters. But the printing use case is not motivating to me.
+
+RBR: So printing—why does that matter? In the browser, it's not as crucial. In Node, however, it is super crucial that anything that is logged has the lowest possible CPU overhead, because a lot of users are actually logging a ton, and inspecting these objects is going to block the event loop. And that is something that must not happen in any server application. So there, it is most important not to spend that CPU time.
+
+KG: Right. I am not convinced by cases which are only relevant to Node. I think those cases matter, but they are not sufficient on their own to warrant an addition to the language.
+
+MM: Defensive code that is trying to defend against any possible input that JavaScript code could send it is a very important use case, and that kind of input validation, for our code, certainly needs to be much faster than it is. This proposal would take some things that for us are performance critical and change them from O(n) to O(1). We appreciate it. The main concern is around the distinction of index properties versus not. The distinction is crucial for our input validation and for what is performance critical. Without that, I would have much less motivation to see this move forward at all. However, not all JavaScript objects have the same notion of index. And that raises an issue with regard to how this works across proxies.
+
+RBR: I am aware of those issues. I also don't have a solution for that myself. One thing we briefly discussed in an issue was that we would, depending on the target, determine the limit automatically, because there are normally specific limits for those targets—that would be one possible way of handling it. I am certain we could make it more explicit as well. I am very open to input on this.
+
+MM: So let me ask a specific question here which just might settle it: if the target-based distinction is only normal arrays versus everything else, which I think Mathieu told me it is, then since there is `Array.isArray`, which punches through proxies, there might not be a problem here—except for the cost of doing this through a proxy.
+
+SYG: Yeah. Mark, counting index versus non-index properties in the spec sense—if you include both the array and the typed array notions of index—will not be O(1). It's cheaper than the current way, but it won't be O(1).
+
+MM: If it's not O(1), then the question remains: what practical speedup would this proposal give us in general? And if the answer is "not much", then again I don't find the proposal very motivating.
+
+SYG: Are you talking about just the index case, or the normal case of just counting properties?
+
+MM: I am speaking specifically about the index case. The case that we find frustrating performance-wise is: here is an array-like—something that looks like an array. Does it have any non-index properties? That's the check that we need to be fast—not proportional to the number of index properties in the array, which might be enormous.
+
+SYG: So you don't care about the number of properties; you just care whether it has non-index properties?
+
+MM: That's correct.
+
+MAH: Yeah. We only care about that for array and typed array objects.
+
+SYG: I am going to say that sounds to me like a different problem statement than the problem statement presented.
+
+RBR: That's fair. Actually, that is also how it is used in Node, so to speak—in a very similar way. And I believe that is pretty much the common use case for the differentiation; that is the main one. The second one would be determining whether it's a sparse array, to have a fast path for that, which is less crucial—that does not have as big an impact. But counting, or generating, the index properties is very costly.
+
+RBR: Now, I know in V8 at least—the other implementations are a little bit different, and I don't know how they would look—this specific question, "does it have any non-index string properties?", could be answered in O(1).
+
+RBR: All right. Thank you very much, and thank you very much for the input. I am going to check the comments and would like to then follow up with everyone to see—
+
+SYG: I have a concern now that relates to Stage 1. Given what has been teased out in the conversation with Mark about the mode use case, there is a different problem statement—particularly about arrays—that is not actually about counting properties. What is the thing we got Stage 1 on?
+
+RBR: I thought all of the API?
+
+JHD: Exploring the problem space of all of these things. That allows for discarding some of these things during Stage 1.
+
+SYG: Please enumerate all of these things.
+
+JHD: If you want them all enumerated, I would have to defer to RBR. But as an example, having a fast path for non-sparse arrays is one of the problem statements, and it would be fine if we decided during Stage 1 that it is not a problem we're trying to solve, while we continue to solve the other ones. I will pass it to Ruben.
+
+RBR: Yeah. So I proposed the three different values, right? Index, non-index string, and symbol. And I know of four relatively frequent use cases. The most frequent one—
+
+CDA: We are five minutes over at this point. Is this something we can address by bringing it back later in the meeting?
+
+RBR: Yeah. Sorry—I understand there are other things. I guess we should continue this later.
+
+DE: Yeah. Maybe you could write that enumeration in the notes.
+
+RBR: Mm-hmm.
+
+MM: SYG, are you okay with this continuing with Stage 1?
+
+SYG: Let me talk to you later. I think so, but let's talk later.
+
+DE: Yeah. Maybe you should record in the conclusion that not everyone in committee was convinced of some aspects of the broader scope, and some people wanted the scope to be narrower. That would reflect the state of the discussion at Stage 1.
+
+### Speaker's Summary of Key Points
+
+Summary to be provided on continuation topic.
+
+### Conclusion
+
+Not everyone in committee was convinced of some aspects of the broader scope, and some people wanted the scope to be narrower.
+
+## Explicit Resource Management
+
+Presenter: Daniel Minor (DLM)
+
+* [proposal](https://github.com/tc39/proposal-explicit-resource-management)
+* [slides](https://docs.google.com/presentation/d/1F4kLwEUvBmyyTWq06HQgiJypcCWm3uwOzVDzFQ0xauE/edit#slide=id.p)
+
+DLM: Sure, thanks. I would like to present some feedback about the explicit resource management proposal. A quick reminder about what explicit resource management is: the basic idea is to add a `using` keyword, along with `Symbol.dispose` and the concept of a `DisposableStack`. Generally, the idea allows for automatic disposal of resources when the `using` variable leaves scope—for example, this simple little thing here. Where are we in SpiderMonkey? It's fully implemented. It's currently shipped in Nightly, but disabled behind a pref, and the current implementation follows the spec.
+
+DLM: So a while back, SYG opened this issue. There's a lot of conversation there. And it basically evolved into this.
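+A reconstruction of the contrast under discussion (not the actual slide code; `acquire` is a hypothetical function returning an object with a `[Symbol.dispose]` method):
+
+```js
+// Would be disallowed: `using` in a bare `case` that can fall through.
+switch (value) {
+  case 1:
+    using res = acquire(); // when should `res` be disposed under fallthrough?
+  case 2:
+    doSomething();
+}
+
+// Still allowed: braces give the `using` a clear block scope.
+switch (value) {
+  case 1: {
+    using res = acquire(); // disposed when this block is exited
+    break;
+  }
+  case 2:
+    doSomething();
+}
+```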
+
+DLM: We would like to disallow `using` in bare `case` statements. So the example on the left, where you have fallthrough from case 1 to case 2, is what we would like to no longer allow. You can insert braces and do your thing; in that case, it's clear that the `using` is in a block scope that corresponds to that one case.
+
+DLM: My colleague IID provided a nice example of how this could desugar. With case fallthrough, as you can see, things get a little bit weird. We would argue this isn't implementation weirdness, but actually a weirdness in how things are specified. And on the right-hand side we have the desugaring without fallthrough, which makes everything fairly clean and straightforward.
+
+DLM: So why make this change? Basically, this would mean that we would be able to know the scope of the `using` statically. In our implementation, we could get rid of the runtime dispose list that we are currently maintaining, and just synthesize try-finally blocks. This is more efficient and simpler. We believe it's dubious at best that people want this kind of C-style fallthrough behavior with `using`. We are willing to rewrite our implementation if this change is made, and we heard support from V8, who said they are willing to rewrite their implementation for this change.
+
+DLM: Alternatively, why not just create a scope outside of the switch? We are doing our best to be efficient when handling switch statements. Handling fallthrough would require adding a second pass, or maybe making a new scope and removing it if not needed—doing time travel. This is possible, but it's definitely extra work and complexity for code that is most likely written by mistake, not on purpose.
+
+DLM: Conveniently, RBN was kind enough to put together a pull request with these changes. So I would like to ask for consensus on making this change, pull request #14.
+
+MM [on queue]: Strong support of prohibition. Thanks! EOM
+
+USA: All right. We have MM on the queue, who says strong support. That was all. Let's give it a minute or two to see if folks have more thoughts.
+
+DLM: Yeah. There are comments with implicit support in the pull request as well. I will share that NRO was positive with regards to this change from the Babel point of view.
+
+JHD [on queue]: switch is bad and it's ok if people can't use a new feature in it, +1.
+
+PFC: I support this.
+
+USA: We have a lot of support, and no negative comments. So, DLM, you have consensus.
+
+DLM: Okay. Great! Thank you very much.
+
+### Summary
+
+Allowing the `using` statement in a switch statement with fallthrough complicates implementations. If we disallow this use case, implementations can desugar to try/finally blocks, which is simpler and more efficient. The proposal champion put together a pull request for this change: [rbuckton/ecma262#14](https://github.com/rbuckton/ecma262/pull/14).
+
+### Conclusion
+
+Consensus to merge [rbuckton/ecma262#14](https://github.com/rbuckton/ecma262/pull/14).
+
+## Non-extensible applies to Private for stage 1, 2, 2.7
+
+Presenter: Mark Miller (MM)
+
+* [proposal](https://github.com/syg/proposal-nonextensible-applies-to-private)
+* [keynote slides](https://github.com/syg/proposal-nonextensible-applies-to-private/blob/main/no-stamping-talks/non-extensible-applies-to-private.key)
+* [pdf slides](https://github.com/syg/proposal-nonextensible-applies-to-private/blob/main/no-stamping-talks/non-extensible-applies-to-private.pdf)
+
+MM: So, the normal request: recording on during the presentation, including Q&A during the presentation, and then recording off afterwards. Okay!
+
+MM: This is primarily by SYG and me; the actual proposal text was written by SYG. This particular proposal has several motivations, but first, for its history: it is extracted from the stabilize proposal. A very, very quick recap: stabilize was proposing new integrity traits, and broke them into these five integrity traits to consider. In the last meeting, when we talked about stabilize, we explained our hopes and dreams, which is that the fixed integrity trait—the one that we're talking about today—be bundled into the existing non-extensible integrity trait. It would not be a new integrity trait, but new behavior associated with the existing non-extensible.
+
+MM: What this new behavior is about is illustrated by the following code: the contrast between the subclass on the left and the subclass on the right. As an expository example, they both derive from the same superclass, `FrozenBase`, whose constructor, for whatever reason, just freezes `this`. On the left, the `AddsProperty` subclass adds a public named property to `this`—doing it, of course, after `super` returns; before `super` returns there is no `this` in scope. And once `super` returns, this will, as you expect, throw a `TypeError`.
+
+MM: On the right, we have what is essentially the same code, except that instead of adding a public property, we're adding a private field. And today, this does not throw. This actually adds the private field, even though the object is already frozen—and because it is already frozen, it is already non-extensible. The reason we get the `TypeError` on the left is not because the object is frozen per se, but specifically because it is non-extensible. We think this is counterintuitive; that is one motivation.
+
+MM: So what we're proposing is that the meaning of non-extensible be extended in a way that, we claim, is already intuitive—it is the thing that would be the least surprising—such that attempting to add a private field to an object that is already non-extensible would throw a `TypeError`.
+
+MM: That is a nice motivation, but it probably wouldn't, on its own, have motivated us to do something as dangerous as this. It's dangerous in that what we're proposing is not backwards compatible; we will come back to that. Another motivation is that the structs proposal is proceeding as a separate proposal; a lot of the rationale for it is that structs are essentially better classes—better in particular in ways that enable them to have a high-speed implementation. The problem with the current semantics is that this extensibility via private fields, combined with the return override, can be composed to force the engine to add a private field to an already constructed struct instance.
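+For illustration, a minimal sketch of the return-override pattern at issue (the names are hypothetical):
+
+```js
+class Base {
+  constructor(target) {
+    return target; // return override: subclass field initializers
+                   // run against `target` instead of a fresh object
+  }
+}
+
+class Stamper extends Base {
+  #hidden = 0; // stamped onto whatever Base returned
+  static has(obj) { return #hidden in obj; }
+}
+
+const frozen = Object.freeze({});
+new Stamper(frozen); // today: succeeds, adding #hidden to a frozen object
+Stamper.has(frozen); // true — a WeakMap-like channel, reachable by syntax
+
+// Under this proposal, `new Stamper(frozen)` would instead throw a
+// TypeError, because `frozen` is non-extensible.
+```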
+
+MM: And given the way private fields are implemented by, as far as I know, all of the high-speed engines, they would then have a choice: give up on structs necessarily having a fixed shape, which would hurt the performance promise—or have a path through the engine for adding private fields to structs that is completely different from the one for adding private fields to ordinary objects. Neither of which we like.
+
+MM: So with this proposal, this attempt to change the shape of structs—which is the only thing right now in the language that would imply that a struct's shape can change at runtime—would instead throw a `TypeError`. With that, as far as we can tell, structs can, faithfully to the spec, have a fixed-shape high-speed implementation.
+
+MM: The other motivation is mostly hinted at by this piece of code, which is that the ability to add private fields via return override to existing objects essentially gives the language something that is very much like a WeakMap, but accessible by syntax—and therefore also fairly global.
+
+MM: So over here, when we're trying to reason about communication channels, this weak map reachable by syntax is a problem. You might freeze the class and freeze the prototypes and all of the methods, so it all looks like there is no hidden state here. Then you take two other objects that you know not to have hidden state, for the normal meaning of state, and that thereby should not represent a communications channel. And then one party might use this hidden-map functionality to create surprising mutable state that the other party does not expect. This hints at larger problems with virtualization that I can get into if there are questions. Let's just say there are several different motivations that are quite crucial for both of us, and they all have the same simple solution.
+
+MM: And the solution is indeed quite simple. It is so simple that SYG initially raised the possibility of doing this just as a needs-consensus PR, which I will agree is reasonable. I prefer that anything that has an observable semantic effect, especially when there is a danger of incompatibility, go through the discipline of a proposal—but still one that we can hope to advance fairly quickly. These two changes are the entirety of the proposal.
+
+MM: These are the two operations in the spec that can cause a private field to be added to an object. And we're just proposing that both of these do a precondition check, an input validation check, to verify that the object is extensible, and otherwise throw a `TypeError`.
+
+MM: Now, with regard to the potential danger of incompatibility: would this break the web? Google has generously already deployed use counters to find out. The bad news is that over time the usage has still been growing; it has not been asymptoting. But the numbers here are like 0.000015%—they're tiny. A little more, by the way, on desktop than mobile; I think the 0% showing on both is just rounding error in the display. These are the six websites, all in Germany, where a problem was detected. And across all six sites, there are only two cases.
+
+MM: One case is this weird piece of code that we don't quite know the reason for—oh, the class is named `_`. So over here, it's looping through the fields of `_`—sorry, enumerating the public enumerable fields of `_`—in order to initialize a private field of `_`, adding a private field to `_`.
+ +MM: And the solution is indeed quite simple. It is so simple that SYG initially raised the possibility of doing this just as a needs-consensus PR, which I will agree is reasonable. I prefer that anything that has a semantically observable effect, especially when there is a danger of incompatibility, go through the discipline of a proposal, but still one that we can hope to advance fairly quickly. These two changes are the entirety of the proposal. + +MM: These are the two operations in the spec that can cause a private field to be added to an object. And we’re just proposing that both of these do a precondition check, an input validation check, to verify that the object is extensible, and otherwise throw a `TypeError`. + +MM: Now, with regard to the potential danger of incompatibility—would this break the web?—Google has generously already deployed usage counters to find out. And the bad news is that over time the usage has still been growing. It has not been asymptoting. But the numbers here are like 0.000015%. So, they’re tiny. And a little bit more, by the way, with desktop than mobile; I think the 0% showing on both is just rounding error in the display. But these are the six websites, all in Germany, where a problem was detected. And for all of these six sites, there are only two cases. + +MM: One case is this weird piece of code that we don’t quite know why it’s—oh, the class is named `_`. So over here, it’s enumerating the public enumerable fields of `_` in order to initialize a private field of `_`. But during the enumeration it is freezing `_` itself. The disturbing thing about the proposal is that this code, for whatever weird reason might exist, is currently correct. And the price of accepting this proposal is that this code would start misbehaving despite the fact that today this is correct code. + +MM: And likewise with the other case, which is perhaps more understandable, and which resembles our first counterexample: a superclass constructor that freezes `this`, with a subclass that then adds private fields. So given that in both of these cases we would be breaking correct code, which exists out there on this very small portion of the web, it is conceivable that browsers would not be willing to go forward on this. Google, as a cosponsor of this proposal, looking at these numbers, has decided that they themselves are willing to go forward. And it is also conceivable that non-browser implementations might object to the non-backwards-compatible change as well. That’s the question we have for the committee today. + +MM: And what we would like to ask for is first stage 1, but since this was something that was reasonable as a needs-consensus PR, if we get stage 1, we’re going to ask for more. Which we may or may not get. + +MM: So first of all, may we have stage 1. This is the actual, official statement of stage 1. So first of all, any objections to stage 1? + +MM: Okay. Any support for stage 1? + +[on queue] - support for stage 1 from DLM, DE, and DJM + +DLM: SpiderMonkey team is favorable about this change. + +MM: Great. Thank you. So, at this point, I think we can say we have stage 1. The stage 2 checklist we made for ourselves, derived from the official statement, is “also committee approved” and “spec reviewers selected”. This is the actual official statement of stage 2. Can we have two non-champion volunteers to review? + +MF: I’m—I’m confused. We didn’t reach stage 2, right? + +MM: Right. I’m asking because what I wrote down over here is that to get to stage 2, we need reviewers selected. Am I just wrong about that? + +MF: When we grant stage 2, we assign reviewers. + +MM: Ah. Excellent. Excellent. So can we—first of all, are there any objections to stage 2? + +SYG: I just wanted to give some more color on the data that was shown. So— + +MM: Great. + +SYG: MM is certainly accurate that Chrome is willing to try to ship this, at our suggestion. But that said, while the absolute percentage does seem small, do keep in mind this is sampling from page loads, and page loads are on the order of many, many billions. So even very tiny percentages can end up causing concentrated pain for a small percentage of folks who keep hitting the same error over and over, which might be bad. But in this case, the plan is—we already did reach out to this German GIS software, Cadenza. We have not yet heard back; I will try to ping and follow up with them. But the hope is that since this looks like an officially sold and supported product, they would be responsive to changing that one piece of code to a static initializer, which would be a very easy workaround for their code. + +SYG: The other two websites that were broken used the same Axial framework, which I cannot find any references to; if anybody is familiar with the German web design scene and firms that do that service, if they have any clues there, it would be much appreciated. But I don’t know how to do any outreach for that at all.
But given it is just two, and one of them is a music festival website—traffic for which I expect will die down after the music festival has passed—it really just comes down to this one other site. And I think that is not sufficient breakage to cause us not to try to ship. So really, right now, it comes down to trying to reach out to this company, Disy, that does the mapping software. + +SYG: I welcome any help that anyone wants to volunteer to also do the outreach, if they are interested in also seeing this change happen. + +MM: Any objections to stage 2? Great. Is there any support for stage 2? + +CDA: Do we have any explicit support for stage 2? So there’s nobody explicitly expressing objections. + +[on queue] support for stage 2 from DJM + +JHD: Yeah, I mean, I think it should be fine; like, I understand all of the rationales here and all of the cross-proposal, crosscutting concerns as to why this is valuable. And I see worse consequences if we don’t do this. So I like it, but it is unfortunate, because I really liked the simplicity of the weak map analogy for private fields. But this does explain it, you know, somewhat cleanly. So as long as it is web compatible, like, go for it. + +MM: Okay. So I kind of take that as support? + +JHD: Yeah. Yeah. Yeah. It’s—support plus I wanted to grumble a little bit. + +MM: Oh, yeah. Okay. I had the same discomfort when the idea first arose in stabilize. Okay. So, good. Now— + +CDA: Now, you need reviewers. Given there were no objections earlier, not seeing objections now, and multiple voices of explicit support, you now have stage 2. Which means now you need stage 2 reviewers. + +CDA: JHD has volunteered. + +MM: Great. Thank you. + +CDA: Do you think you need one more? Typically? We like to have— + +MM: Yeah. I don’t know what the requirement is, but certainly two is traditional. + +CDA: I think two is the minimum. + +DE: I’m happy to review. + +CDA: And Daniel will review. + +MM: Excellent! Excellent! Thank you. And now, could we possibly, in the same meeting, with two reviewers—do we need the reviews to happen before we ask for stage 2.7? If so, obviously, we can’t get stage 2.7 this instant. + +CDA: The acceptance criterion for 2.7 is that reviewers sign off on the spec. This is required for 2.7. + +DE: Yeah. I did review the spec before the meeting. And I would sign off on it. But it would need those other sign-offs, too. + +MM: Okay. Can we get those other—so the other people that would need to sign off: JHD, you said you’re a reviewer, would you sign off on the spec text you saw? It is really the entirety of the spec text. + +JHD: I would have to go back and look at it. But I’m comfortable with conditional approval, and I will check in the next 20 minutes. But the editors are the ones that definitely need to sign off. Yeah. So yeah, I approve that spec. That’s fine. + +MM: Okay. Great. And—are there editors who could weigh in in realtime? + +KG: Yes. Seems good. + +MM: I’m sorry, who was that? + +CDA: That was KG. + +MM: KG, hi. So, do you approve? + +KG: Yes. + +MM: Great. Is that sufficient? Do we need another editor? + +MF: I mean, I would personally prefer to have until tomorrow. I hadn’t looked at the spec yet. But I’m also comfortable deferring to KG. So that’s fine. + +MM: Okay. That’s great. That means that there is still a chance we can get it this plenary, which is really the only thing I care about. It doesn’t have to happen in real time. And—all right. + +CDA: Okay.
For the record, are we saying we are granting conditional advancement to 2.7, predicated on the editors’ sign-off? Now, KG just said he approved. MF was a little bit more ambivalent; we haven’t heard from SYG. + +SYG: I wrote the text. + +CDA: I forgot you are cochampion on this. Your sign-off is implied. + +MM: Okay. And once we have all of the sign-offs, does anyone in the committee object to 2.7? + +MM: Great. And does anybody on the committee support 2.7? + +CDA: Nothing on the queue so far. JHD supports 2.7. + +JHD [on queue]: same support + +MM: Great, that means we do have conditional 2.7. Waiting on MF, correct? + +CDA: I believe you need two explicit supports. + +DE: I also explicitly support 2.7. + +MM: Okay. Great. Thank you. Okay! Great. So MF, I look forward to hearing more from you later. + +MF: Yep. + +CDA: All right. I believe, if I’m not mistaken, that concludes your topic. + +MM: Okay. Great. + +### Speaker’s Summary + +* MM presented a new proposal, broken off from [proposal-stabilize](https://github.com/syg/proposal-nonextensible-applies-to-private), co-championed by SYG and others. It proposes to make private fields respect `Object.preventExtensions`. +* This proposal would patch up the current counterintuitive behavior of private fields not obeying non-extensibility, prevent hidden state creation via private fields, and improve performance so that non-extensible objects can have fixed memory layouts. +* The proposal is not backwards compatible and might rarely break existing correct code. +* Google has deployed usage counters and found minimal impact, but some websites in Germany (some of which use a German GIS product called Cadenza) might be affected. One website has minimal likely impact; it is for a temporary music festival. Google is trying to reach out to the affected German websites and Cadenza, but further help with outreach was requested by SYG. + +### Conclusion + +* The proposal reached Stage 1. +* It reached Stage 2 (reviewers JHD and DE, who have already signed off). +* It reached conditional Stage 2.7 (conditional on pending editor approval from MF; editor approvals from KG and SYG were already given). +* It reached Stage 2.7 later in the meeting when it got that approval from MF. + +## Continuation of Object.propertyCount for stage 1 or 2 + +Presenter: Ruben Bridgewater (RBR) + +* [proposal](https://github.com/ljharb/object-property-count) +* [slides](https://github.com/tc39/agendas/blob/main/2025/2025.04%20-%20Object.propertyCount%20slides.pdf) + +JHD: We have been discussing it in Matrix. So for the notes I can just say that it seems like what we need to do is potentially come up with a proposal for arrays that are sparse, and then remove the index stuff from the property count, and then come back and try to address the concerns that folks have indicated. The potential API surface, or the potential solution, for the property count proposal is large enough, and there has been enough varied pushback, that I’m not sure it is productive in plenary to go back and forth further right now. But we have a lot of people to talk to in the interim; I’m sure it would be a lively discussion at a future plenary. + +RBR: My suggestion for now would be just to remove the differentiation between index and non-index string keys. I assume that would address the concerns in the room, but I could be wrong. I would go for that for now. + +SYG: I think we’re running ahead a bit.
While a few of us did express specific concerns about the API shape you presented, I’m very supportive of one problem statement I heard: the performance issue of counting properties. You showed a bunch of examples that motivated it, the problem in the wild. During the discussion, another problem—which sounded like a very different problem to me—came up, which was around slow paths with arrays, whether that is sparse arrays or normal arrays with non-indexed properties on them. That is a different problem to me than counting properties. Perhaps the best way to solve it is to count properties, but the problem that you’re trying to solve in the arrays case is not actually counting properties. Right? That is what I would like clarity on. The stage 1 that we got agreement on: is that for counting properties, or counting properties plus whatever problem with arrays? My personal preference, because they sound like very different problems to me, is that they be treated as different proposals. But that’s my personal opinion. It is up to you all to decide how to frame the problem statement. + +JHD: My interpretation here is that we phrased this as `Object.propertyCount`, counting properties, because that seemed like the only solution to all of these use cases at once. I would say that a broader statement of what I was originally hoping to solve is generally comparing and describing objects and arrays, and avoiding performance cliffs whenever possible. And it’s totally reasonable if—so that’s how I would, and Ruben can speak to this as well, that’s how I would personally describe the problem statement. Maybe workshop it, and try to come up with a shorter version. And I think it is completely reasonable to say: well, why don’t we narrow that, within stage 1, into two separate problem statements, one about arrays and one about non-arrays, and then have two separate proposals. The one about arrays might, for example, do something like an `isSparse`, because it doesn’t necessarily need to count them, it just needs to determine if there are any. Things like that. Does that broader problem statement of avoiding performance cliffs when comparing and describing objects work for you, SYG? + +SYG: I would like it more scoped than that. The general problem of avoiding performance cliffs I think extends to a lot of implementation details that may be undesirable to expose. In particular, you might care if an object is in dictionary mode, which is usually much slower than fast mode. That is not anything we would ever want to expose to the web, but it evidently affects fast paths and slow paths, and I would categorically reject that as out of scope. And if it were just about avoiding falling off fast paths, given the shape an object is currently in, that sounds like it would be in scope. That broader statement—while it would encompass the array issue and the property-counting issue—is too broad for me to really figure out what is in scope that you’re thinking of. + +RBR: Yeah. So I agree with what you were just saying. And, like, exposing whether something is in dictionary mode or not, I wouldn’t be interested in. I don’t believe that is useful, because that’s something I believe is really up to the engine. + +SYG: Right. So that’s why, earlier, I was saying I would like an enumeration of what you consider to be in scope. It could be that the problem statement— I’m totally happy with the broad problem statement “we want to avoid performance cliffs, in particular, these performance cliffs”.
But all performance cliffs—that is pretty hard for me to think about. + +RBR: Yeah. So I believe there are already a couple mentioned. The question is whether we want to address them all with the one API, or if a couple should be separated. + +CDA: That is it for the queue. + +JHD: I mean, I think—I understand we want a specific problem statement that everyone is happy with for stage 1. We should have this before the end of this plenary. SYG, it sounds like in spirit you’re okay with it, but we haven’t come up with a wording that avoids including things like dictionary mode and all of that stuff. Right? Does it make sense to you that—like, does that resonate? That we just haven’t come up with the phrasing, but we’re probably on the same page as to what we want to describe? + +SYG: It sounds like you care about property counting and something to do with arrays, concretely, and nothing else. Yeah, that sounds right. + +JHD: Yes. + +SYG: So, I am very enthusiastic about property counting, solving that performance issue with the allocations. I’m very skeptical about how we can solve the array part at all. I still don’t think that is a stage 1 blocker. But if you choose to just glom together those two problem statements into one for the proposal, then just be clear that I’m very skeptical of the second part. + +JHD: Right. I would say for the time being we do. But based on all of the discussions it is highly likely that we would want to come back with a narrower problem scope in the future for this proposal, and perhaps a new proposal to account for the part that was removed. + +SYG: Yeah. + +RBR: Good. And I do have one question. That is the differentiation of an array and an object. Because in the end, for me, as a user, an array is always an object. I personally try to avoid it, but I have seen a lot of code just accepting any input, which could be an array or an object, and they just use, for example, `Object.keys()` on it. That’s very, very expensive to do. So that’s where I’m not certain about the array versus `Object.keys()`—how to differentiate them? + +SYG: Sorry, was that a question for me? + +RBR: Yeah. + +SYG: I mean, you differentiate them by—I see, okay. So, if the problem statement were improving the performance of counting properties—because you think, in your experience, the performance of counting properties of arrays versus non-array objects is very different—then that distinction falls under the problem statement of counting properties? + +SYG: Like, your problem statement: I want to solve performance of counting properties. Now you’re saying: I want to solve distinguishing arrays and objects. Is the distinguishing thing a necessary step to solve the counting-of-properties performance? + +RBR: No. They are just different kinds of algorithms, and for input validation, for example, you probably want to make sure that the array does not contain any additional properties on it. Yeah? + +SYG: Let’s take a step back. Now, I heard a third problem, which is input validation. + +RBR: Yeah, that is something that I mentioned. + +SYG: Is the goal that you want one API? You have a list of use cases and want one API that fits all of them? Is that the actual goal? + +RBR: And the API just fits these different aspects. It is used as a fast path. And that was, I believe, also in the first or second slide in this case. For many algorithms—and among the use cases, I believe I mentioned seven.
And a fast path in general is a very big one for a lot of things—for example, input validation as well. + +JHD: I guess to answer your question as well, SYG: it doesn’t matter how many APIs solve these problems; the more of them they solve, the better. This specific solution happens to be one API that addresses all of them. If that API seems too complex for the subset of use cases that you or any other delegate finds compelling, then it’s fine, we can split that up into multiple separate proposals and APIs. You know, it is not a binary thing. Right? As RBR said, there are seven use cases; solving one is better than zero, and solving six is better than one. Right? So, it is more that we looked at these problems and this API seems to address them all. And especially given recent engine concerns about the number of methods being added, it seemed desirable to come up with one method that covered all of the related use cases. It is fine if that isn’t palatable. + +SYG: Okay. So the problem statement is: here are these use cases. The problem statement is just like, here’s a burn-down list, we want to fix these. Is that the most accurate framing? + +JHD: Yes. + +RBR: Yeah. + +SYG: Okay. Okay. I see. Yeah, I personally don’t have an issue with stage 1 for that problem statement. + +RBR: Thank you. + +### Speaker’s Summary + +* The proposed problem space: Developers need a performant way to count properties on an object, without allocating intermediate arrays. + * `Object.keys(obj).length` is very common in real-world JavaScript code. + * Other use cases presented included input validation, object comparison, sparse-array detection, and telemetry, especially in hot paths. +* A proposed API solution: an `Object.propertyCount` function that takes an options object allowing filtering by key type (`'index'`, `'nonIndexString'`, `'symbol'`) and enumerability (`true`, `false`, or `'all'`); a usage sketch follows this list. +* There was broad support for the core use case, counting enumerable “own” properties. +* There was pushback about various proposed features, especially about those dealing with sparse arrays, as well as distinguishing between index keys and non-index string keys.
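+
+A hypothetical usage sketch based on the API shape summarized above (the option names below are guesses, not the proposal’s final spelling):
+
+```js
+const obj = { a: 1, b: 2, [Symbol("s")]: 3 };
+
+// common pattern today: allocates an intermediate array just to count
+Object.keys(obj).length; // 2
+
+// the proposed counterpart would count without allocating
+Object.propertyCount(obj);                                              // presumably 2
+Object.propertyCount(obj, { keyTypes: ["symbol"], enumerable: "all" }); // presumably 1
+```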
+ +### Conclusion + +* Consensus to progress to Stage 1. + +## Continuation: Don't Remember Panicking + +Presenter: Mark Miller (MM) + +* [proposal](https://github.com/tc39/proposal-oom-fails-fast/tree/master) +* [slides](https://github.com/tc39/proposal-oom-fails-fast/blob/master/panic-talks/dont-remember-panicking.pdf) + +SYG: I wanted to understand—since it is a host hook, it does not call back into JS, which I’m not at all sure of; I think that would be disastrous. But as just a host hook, I’m not quite sure how this would help Agoric’s code. + +MM: So, right now, Agoric runs our critical code on XS. XS does immediately stop executing on out of memory, out of stack, or internal assertion violation. And XS is willing to give us an explicit panic built-in, but it would prefer to do it if the committee agrees on it—which is not an answer to your question, because we are already talking about postponing the panic built-in until later. So, altogether, the host hook by itself doesn’t directly help Agoric. It helps the language, it helps programmers writing in the language, it helps the committee, and I would argue it helps the web side of standardization, because it gives all of those standards processes a place to talk about realities, like out of memory, that right now the spec is in denial of. + +MM: You made a claim previously that I have continued to not understand; maybe you can clarify. The claim was that the host continuing execution after an out of memory—continuing to execute JavaScript code—is not a violation of the spec. I certainly agree that it is reality that memory does exhaust, but I don’t see how it cannot be a violation of the spec. And I would also ask you: if a particular host continued, not by throwing an exception, but by simply having the place where the fault occurred return 7, would you also consider that to be not a violation of the spec, and something that programmers should know to be prepared for? + +SYG: So, I do retract my statement that it is not a violation of the spec. I agree with you that it is a violation of the spec. What I was driving at was that it is not a violation of the spec that is useful or can be acted upon in any way; it is impossible to conform to the spec, as you have previously pointed out, for that particular violation. So, while it is, pedantically speaking, a violation, I don’t think of it as— + +MM: So, since it is reality, and since we are a standards committee that has basically two primary audiences—people writing JavaScript code and people implementing JavaScript engines—and since, whatever the hosts do, the people writing JavaScript code need to know about it, this gives the hosts an opportunity to explain the behavior of their fault handler, so that JavaScript programmers can consult that, if the host documents it. We’re not insisting that the host document it. But, for example, on the web side of standards: the purpose of all of these standards committees in the first place is to reduce the gratuitous behavior differences between browsers. That was one of the core initial motivations for both web standards and TC39 originally. This would explicitly make it discussable in web standards, without web standards having to say “this is how we violate the JavaScript spec”. The web standard could say: here is the behavior of our fault handler. We are not demanding that they do that; it provides the opportunity. + +MM: And for those who are formalizing JavaScript—like the KAIST folks in South Korea, who are doing a heroic job of turning JavaScript into something with a formal semantics, such that you can do proofs about JavaScript code—right now, the path of least resistance, which I believe everyone who has done a formal semantics of JavaScript is following, is to assume the spec text actually covers the contract with JavaScript code. Even leaving what the host does in the fault handler unspecified, simply saying that these conditions delegate to the host handler would be a very good hint to those formal semantics that they should include that possibility as part of the semantics, so that proofs of correctness of JavaScript code do not prove correct code that does not work in reality. + +CDA: MAH? + +MAH: Yeah. So I think the host hook by itself, as MM said, doesn’t achieve much. But it gives us a place to discuss what happens when the situation occurs. And in particular, the hope is to also have a mechanism to configure the behavior of the host, so that if we encounter an out-of-memory condition, or a user panic or something like that, the creator of the agent or the first-run script can say: “if I encounter these conditions, please kill me instead of continuing”.
This is in particular really useful for workers that have been spawned from the main thread, as it may not always be possible to reliably notify the main thread that such a condition occurred. For the main thread itself, you could do it. But for a worker, I would prefer for the worker to be killed and for the supervisor—the main thread—to be notified that this has happened, so it can take further actions. + +ABO: Yeah. The HTML standard has a section about aborting a running script, which in the case of HTML is sometimes needed for killing the current document. It also discusses things like memory limits and so on. Or, like, if an API blocks the main thread, such as `window.alert`, the user can block script execution. Or in particular, if script execution is disabled in the middle of an infinite loop, the HTML spec describes that the running script should be killed. So maybe this should be moved into Ecma-262. I don’t know. But yeah. This currently isn’t in the Ecma-262 spec. But it is not like it is not spec’d for web browsers. [MM shows slide 28, with HTML issue] This is not related to the three HTML issues that MM is currently showing in the current slide. This is something that has been a part of the spec for a while, but I guess it is kind of related. + +[https://html.spec.whatwg.org/multipage/webappapis.html#killing-scripts](https://html.spec.whatwg.org/multipage/webappapis.html#killing-scripts) + +MM: Yeah. So if there are other places in web standards that I should know about that discuss this issue, please let me know offline. These were the three that I found. But these three are specifically talking about what the minimum abortable unit is; they are talking about what I’m here calling the static agent cluster, which is just, in my opinion, unfortunately large. But once again, the fundamental ask here is the host hook. And if the host hook wants to terminate something larger than the minimum abortable unit, I would just like to give it the opportunity to document that that’s what it is doing. + +ABO: I was mentioning this in response in particular to the concern about having the host hook not return. + +MM: Oh, oh. + +ABO: So I think that would be allowed by this section of the spec that allows aborting a running script. + +MM: Right. Okay. So yeah. So right now, the actual text in the ECMAScript spec says that the host hook must return a normal completion or a throw completion. So you’re saying the HTML description of browser behavior is exactly that the host, whether through the host hook or not, under some conditions neither returns a normal completion nor a throw completion—it doesn’t proceed into JavaScript at all. I would like the JavaScript spec to acknowledge that this might happen. Otherwise the HTML spec and the JavaScript spec are simply logically incompatible. + +MAH: I’m not sure that is quite true. If there is no further execution in that environment, it is not observable; it is basically equivalent to the host having never returned. + +MM: Well, yeah. But this says that the host hook must return. I’m being picky on language here. Maybe it is not what it meant to say. That is what the actual text in the JavaScript spec says. + +MAH: Yeah, I’m not sure how that would be observable anyway, if it doesn’t return. Yeah—I’m done. + +MF: Yeah. I don’t think I have the same hang-up as you do about the phrasing here with the host hook. We say *what* it must return.
We don’t say *when* it must return. + +[Laughter] + +MF: It could take until just before the heat death of the universe and then return. Is that the same for you? I don’t think it is as strong a requirement as you’re reading it. + +MM: That’s—okay. The spec is trying to not simply be denotationally correct; it’s trying to be explanatory. But anyway, I’m not terribly hung up on the particular text here. What I do feel strongly about is that the spec itself should somewhere—whether it is here or not—be clear that, going back to the original motivation, in particular out of memory, which is still the most problematic case, is part of reality, and that different hosts might choose different policies. But somebody doing a semantics of JavaScript, and other people trying to prove correctness of their programs with reference to a mechanized semantics of JavaScript—any such semantics of JavaScript needs to take into account the possibility that hosts might resume JavaScript execution, if they indeed might. Otherwise, you prove correct programs that then misbehave without there being a bug anywhere. + +CDA: All right. MF, you’re also next on the queue. + +MF: Yeah. I just wanted to kind of get an understanding of the relationship between what browsers do today when there’s an unresponsive main thread and they kill that, versus what your proposed minimum abortable unit is. Have you looked into what the various browsers kill? What that unit is when it is unresponsive? + +MM: I don’t know. From the three HTML discussions I take it that they are at least considering standardizing on the static agent cluster. Which is sound. But it just seems unfortunately large compared to the minimum that they could do instead. But I don’t know. + +MF: But they are considering that in theory, right? Have they discussed at all what is done today? + +MM: Okay. So—enough browser makers in the room. Could some browser makers comment on what they think they do today? + +SYG: Not authoritative, but I think it kills the process. There is no notion of a dynamic agent cluster. I think that would be pretty much unimplementable and nondeterministic. Like, we’re talking about figuring out what portion may be live and reachable from which thread, finding the minimum set of such threads, and somehow just joining those threads. So I don’t think that happens. So it is just a process. + +MM: Okay. So Chrome likely kills the static agent—well, I’m sorry. Are static agent cluster and process the same thing? + +SYG: I don’t know. Like, I think so. There may be some origin stuff in play, like same-origin agent clusters and stuff like that. Outside of those details it is pretty much a process, I’m sure. + +MLS: I believe it is the same thing with Safari: it kills the web content process that is running the page. + +MM: Oh, the other place with a correctness issue, in the absence of the spec admitting these various faults, that came up in my talk, is the infinite loop. Code might actually engage in an infinite loop in order to prevent further progress from that point. Engines might, in violation of the spec, continue processing in that agent past that point anyway. And the code trying to protect itself now does damage with corrupted state—state that observers outside of the program definitely should not have been able to observe.
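+
+A sketch of the defensive pattern MM describes, under the proposal’s assumed API (`Reflect.panic` per the summary below; the infinite loop is today’s best-effort fallback):
+
+```js
+function abandonShip(reason) {
+  if (typeof Reflect.panic === "function") {
+    Reflect.panic(reason); // proposed: invoke the host fault handler to terminate
+  }
+  for (;;) {} // today: deliberately hang rather than continue with corrupted state
+}
+```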
+ +CDA: SYG? + +SYG: I’m supportive of figuring out how to better talk about real-world resources in our specification. I’m not supportive of the goal of adding a host hook in the hopes of eventually exposing it as a configurable toggle. + +MM: Okay. Can we divide that into separate questions? I certainly want to expose it as one way you can opt into something other than the default behavior. But let’s separate out that question. Since you’re supportive in general of the JavaScript spec being more explicit with regard to these problematic conditions, specifically resource exhaustion: would simply the host hook, as an explanatory mechanism, as some place where, for example, the HTML spec could explain what browsers agree on, or individual browsers could explain what they do—is there anything about that by itself that you would object to? + +SYG: As an editorial explanatory device? Sorry, I see MLS is on the queue. + +MM: But, yeah, I would like to get your response. + +SYG: As an editorial explanatory device, I would prefer that we reflect reality through other editorial means. Because a host hook here suggests two things: one, that it is somehow configurable by the host and programmable by the host. Whereas I think the honest way to reflect reality is to say this is implementation-defined. + +MLS: Yeah. Yeah, just a quick comment. We’ve actually written tests to make sure that we can recover from an out-of-memory exception when there’s a proper try/catch that would pop off the frames that are responsible for that. So it’s kind of tricky to code that in a reliable way, but we can recover from that. The engine itself doesn’t die when the user creates an out-of-memory exception due to what they’re doing. + +CDA: All right, we are past time, but Matthew is on the queue. + +MHN: Yeah, really quick. I didn’t quite understand SYG’s reply at the end. Because a host already has a choice today of either raising an error on an out-of-memory error, or it can take the choice of panicking the agent. So it is already a reality that the host is free to decide this. + +SYG: Yeah, a host, loosely speaking, is a collection of implementations; HTML is a host of the JavaScript spec. There is nothing we can write in HTML that would be beyond it if it is just implementation-defined. + +MM: I think we can adjourn at this point. + +MM: I think we received a lot of good feedback, and clearly SYG and the champions can continue a lot of this conversation offline as well. + +CDA: Okay, thanks. That brings us to the end of day two. See everyone tomorrow. + +CDA: Big thanks to everyone and especially our notetakers for the day. Really appreciate it. + +### Speaker's Summary of Key Points + +* MM presented the “Don’t Remember Panicking” proposal, renamed from the Stage 1 proposal “OOM Must Fail Fast”. +* The presented problem is that robust transactional code (e.g., financial applications or medical devices that need integrity more than availability) needs to be able to explicitly request termination when unrecoverable runtime faults occur, yet JavaScript hosts today handle these fault conditions inconsistently. +* The new proposed solution: + * A HostFaultHandler hook to deal with internal faults within the current “minimal abortable unit of computation”. + * A built-in Reflect.panic function for developers to explicitly invoke the HostFaultHandler hook. +* There was pushback against Reflect.panic and giving web developers the capability to excessively halt programs, particularly webpages.
It was proposed to split Reflect.panic into its own proposal to allow the rest of the host-handling mechanism to be considered separately. +* It was pointed out that there is no current common interoperable behavior defined for when browsers run out of memory. There was extensive discussion over the extent to which real-world resource management and fault conditions are already currently specified by Ecma-262 and HTML, and whether they should be developer configurable. +* There was general agreement that Ecma-262 should more robustly specify the current reality of how memory and other real-world resources should be handled. + +### Conclusion + +* Extensive discussion. +* Still in Stage 1. diff --git a/meetings/2025-04/april-16.md b/meetings/2025-04/april-16.md new file mode 100644 index 00000000..56d483bc --- /dev/null +++ b/meetings/2025-04/april-16.md @@ -0,0 +1,994 @@ +# 107th TC39 Meeting + +Day Three—16 April 2025 + +## Attendees + +| Name | Abbreviation | Organization | +|------------------------|--------------|--------------------| +| Waldemar Horwat | WH | Invited Expert | +| Nicolò Ribaudo | NRO | Igalia | +| Michael Saboff | MLS | Apple | +| Samina Husain | SHN | Ecma International | +| Eemeli Aro | EAO | Mozilla | +| Jesse Alama | JMN | Igalia | +| Dmitry Makhnev | DJM | JetBrains | +| Richard Gibson | RGN | Agoric | +| Philip Chimento | PFC | Igalia | +| Daniel Minor | DLM | Mozilla | +| J. S. Choi | JSC | Invited Expert | +| Bradford C. Smith | BSH | Google | +| Ben Lickly | BLY | Google | +| Ashley Claymore | ACE | Bloomberg | +| Istvan Sebestyen | IS | Ecma International | +| Ron Buckton | RBN | Microsoft | +| Chris de Almeida | CDA | IBM | +| Jonathan Kuperman | JKP | Bloomberg | +| Aki Rose Braun | AKI | Ecma International | +| Shane Carr | SFC | Google | +| Zbigniew Tenerowicz | ZTZ | Consensys | +| Gus Caplan | GCL | Deno Land Inc | +| Mikhail Barash | MBH | Univ. of Bergen | +| Ruben Bridgewater | RBR | Invited Expert | +| Daniel Ehrenberg | DE | Bloomberg | +| Michael Ficarra | MF | F5 | +| Ulises Gascon | UGN | Open JS | +| Kevin Gibbons | KG | F5 | +| Shu-yu Guo | SYG | Google | +| Jordan Harband | JHD | HeroDevs | +| John Hax | JHX | Invited Expert | +| Stephen Hicks | SHS | Google | +| Peter Hoddie | PHE | Moddable Inc | +| Mathieu Hofman | MAH | Agoric | +| Tom Kopp | TKP | Zalari GmbH | +| Kris Kowal | KKL | Agoric | +| Veniamin Krol | | JetBrains | +| Rezvan Mahdavi Hezaveh | RMH | Google | +| Erik Marks | REK | Consensys | +| Keith Miller | KM | Apple | +| Mark S. Miller | MM | Agoric | +| Chip Morningstar | CM | Consensys | +| Justin Ridgewell | JRL | Google | +| Daniel Rosenwasser | DRR | Microsoft | +| Ujjwal Sharma | USA | Igalia | +| Henri Sivonen | HJS | Mozilla | +| James Snell | JSL | Cloudflare | +| Jan-Niklas Wortmann | | JetBrains | +| Chengzhong Wu | CZW | Bloomberg | + +## Intl Era Month Code Stage 2 Update + +Presenter: Shane Carr (SFC) + +- [proposal](https://github.com/tc39/proposal-intl-era-monthcode) +- [slides](https://docs.google.com/presentation/d/1wvJoRFa8nRjlYSHuVLpxx-wCfwt4H9NIw2fsGJ72gxs/edit#slide=id.p) + +SFC: I’m going to be going through the Stage 2 update on Intl era month code. First I’ll start with a little bit of a reminder: what is this proposal? The goal of this proposal is to make operations in Temporal interoperable across calendars and eras.
For example, there’s a lot of things on this slide here that are not specifically covered in the Temporal specification, and yet are things that we think developers should expect to be interoperable. So, for example, if you specify the year 10 in the era BH in the calendar “islamic”, that should correspond to the year 10 BH in the Islamic calendar that you wanted. Although using calendar “islamic” here is a little bit misleading, as you’ll see later in the presentation. But we believe that this type of operation is something that should be able to be made interoperable. So in other words, every conformant implementation of Temporal should have the behavior listed on this slide, and it should work.
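+
+A sketch of the kind of interoperable operation described above (the era code spelling is illustrative; the actual codes are discussed below):
+
+```js
+// year 10 of the era "bh" should resolve to the same date in every
+// conformant implementation
+const date = Temporal.PlainDate.from({
+  calendar: "islamic",
+  era: "bh",
+  eraYear: 10,
+  month: 1,
+  day: 1,
+});
+```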
+ +SFC: There’s been a lot of changes since last time this has come up. So just a little bit of a preface here: FYT, my colleague, did a lot of work on this proposal a couple of years ago and then sort of took a break from working on it, and Temporal in general. And more recently, I’ve sort of taken up the mantle of this proposal, so I’ll be sharing the updates today. So one of the biggest changes was in era codes. + +SFC: So previously we had been using a scheme for era codes that had certain properties. It favored a general framework that would apply to all the different calendar systems without having to dive into the details of any individual calendar, but also had the property of all era codes being globally unique. Basically what we did was we took the BCP 47 ID for the calendar and used it as the era code, and then for calendars that had reverse eras, like the BC era in Gregorian, we tacked on “-inverse” at the end (of the name). That’s what we had previously done. + +SFC: However, we got feedback from a lot of different delegates that this scheme was confusing, and one piece of feedback that resonated with me was that, since these are also the names of the calendars, using them as the names of eras is a category error; having to read the same word twice when you’re reading your code and getting things in the debug output doesn’t really tell you what an era is, and using the actual names of the eras is more useful there. + +SFC: So we’ve now adopted something on the right, in the new column here, which uses the names of the eras as the identifiers. In order to generate these, basically the rule was: if there is a commonly known Latin-script acronym for the era, use that, and if not, use a transliteration. So many of these calendars have well-understood Latin-script acronyms, and for the ones that don’t, such as the Indian and Republic of China (ROC) calendars, we use the transliteration, so you can see that there. + +SFC: One thing I want to point out is that the codes are no longer globally unique with this scheme. For example, the era named "am" means three different things depending on what calendar you’re in. In Coptic, it means anno martyrum, in Ethiopian it’s amete alem, and in Hebrew, it’s anno mundi. And at least one source suggests that "am" can also mean minguo in ROC. So we definitely lose that property by adopting this new scheme. + +SFC: Next slide: era of the arithmetical year. This is a concept we have in Temporal. It means that when you create a date with a year, but without an era, what era do we use as the index for the year? We had previously not clearly defined this. The main thing here is that for Chinese and Dangi, we’re now using the related ISO year as the arithmetical year, so it means you can write code such as what’s shown on the bottom. This is what the Temporal polyfill is already doing, even though it wasn’t written in the spec anywhere, and I asked users on the ground and this seems to be straightforward to people. Basically, using the western year that has the greatest overlap with the lunar year. So as an example here, the Chinese year 2024 starts some time early in 2024 and ends early in 2025. It means if you write code such as this—I wanted to highlight this—you end up getting month 12, day 8 being in ISO year 2025 even though it’s in Chinese year 2024, but this seems to work the way you expect it to work. I wanted to highlight that here.
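+
+A loose reconstruction of the slide’s example (the exact code is not reproduced in the notes):
+
+```js
+// Chinese year 2024 is the lunar year with the greatest overlap with ISO 2024
+const d = Temporal.PlainDate.from({
+  calendar: "chinese",
+  year: 2024, // arithmetical year, indexed by the related ISO year
+  month: 12,
+  day: 8,
+});
+// d lands in early ISO 2025, even though it is Chinese year 2024
+```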
+ +SFC: Next slide. Hijri calendars: we did a lot of research on Hijri calendars. It turns out most Islamic countries rely on physical observations of the moon, so we can’t accurately predict what the date in the Hijri calendar will be for any region. There’s an interesting anecdote about Eid this year, which is the celebration for the end of Ramadan: half of the countries observed the crescent moon on the 29th and half of them observed it on the 30th, and it meant that half of the people in the world ended their fast a day earlier than the other half. And this is not the kind of thing that we’re currently able to represent in software, because it requires basically realtime live data. So a simulated Hijri calendar is at best an approximation, and not something that can be used for displaying an actual reliable date, because of that problem. + +SFC: So I’ll point out that some operating systems such as Windows actually solve this problem by allowing users to go and set a number in their operating system to adjust; I guess every new moon, you’ll go in there and set whether your adjustment is plus one or minus one or zero. And that’s a proposal we could possibly entertain for ECMA-402. We’re not currently doing that, but this is a direction we could possibly explore. But the problem of simulated Hijri calendars not really working is still there. + +SFC: Another problem with these simulated calendars is round-trip-ability. The Hijri calendar simulations are subject to change over time, so dates might not round-trip. For example, if you were to create a date in year 3000 and you try to recover it later, that may or may not actually be recovered, because the Hijri calendar simulations are subject to change in order to better match ground truth, even if it’s really hard to match ground truth. + +SFC: Here is a draft solution. We discussed it briefly in TG2 and we’re not 100% aligned on it, but I want to show you the direction we’re thinking of, and there are links so you can comment on this. The direction we’re thinking about is focusing on the three Hijri variants that have some sort of ground truth. There’s the official Hijri calendar of Saudi Arabia; they publish their almanac even into the future, using astronomical calculations to compute it. So we basically ship the results of their calculations, and that works for a range of several hundred years. There’s two others which are based on a well-known arithmetic cycle algorithm—tbla and civil, called tabular Type II epoch—and those are ones we can ship. In certain regions these calendars are sort of used as reference points when you can’t do the observation, and then you might use these. So I want to show on the next slide an example of what this might look like in code. + +SFC: So currently the calendar named “islamic” is the simulation-based calendar, but the proposal is that the calendar “islamic” will be resolved by Intl.DateTimeFormat into a concrete calendar such as umm-al-qura. For example, if you’re in Saudi Arabia, it might resolve to umm-al-qura. As a reminder, this type of thing is already done by Intl.DateTimeFormat: it already has the behavior of mapping calendars when calendars are not supported. If I put in a calendar that is not supported, like here, in the en-US locale I’ll get the default calendar “gregory”. There’s nothing new here in terms of algorithm; everything here is already conformant with the spec, which I just want to call out, because that’s very important. And on the Temporal side, Temporal is strict: it will accept only the calendars that it supports. It won’t do the fallback. + +SFC: And the constraint that we’ll have is: any calendar that Intl.DateTimeFormat resolves to will be supported by Temporal. That’s the constraint, so your code that loads the locale calendar and passes it to Temporal should continue to work. We’ll continue to make sure that that constraint is upheld. + +SFC: Let me keep going through the slides. I want to highlight that spec text is still in progress. A lot of these things are written out in issues, but not yet reflected in the spec text; if you go check the spec text, it’s still the version that was there two years ago. Hopefully this will be resolved soon. Yeah, relationship with CLDR: we made recommendations to CLDR. What happened before is that we asked CLDR, can you come up with era codes, and CLDR came up with era codes, and they’re the ones we ended up not liking in Temporal. In order to not repeat that mistake, this time we, as the champions of this proposal and in TG2, came up with these recommended codes and sent them to CLDR to adopt. + +SFC: And so far it looks like they’re likely to adopt most or all of our recommendations. And this will be a better outcome. + +SFC: I also want to highlight that this is not specific to era codes: this is issue #2869. There’s a problem here about what you do with distant dates. Say you’re in year 25,000 in the Chinese calendar. That’s very far away; we cannot accurately predict the ground truth that far into the future. And in general, dates more than a few hundred years away are not that widely used. Let’s just say 99% of dates represented in computers are probably within a span of 100 years, right? And then it goes down, and there’s a long tail there. So these are very rarely encountered dates. It’s already somewhat unusual to encounter dates in the Chinese calendar, but then it’s even more rare to encounter them more than a few hundred years away. This is definitely an edge case. + +SFC: This leads to two different camps we had in TG2. One camp says "this is an edge case, and it doesn’t matter what we do in the edge case, so let’s go ahead and fall back to an approximation".
The other camp is "this is an edge case, and we should inform the developer it is an edge case by throwing an exception". The exact same facts lead to two different interpretations. The philosophy the Temporal champions have generally employed is no data-driven exceptions, and given that developers are not likely to thoroughly test locales in their application, we apply best-effort behavior. In Intl, you pass whatever locale you want and it will give you some result. And that result could improve as more data gets added, as more locales get added, but it will give you some result. We call that best-effort behavior. So this code here, like I’ve written here, is code that I think should always work, where you take your locale calendar and then you give it some date, which could come from an external source, and you’re able to get a Temporal date in that calendar. Code like that should not just break randomly, in an implementation-dependent way. + +SFC: So my preferred approach, which is currently the approach posted in the issue—and I haven’t seen a really viable alternative to this; if you have one, post it in the issue—is that we fall back to the approximation for distant dates and do the best-effort behavior, and we can have a follow-on proposal for users who care about this, where we go ahead and expose information about whether this is a “safe” date—“safe” is a word that we can debate—basically, is this date backed by an almanac or a reliable source. The answer for Gregorian will be true for the whole range, and for Hijri it will be however long the almanac goes, and so forth. And this could also be reflected in the Intl formatting and such. That’s the end of my presentation. Happy to answer questions. And, yeah— + +USA: Let’s go to the queue. First on the queue we have Stephen Hicks. + +SHS: Is there an option for requesting Hijri adjustments from the OS? + +SFC: Yeah, we don’t currently have the Hijri adjustments, but that could definitely be something we could explore. We should make an issue about that. So thanks for bringing that up. + +DLM: Thank you. First I wanted to say I just wanted to thank SFC and the members of TG2 for taking the time to investigate the Hijri calendar and the astronomical simulations; that was something that in some ways arose from our implementation of Temporal. I just wanted to talk a little bit more about that, to bring color to what Shane said. We’re definitely concerned about the “islamic” calendar, which is an astronomical simulation of moon rise. The implementations in ICU4C and ICU4X do not agree. It’s a simulation, not an observation, and there’s currently no way of specifying the observation point; ICU4C and ICU4X, the last time I checked, were using separate observation points, and I don’t believe ICU4C’s is specified. And there have been at least some reports from users that we were generating not just inaccurate dates, but also impossible dates, for example, months with the wrong number of days. So I agree with everything that SFC said, and I wanted to make sure that others in the committee were aware that the simulations are definitely problematic, and at least we’re exploring the possibility of not shipping them, which also aligns with what SFC presented. We have very little evidence that these are being used on the web—that’s by examining a corpus of websites—and I’m planning on adding telemetry to see how much use we see. + +USA: Next is you again, DLM.
+ +DLM: Yeah, separate point: I’d just add on to what SFC presented about out-of-range dates, using the example of the Chinese calendar where things are maybe 25,000 years in the future or something, which sounds like a lot. But as he also alluded to, for Hijri, if we’re using a tabular data source, we might only have a few hundred years in the past and a limited window in the future—or no window in the future—for which the dates will be accurate. + +USA: And that was it for the queue. Would you like to make any concluding remarks, Shane? + +SFC: Sure. + +SFC: Since I’ve taken over this, we probably should have maybe another Stage 2.7 reviewer. Seems like Dan Minor has been quite involved with this kind of thing, so he might be a good choice. + +DLM: I’d be willing to do that, and I can also ask if Henri would like to have a look at it, since he’s also been quite involved. But I can volunteer, and perhaps Henri will take it over. + +SFC: Thank you. + +EAO: I’m happy to continue. + +USA: For the notes, that was Eemeli, and that was it, I guess, right, Shane? + +SFC: That’s all I have for this topic. + +### Speaker's Summary of Key Points + +SFC: I gave an update on Intl era month code, focusing on the changes in terms of era codes, arithmetical years, simulated Hijri calendars, and out-of-range dates. The exact details have yet to be actually written down in spec text, but I anticipate that that will happen soon. I hope to come back to committee for this proposal going to Stage 2.7 in an upcoming meeting this year. We currently have, I believe, the Stage 2.7 reviewers from the last time we presented this, which were me and Eemeli. + +### Conclusion + +An update was given. + +## Compare Strings by Codepoint for stage 1 or 2 + +Presenter: Mathieu Hofman (MAH) + +- [proposal](https://github.com/tc39/proposal-compare-strings-by-codepoint) +- [slides](https://docs.google.com/presentation/d/1eTuB1jjgb2_xG_zMNmkhleJx1F0QviMEwkkBUL9ezPQ/) +- [pdf slides](https://raw.githubusercontent.com/tc39/proposal-compare-strings-by-codepoint/19c5470bfb02acb4988708f5979d12720fa4c4c7/compare-codepoint-talks/compare-by-codepoint.pdf) + +MAH: I am here to talk about string comparisons. So first, a little reminder: what exactly are strings? In the ideal, Unicode says they’re a sequence of values with code points between U+0000 and U+10FFFF, minus a range that is used for UTF-16 surrogates. In JavaScript, we represent strings as a sequence of 16-bit code units—the UTF-16 encoding of those Unicode code points—while allowing lone surrogates, so we can have technically malformed Unicode strings in JavaScript. Any Unicode values outside of the basic multilingual plane are encoded as a surrogate pair, as two code units. For humans, what strings are is just a sequence of graphemes. It’s what they can visually recognize as characters, and that can actually be a series of multiple Unicode code points. A classic example is emojis, which are usually a combination of Unicode values. Here is a bit of an example of how a string that appears to humans decomposes in graphemes and code points. The letters that I used in the word “emoji” here are not all in the Latin range, but are lookalike letters, some of them in full width, some of them in the mathematical range, and then I put an actual emoji, which decomposes into multiple code points.
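+
+A small illustration of the grapheme/code point/code unit distinction just described (using a plain emoji rather than the slide’s exact string):
+
+```js
+const s = "a😀";   // two graphemes
+s.length;          // 3 — the emoji takes two UTF-16 code units
+[...s].length;     // 2 — string iteration yields code points
+s.codePointAt(1);  // 0x1F600 — the emoji’s full code point
+s.charCodeAt(1);   // 0xD83D — only the leading surrogate code unit
+```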
+ +MAH: In JavaScript, if you look at what the composition actually looks like in code units, you can see that the code points in the higher range actually decompose into multiple code units. All right. So code units are unfortunately a concept that, for historical reasons, shows up in a bunch of places throughout the language. Whenever you access a string through an index property, you are actually talking about the code unit position in the string, and that means all String APIs that talk about offsets or lengths are relative to the code-unit representation of the string. When you are trying to match or test a string with a RegExp, similarly, by default it matches by code units unless you’re using the specific Unicode RegExp flags. + +MAH: When you’re comparing strings—array sorts or just the regular less-than or greater-than operators—you are also comparing the strings based on their code-unit representation in JavaScript. But these days, there are alternatives that allow you to actually work with a string’s code points. When you take a string and you iterate over it, you end up iterating over the string as a series of code points. You can ask what the code point at a certain code unit position is: while the input offset is in code units, what you get out is the full code point without breaking the value up. To be able to match or test a sequence of code points in a string, you can use the “u” or “v” flags for RegExp, and now you have recovered the ability to match the string by Unicode code point. For comparing strings, though, it’s less clear what you can do if you want to compare strings by their code points. There are some comparators in the language that are codepoint-aware, but let’s look a little bit closer at exactly what these comparators are. + +MAH: There’s two of them: there’s `localeCompare` on the string prototype and also the new `Intl.Collator` `compare`. Effectively, these are the same—as far as I understand, maybe someone can correct me—but as far as I understand, they behave the same. They both are locale-dependent. Because they’re locale-dependent, any changes in how Unicode says the locale should treat some characters can change the result over time. And the other thing is that, since they’re locale-dependent, it varies with the environment in which JavaScript runs and what the locale implementation is. This is a variation of not being stable: it depends on what the current implementation is, and that can also change. There’s actually a proposal in Stage 1 about having a locale that’s stable, but that wouldn’t quite help, because there’s also another issue with locale comparators: they’re meant for humans. What that means is that they do some special processing for some characters. + +MAH: So there is a series of characters that are defined by Unicode to be confusable to a human; that means they basically look the same. As I mentioned earlier, I used the word emoji, but I used characters from different ranges that, for a human in some conditions, often look the same but actually are not the same Unicode value, and the locale comparators group those together: they will not compare the same, but they will be next to each other in the comparison. It also collapses characters in the same equivalence class. In Unicode, there’s often different ways of representing the exact same character. + +MAH: I’ll give a couple examples now. These are the results that you get from using the locale comparators with some of these values.
+
+MAH: I’ll give a couple of examples now. These are the results that you get from using the locale comparators with some of these values. Here I used a full-width Latin letter, whose code point is in the basic multilingual plane but above the surrogate range. I also used a mathematical character, which is not in the basic multilingual plane, and then I used just a Latin-1 character. If you sort them through the built-in comparison for strings, you end up getting something that is not in Unicode order, because the mathematical character is encoded as two surrogate code units, which sort before the full-width letter. If you sort them with the locale comparators then, as I said, because the locale comparators group confusable characters, you end up sorting by what humans would consider the sort order, which is ABC in this case.
+
+MAH: And finally, here is an example of characters in the same equivalence class. This is E with an accent. If you compare these two Unicode representations, even though they are represented differently, they end up comparing the same.
+
+MAH: So what this proposal is about is the request for a portable comparator. Why do we need a comparator that compares by code points? Well, we need it for data processing, really. As I mentioned, the locale comparators are code point aware, but they’re meant for humans, and have sorting rules designed for humans. We need something that is meant for computer systems, and mostly, for compatibility with other systems. There are many languages these days that represent strings as a series of UTF-8 code units. Some examples are Swift, Go, and Rust; there are probably a bunch of others.
+
+MAH: In particular, relevant to us, SQLite uses UTF-8 for its string representation by default. And a property of UTF-8 is that a byte comparison of the UTF-8 code units gives the same order as the Unicode code points, so all these languages and systems end up sorting strings by their Unicode code points. So what I’m proposing here is something like `String.codePointCompare`, a comparator that compares by code point values. The exact name can be decided, but the outcome that I want is that, when applying it to the example values I had previously, the sort order would end up being CBA, which no other comparator currently gives me.
+
+MAH: Why do we need this? This is an example of our use case; in the proposal repo, I have also linked to some Discourse discussions about requests that are similar to ours. In our case, we implement custom collections. These collections have a well-defined sort order: each type comes before another in that sort order, but within a type, we use the intrinsic order for that type. For numbers it is obvious how they sort. For strings we want to use a well-defined string order. And then for types that we cannot compare, like object references, we either use insertion order or we don’t allow such incomparable values in these collections in the first place. The exact rules of our collections are not very relevant; what we need to understand is that some systems have collections that don’t use insertion order but a well-defined sort order, and strings need to have a well-defined order in that case.
+
+MAH: What is interesting to know about our collections is that they can have different backing stores. To users, they have the same interface and they work the same, but some are backed by JavaScript in ephemeral memory—when the program restarts, they’re gone—while others are durable and backed by a SQLite DB under the hood. And this is where a compatibility question comes up: we need compatibility between the iteration order of these two implementations.
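+
+(For reference: a userland comparator along the lines of the shim MAH showed might look like the following sketch. This is illustrative; the specific characters are reconstructions of the slides’ lookalike letters, and the proposal’s actual shim may differ.)
+
+```js
+// Compare two strings by Unicode code point. String iteration yields one
+// code point per step, so walk both strings in parallel and compare the
+// first code point that differs.
+function codePointCompare(a, b) {
+  const itA = a[Symbol.iterator]();
+  const itB = b[Symbol.iterator]();
+  for (;;) {
+    const ra = itA.next();
+    const rb = itB.next();
+    if (ra.done) return rb.done ? 0 : -1; // a ended first (or both ended)
+    if (rb.done) return 1;                // b ended first
+    const ca = ra.value.codePointAt(0);
+    const cb = rb.value.codePointAt(0);
+    if (ca !== cb) return ca < cb ? -1 : 1;
+  }
+}
+
+// Three lookalike letters: "𝐀" (U+1D400, mathematical bold), "Ｂ" (U+FF22,
+// full-width), "C" (U+0043, Latin).
+const letters = ["\u{1D400}", "\uFF22", "C"];
+letters.toSorted();                            // C, 𝐀, Ｂ: code unit order ("CAB")
+letters.toSorted(codePointCompare);            // C, Ｂ, 𝐀: code point order ("CBA")
+letters.toSorted(new Intl.Collator().compare); // typically 𝐀, Ｂ, C ("ABC")
+```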
+
+MAH: And that is it for my presentation. Any questions? I see a lot on TCQ.
+
+KG: Yeah, I’m in favor of this. I’m also in favor of having more easy comparators in general; like, we don’t even have a built-in way of comparing numbers in the language, for example. And if I can find time, I will try to pursue something along those lines in the near future, and I think that might end up affecting the design of this proposal. Probably not, but if we are going to add a bunch of comparison operators, I think it would make sense for them to be as coherent as possible. I don’t think this is a blocking concern, certainly not at this stage. It might be something that we would like to think through before Stage 2.7, though. Anyway, I support this going forward.
+
+MAH: Thank you.
+
+SFC: Yeah, so you talked a little bit in your presentation about the use cases involving SQLite, et cetera. I was wondering if you could elaborate a little more on what advantages you see in having this implemented in the standard library as opposed to in userland. You have your shim on slide 13, which is 10 or 12 lines; it’s not that hard to write in userland. Are your concerns about a built-in being more efficient? Are you worried about this code being tricky to use correctly? Or are you more concerned that this is a very widely needed use case, motivated because everyone should be needing it?
+
+MAH: A little bit of all of these. So let’s actually start with the last one. I believe most people don’t realize that they’re doing the wrong comparison on strings: they’re using UTF-16 code units when they should be doing some other comparison, depending on the sort order they’re actually looking for. In general, the regular sort comparison is not what they would want. The other part is performance. Yes, you can implement this in userland. However, not all engines implement strings the same way under the hood. Sometimes it’s more efficient to iterate over multiple strings like this using iterators; sometimes it’s actually more efficient to use an index and use codePointAt. This one is tailored to the engine that we use the most, but that doesn’t mean it is going to be the most efficient everywhere. And no matter what, a native implementation is of course going to be more efficient than the userland one.
+
+SFC: Yeah, I have just two replies to those. The first one: you said that the UTF-16 sort order is—I forget the exact adjective you used, I think unexpected or wrong—but it’s a well-defined sort order, and it’s the most efficient sort order that’s going to be possible from UTF-16 strings. And it’s perfectly fine if you need the property of strings being sorted: for example, if you’re using something like a B-tree map that requires a total order of strings, code order is fine, UTF-16 order is fine. Right? This gets a little bit into my comments that are coming up later, but I guess I’m a little bit confused by your assertion that UTF-16 order is wrong, because it is fine as a total ordering, and if you want a human ordering, that’s what Intl Collator is for. And UTF-8 order is no more correct than UTF-16 order, because they’re both total orderings of strings.
+
+MAH: It’s okay if the only systems your program is interacting with are systems with a similar encoding. Any time you have to deal with another system and need to process your data in the same way, the UTF-16 encoding is most likely not going to be appropriate.
+
+SFC: Okay. Yeah, I’ll save more, because I have another topic about this later. My next comment was: if performance is a concern here, I think it would obviously be helpful to see benchmarks. If this proposal is being motivated by performance, it would be nice to have, say, a WebAssembly implementation versus this shim and see if one is significantly faster than the other, or some other way to give a ballpark for what the performance is going to be.
+
+MAH: I’m not sure how—I mean, besides having this implemented in the engine that we use, I don’t see how I can get performance numbers, because with WebAssembly there’s a bunch of other overhead that would come into play. WebAssembly doesn’t have a string representation, so it’s a can of worms. I’m not sure how I can get performance numbers for a proposal at Stage 1, besides doing the implementation in an engine.
+
+SYG: Just a clarifying question. I think folks in the Matrix helped me clear this up, but I want to check with MAH. What do you mean by portability? I thought you meant some code doesn’t work exactly the same across systems. Do you mean that, or do you want the semantics to be easily understood, without surprise, by JS programmers working across both JS and, say, SQLite?
+
+MAH: The second part. As I mentioned, we have collections that have two backing implementations. One is a heap representation using JS Maps, and the other is backed by SQLite. So when we’re iterating over such a collection, because the collection has a well-defined sort order, we end up iterating according to the sort order of the backing implementation. In JS we use Maps, but we sort the keys, which ends up using the native sort order if we’re not careful. We actually had some issues where we forgot, and ended up using the native sort order in the heap implementation, and that would iterate over keys in one order: if you use the three letters from the example, it would come out as CAB. If we used our SQLite implementation and relied on the SQLite order, we would end up with what I actually expect, which is CBA.
+
+SYG: And the point, with your custom backing-store collection, is that it is stored by SQLite, and at some point SQLite does the sorting and gives you the sorted order?
+
+MAH: Correct. When you get the results of a query in SQLite—we’re asking SQLite to sort by keys, and it automatically sorts the keys according to its string representation.
+
+SYG: Okay, that clarifies it for me, thanks.
+
+USA: Reminder that we have around five minutes to go and a few items on the queue, so let’s be brief and quick. Next we have—oh, I assume, MAH, that you want to proceed with the queue. But you can ask for Stage 1 at any point, or prioritize the queue as you see fit.
+
+MAH: Yeah, let’s go over a few more items.
+
+ABO: Yeah, so I think this is needed, because it’s not just that the regular comparison gives a different result. It’s that most developers are not aware of the details of encoding and would not expect JavaScript to give a different result than Python or SQLite or Ruby and so on. And even I, who am familiar with encodings—UTF-8 versus UTF-16, surrogates, and so on—hit this when implementing the sorting of strings in Nova. I don’t know if you remember, Aapo Alasuutari gave a talk at the Helsinki plenary last year. I was implementing string comparison in that engine, and we have strings as UTF-8—or, well, WTF-8, extending UTF-8 to allow lone surrogates—and I didn’t realize that the regular comparison would not match JS. And if I didn’t realize that, when I’m comfortable with encodings, the average developer definitely could not be expected to realize it.
+
+ABO: We not only need to add this, but we need to let developers know that they should not use the regular comparison when they’re interfacing with other systems, unless they know that the other systems are using UTF-16 code units—which is JavaScript, Java, I think C#, and not much more. Pretty much everything else uses the equivalent of comparing by UTF-8 or code points.
+
+MAH: Yeah, I mean, it’s the same for us. We know about Unicode, and we still forgot about the comparison when we were sorting. So I take your point that this is going to require some developer outreach.
+
+MLS: I think your shim answered my question, which is: do you intend that codePointCompare would sort multi-code-point emojis, for example? I think the shim does that.
+
+MAH: I mean, it sorts them by their individual code points.
+
+MLS: Right. But if you have two emojis and they differ in the third code point, it’s going to sort them based upon the comparison of that third code point?
+
+MAH: Yeah, correct.
+
+SFC: Yeah, this follows a little bit from my previous question, but if you’re interoperating with something like SQLite that uses UTF-8, presumably you have UTF-8 strings in memory, like in an ArrayBuffer, using a TextEncoder. If you already have the UTF-8 strings, you could be sorting the UTF-8 strings, not the JavaScript strings. I was just wondering if you could address that—
+
+MAH: We actually don’t—so what happens is that the UTF-8 strings are stored in SQLite, but when we read them out, they basically come out as JavaScript strings, mostly through JSON parsing. I’m not going to go very deep into the details, but yes, at some point we have it in binary form, though I’m not even sure our system ever ends up seeing an ArrayBuffer of those.
+
+SFC: Okay.
+
+MAH: I think the only place where it shows up is in the bindings of the SQLite library.
+
+USA: We are almost at time. There are two more items on the queue. But MAH, you might want to—
+
+MAH: I will ask for Stage 1 here first. Do I have some support for Stage 1?
+
+WH: I support Stage 1. This should have been done long ago. It fixes a bug that dates back to the beginnings of Unicode.
+
+MAH: Thank you.
+
+USA: Also—thanks, KG. Also on the queue, we have—
+
+USA: We have MF with support, JHB supports Stage 1, and CDA also says support for Stage 1. Let’s maybe give a couple more seconds for any more comments. Also on the chat, MLS with more supporting comments. Congratulations, you have Stage 1. Would you—
+
+MAH: Let’s go maybe to—WH, do you have anything else to say?
+
+WH: The only other comment I had was that this really has nothing to do with UTF-8, since UCS-4 also sorts in the same way as UTF-8.
+
+MAH: It’s just that UTF-8 is the most common case, yeah.
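+
+(To make the equivalence concrete: comparing UTF-8 bytes lexicographically agrees with comparing code points. An editorial sketch using TextEncoder, assuming well-formed strings, since TextEncoder replaces lone surrogates with U+FFFD.)
+
+```js
+// Lexicographic comparison of the UTF-8 encodings of two strings.
+function utf8Compare(a, b) {
+  const enc = new TextEncoder();
+  const ba = enc.encode(a);
+  const bb = enc.encode(b);
+  const n = Math.min(ba.length, bb.length);
+  for (let i = 0; i < n; i++) {
+    if (ba[i] !== bb[i]) return ba[i] - bb[i];
+  }
+  return ba.length - bb.length;
+}
+
+utf8Compare("\uFF22", "\u{1D401}") < 0; // true: matches code point order
+"\uFF22" < "\u{1D401}";                 // false: UTF-16 code units disagree
+```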
+
+WH: This happened because surrogates were added late to UTF-16, when Unicode folks realized that they’d need more than 64K characters. They couldn’t use the encodings at the end of the 16-bit range, which were already used for other things. This causes the irregularity when you compare surrogate pairs with characters between U+E000 and U+FFFF.
+
+MAH: That’s exactly the problem. That’s the problematic range, exactly.
+
+MAH: MF, I would love to hear your question, if we have time.
+
+MF: Yeah, I can do it quickly, sure.
+
+MF: So, yeah, I generally support the proposal. But in your examples, you showed the kind of assumption that there would be a single function that compares your strings, and I think there might be some more general thing underlying here that we could do. I would like to see, during the Stage 1 process, that you explore solutions that are maybe a bit more general, where we compare just two arbitrary iterables, and make it more ergonomic for the string use case by having a string iterator that yields numeric code points rather than the single-code-point strings that the string iterator yields today. I think we could have a generic solution that’s still sufficiently ergonomic. We could probably do both, but I would like to see that explored, to see how good it would be if that were a possible solution on its own.
+
+MAH: I think KG expressed something similar in the past. My main concern with that approach is that, because it relies on iterators, I am not sure how well engines would be able to optimize for it. Here, at the very least, the engine can recognize the comparator being passed to sort and doesn’t technically have to invoke it. Iterators are notoriously hard to optimize.
+
+WH: I don’t understand MF’s comment. I don’t know the generalization of —
+
+USA: Unfortunately, we are out of time, though. We would have to bring this back in a continuation.
+
+MAH: Michael, can you maybe file an issue that will help Waldemar understand the request?
+
+MF: Yeah. Will do. (opened [#6](https://github.com/tc39/proposal-compare-strings-by-codepoint/issues/6))
+
+MAH: Thank you.
+
+USA: And thank you, MAH.
+
+### Speaker's Summary of Key Points
+
+I presented a proposal, for Stage 1, to explore comparing strings by their Unicode code points. The motivation is compatibility with other languages and systems that use that sort order. There were some clarifying questions regarding when different string comparators should be used, and a request to explore the intersection with iterator-based comparators. Some delegates highlighted that the default sort order can be surprising for any developer not familiar with JavaScript’s string encoding, and a need to document this better.
+
+### Conclusion
+
+Stage 1
+
+## Update to Consensus policy
+
+Presenter: Michael Saboff (MLS)
+
+- [slides](https://github.com/msaboff/tc39/blob/master/TC39%20Consensus%20Apr%202025.pdf)
+
+MLS: This is a continuation of the conversation that we had in Seattle. I asked for an hour; I don’t think this is going to take an hour, but we will see. This has caused conversation in the past. I think from Seattle there’s general agreement that there is a problem we need to deal with: single dissenters. It’s rare, but there have been some issues in the past. I also took away from Seattle that there’s no desire for a major process change; our social norms seem to be enough to guide us for 9X% of cases, where X is a pretty big number. 98%, say. And I also took away that there’s no need to have two objectors. I originally proposed 5% at Seattle, and people thought that was too onerous, having to figure out what 5% is, and so on and so forth.
+
+MLS: There was some sensitivity about having dissenters from the same ECMA member, or possibly from different members with a financial arrangement between them. So, for example, in the case of Apple: if the two dissenters were both Apple delegates, that doesn’t seem right. I agree that’s something we should figure out how to handle.
+
+MLS: And last, I think MM brought this up, and others: any system we come up with can be gamed. Any changes we make are not going to change that; maybe they make it more difficult to game. But it’s my hope that the TC39 members are acting in good faith, and generally, I believe that is the case. So those are the takeaways I took from Seattle.
+
+MLS: The goal for TC39 decision-making is, in my mind, an orderly, deliberate, open, welcoming, and inclusive process, where the delegates and experts in attendance can discuss and evolve JavaScript for the whole ecosystem—not just developers or implementers, but everybody, including end users. And it should be based on social norms, with flexibility in the system. We operate by general consensus: we just had a proposal go to Stage 1 where there was general agreement among people—let’s investigate codePointCompare.
+
+MLS: So I am going to propose a minor change to withholding consensus. The check marks are what we already do—maybe we need to be reminded—that delegates clearly explain the reasons for withholding, including possibly acceptable changes to a proposal such that they would support it. I am going to skip the second line and come back to that. We do want the reasons for withholding consensus to be recorded in the minutes; that is helpful not only for the champions but also for other people, to go back and remember why something didn’t move forward. And withholding delegates should be willing to discuss a possible path forward with champions. The last two things are already being done.
+
+MLS: What I would like to propose—and I think MM came up with this—is that we don’t necessarily have to have a second delegate withhold consensus. They could also just voice support. Think of somebody making a motion and somebody seconding it; that’s what we are discussing in the case of somebody who voices support for somebody else withholding consensus.
+
+MLS: And so we basically have two people, and we don’t want them to be from the same ECMA member. Again, this could be gamed, because people from different members could agree to withhold consensus. But again, the expectation is that this is done in good faith.
+
+MLS: And that’s one thing I would like to discuss: the second delegate, the one who says “I second that” or “I support that”, may not think that they themselves would withhold consensus, but they understand the reasons why the dissenter does.
+
+MLS: And the last point is: can an invited expert withhold consensus? I think they could be a supporter and second it, but the reason I am bringing this up is that if you look at the ECMA bylaws, only members are allowed to vote. For example, probably at the May meeting, we will vote on—I think we already voted on—ECMAScript 2025, sending it to the Executive Committee and the GA, but only members can vote on that.
+
+MLS: So this is what I am proposing. It’s two things: one, that we have a second dissenter—a second person withholding consensus—or a person who supports the sole dissenter; and that neither of these can be from the same member company, or have an obvious financial relationship between themselves. And then I would like to discuss where invited experts fit into this policy.
+
+MLS: So that is it. I don’t have the queue available because of how I am sharing the screen, but I will leave this slide up. So… that was 9 minutes. Let’s see how long we discuss this.
+
+DE: Well, I think it is good to have a way to overcome certain vetoes, and good to acknowledge the state of our decision-making procedure. One thing that Rob and the chairs suggested last meeting was around having what you could call a cooling-off period. Anyone can block a proposal during one meeting—including, maybe, an invited expert—saying it’s not going to advance this meeting. And then we cool off. The objector or objectors clearly state their reason. And then, as a follow-on topic at the very next meeting, we can have an agenda item which is considering moving past, or overriding, the objection. The person who decides they want to specifically invoke this procedure makes a presentation: they explain the objection and why they think it shouldn’t be a blocker. And then we see if there are multiple people objecting. This procedure could be invoked no matter how many people, one or multiple, gave the specific objection. And then the committee, given sufficient time to think things over, could make a collective decision on whether to move past it. I think this thing about taking time to overcome objections is more important than some of the details about the threshold: whether it’s two people or more people, whether it includes invited experts. The most important thing is that we’re very conscious and resolute when we make these decisions.
+
+MLS: DE, if you want to bring that forward, that’s fine. I haven’t thought about that and worked through it. I generally agree that something like that would be useful, but I think that’s for another time.
+
+DE: I could make another presentation about this, where I propose it. I was—
+
+MLS: That’s what I am saying.
+
+DE: I do think we should adopt these two things together, though.
+
+MLS: Okay.
+
+DE: Because if we just do this kind of weakening without the other safeguard, it could leave us in tricky situations. I do want to understand better why you think that this is separate from what you are proposing.
+
+MLS: Because I haven’t thought about it. I would like to—
+
+DE: Okay. Sounds good.
+
+SHN: I just want to make a comment; it’s not necessarily a question. It came up on MLS’s slide, and it is the question of invited experts. As you are all aware, the invited expert role is based on ECMA rules. They don’t vote. I understand that in TC39, when you do temperature checks, it’s different than voting. I think here, in this particular discussion, perhaps we need to think about the invited experts and whether they can withhold consensus. Ideally, I don’t think that would be the way to go forward, but I leave it there for discussion.
+
+CDA: Just on this point—I am not on the queue—it’s been long-standing practice that, in the spirit of committee, blocking concerns from invited experts are respected, as are blocking concerns from people who are delegates of ECMA members that are not able to vote. So I don’t think it would be practical or fruitful to go down the slippery slope of determining whose voices are more important than others’.
+
+SYG: Could you point me to some examples for invited experts—I am not exactly sure when invited experts have actually blocked.
+
+MLS: We could.
+
+CDA: Long-standing historical precedent comment…
+
+DLM: We’re opposed to the financial relationship qualification. We have a financial relationship with Google, as does Apple. And our current process basically requires implementations across V8, SpiderMonkey, and JavaScriptCore. So if we move ahead with not being able to do anything based upon financial relationships, a proposal could advance to Stage 3 and beyond over the objections of two of the three implementations. So I don’t think this is right.
+
+MLS: DLM, I wasn’t thinking about the financial relationship that you have with Google, and Apple has with Google—I was thinking of contractual financial arrangements. But yeah, you bring up a good point.
+
+CDA: All right. NRO?
+
+NRO: Yeah. I agree with not having a blanket ban on blocks from companies with financial relationships, exactly for the reason DLM just said. But it would simply be good to have some wording about this. There are cases—I work for Igalia, and we are paid to work on things—and it should be disallowed for some other company in the committee to, say, try hiring us just to push their proposal with them. All of this needs to be somewhat based on good faith, because we cannot enforce it. But at least having some guidelines, some wording on this, would be good.
+
+SFC: Yeah. Regarding the financial relationship thing, DLM already brought up the three-browser-implementer problem. But the other thing is that very many of the organizations here have financial relationships with companies like Igalia and so forth. And Igalia is also quite a big company, with a lot of different delegates working on a bunch of different proposals, and it doesn’t necessarily make sense that if one delegate working on one proposal backs a delegate from a different proposal, that should be disallowed. It’s almost only a concern if they’re in a very tight relationship—which I think is the spirit of it—but that’s just very, very difficult to define. Yeah. That’s all.
+
+CDA: Thank you. There’s Michael Ficarra with a +1 to NRO’s comment. Let’s go further down the queue, to DLM again.
+
+DLM: I wanted to point out that the current process allows for blocking solely on something being added late to the agenda, and I think that’s important to maintain. I don’t think we should need a person to second that: if people legitimately haven’t had a chance to review something because it was added late to the agenda, that should continue to be a sufficient condition to withhold consensus.
+
+DE: Yeah. DLM, I didn’t think about including that here, but I agree that if you are not within the ten-day window, that’s more of a process thing; it’s not based upon “we don’t like this” or “we don’t want this change”. I support that.
+
+DE: Yeah. If we do say that it takes multiple meetings to overcome a block, then I think this follows naturally, and that serves the fundamental reason DLM is raising this: everyone should have a lot of time to review things and think them over.
+
+CDA: For the record, for some reason your mic went really quiet that time. Anyway… Waldemar is next.
+
+WH: I am a bit uncomfortable with creating second-class citizens out of invited experts. Can invited experts still review proposals?
+
+MLS: I don’t have a problem with them reviewing proposals. I think the issue is more one of keeping with ECMA’s bylaws and policies. And you can think of the case—again, in my mind, gaming the system—where an invited expert comes to one meeting just to block a certain proposal. Not that that is going to happen. But it could.
+
+WH: A lot of things _could_ happen. But I think we’re focusing too much on the identity of whoever is supporting or opposing rather than the rationale. I think the rationale is more important. I don’t see that much of a difference between invited experts and delegates, other than official standing within ECMA. TC39 has explicitly not done formal voting other than the annual votes to push out a new version of a standard —
+
+MLS: Well, actually, TC39 does more voting than probably all the other TCs, because a dissenter is a negative vote—it’s a veto. So we vote far more often than the other TCs. All the other TCs work by consensus and don’t take votes except when they are advancing a new version of a standard. So we do it quite often. Not every meeting do we have a dissenting vote on a proposal moving forward, but that’s a vote. So I want us to recognize that.
+
+WH: I disagree with this characterization of everything as a formal vote. I am also uncomfortable with not being able to support proposals. Or are you saying invited experts can support, but cannot oppose?
+
+MLS: I am saying that an invited expert should not be able to be the lone dissenter, but certainly they could give support.
+
+WH: That seems wrong.
+
+SYG: I’ve been somewhat uncomfortable throughout—not just this discussion, but throughout my whole working history in TC39. I have never really quite understood to what extent we are to uphold the ECMA bylaws, because we seem to operate in opposition to a bunch of them. I understand we have a lot of sway, and I understand we have been operating in our own way for a long time. But we are still a body within ECMA, and we have a kind of legal and IP umbrella through ECMA. So I don’t even understand what flexibility we are afforded here. It seems to me, for the invited expert question, the ECMA bylaws are pretty clear. So if this is actually under discussion, as SHN suggests—as she herself said—I would like to hear better from the ECMA administration what they see as the flexibility that TC39 has to operate in a way that isn’t according to the bylaws.
+
+SHN: SYG, thank you. A fair question. You know, this discussion is raising a point, or multiple points, on how TC39 works versus, I would say, all the other technical committees. Voting is typically something that we do more at the General Assembly, and that is only the ordinary members. Within TCs, as MLS said, it’s done by consensus. Mind you, other TCs are much smaller and have to find this point of consensus much less frequently, when they are finalizing their standards. This is different for TC39.
+
+SHN: I am always trying to be pragmatic and ensure that the work that every technical committee does brings value. I do think that as we think more and more about this topic of consensus, what it brings up is becoming tricky. I also understand WH’s comment: you don’t want two classes of citizens. SYG, I don’t have a clear answer, and I would appreciate the chance to give a much clearer answer at the next plenary, in person. I will go and think more deeply about this in the broader context of TC39. And I’m sorry for that; I know it’s beyond the agenda today. But some points brought up today have touched some very important points of our rules.
+
+MLS: So let me see—like I say, I can’t see the queue because I am displaying full screen. Let me see if what I say now is acceptable: that we need a second delegate, either to withhold, or to second—as it were, to support—a sole dissenter; and that they can’t be from the same ECMA member company. Taking those statements together, is that an acceptable change to our policy?
+
+WH: Who are you asking?
+
+MLS: I am asking the committee —
+
+CDA: I just wanted to respond on that particular aspect and some of those other aspects, like ECMA rules, or what invited experts can and cannot do. I think details like that are important, and especially relevant if there’s going to be such a significant process change. But I think that we’re putting the cart before the horse a little bit there, because I don’t think that those particular details are going to move the needle on whether this committee wants to resolve the higher-level process change to begin with. So, with that, I would like to keep moving through the queue.
+
+DE: So briefly, we have been operating in what you could call a superposition of multiple different possible policies. Different chairs in the chair group, even, have different opinions on whether invited experts can block. And when invited experts do block, it’s ambiguous whether the block is real and the proposal is getting blocked, or whether the proposal champion is voluntarily backing off because they got strong feedback. I’ve been telling some of the people in the chair group privately for a while that this should be made more unambiguous, but it’s politically fraught, as we are seeing now. I think overall TC39 does follow ECMA rules, and I don’t see any mismatches. ECMA has a voting procedure that TCs can use, but most TCs don’t use it, and we are similar. If there’s some other mismatch with the rules, we should definitely get the rules changed. We have already gotten several changes made in the ECMA rules to accommodate TC39. It’s straightforward to make ECMA rule changes: it’s a simple majority of the General Assembly. And, you know, as president of that General Assembly, I am happy to help you get a new rule change through our process.
+
+JHD: Yeah. I mean, separate from the ambiguity, I think it’s important that invited experts and delegates are afforded equal rights. ECMA exists to serve its committees; if its bylaws are not serving the committee, then they must be changed. And we should pursue that if it turns out there is a conflict, which it doesn’t seem like there is. But I wanted to state that. We aren’t here for ECMA; ECMA is here for us, and for all the other committees.
+
+DLM: I wanted to second what CDA said.
+
+DLM: We could go down a deep rabbit hole talking about invited experts, and it’s important we go back and talk about the overall proposed change to the process.
+
+MM: So first of all, let me just make it unambiguous that I object to this overall thing. But I am very glad to see that what is being asked for has been whittled down substantially from the thing that I objected to much more strongly—in particular, the fact that the supporter does not need to be objecting, just lending support or something. I wanted to clarify that, since MLS, when you raised that, you cited me as the suggester, so I am going to clarify my suggestion. MLS started his proposal—maybe it’s on the previous slide—by simply saying there’s general agreement that there’s a problem with the lone objector. What there’s not general agreement on is that there’s any cure for that which is not worse than the disease. A sole objector, together with the assumption that members are working in good faith, I don’t think is a problem. The danger is that there is a sole objector that everyone else suspects is not objecting in good faith. And, therefore, the thing that I was suggesting was that the supporter, if you will, is not so much supporting that the proposal should not proceed, and is not seconding; I think those are both misleading ways to put it, even if procedurally they are correct. The way to put it is that someone else on the committee (and it’s fine to say not from the same ECMA member, if we go forward with this suggestion) agrees that the objector is objecting in good faith. As long as the objector is objecting in good faith, I think that deals with the only legitimate issue with the sole objector. And I would certainly object to anything stronger than that.
+
+MM: And like I said, the main reason I think this whole direction is counterproductive is that under the current rules, we all get to work on the problem. When there’s a sole objector in good faith, they’re normally objecting to the particular solution to the problem. They are usually not objecting to the idea of the underlying motivating problem being solved somehow. And I have seen this over and over again—which I think TC39 is brilliant at: let’s see if we can find some other way to solve the problem that overcomes the reason why the objector is objecting. And then we move forward. Any attempt to weaken that distracts from technical work and focuses activity instead on political work: can I get somebody else to support my objection? And that’s just counterproductive. So I object to this whole thing.
+
+MLS: MM, I think both you and I have been subject to people that have objected in bad faith.
+
+MM: Yeah.
+
+MLS: And we have seen it.
+
+MM: I agree. Let me respond immediately to that.
+
+MM: Every case where I have been blocked in bad faith—that I believe was in bad faith; obviously, there’s no objective test—has been by a browser-maker. And I see SYG has an item later on the queue about the implicit veto that browser-makers have anyway. If there’s no way to overcome that, there was no way to have overcome the bad-faith objections that I’ve been subject to.
+
+MLS: So for the voice of support for a sole withholder: I would like somebody, if they are not willing to object themselves, to assert or offer to the committee that they believe the objection is in good faith. And I do agree with you that we want a collaborative process as we evolve the language. And you’re right, good faith is a subjective thing in most cases.
+
+MLS: Although, I think, there have been cases in the past where it was pretty clear to a majority present that it was bad faith.
+
+MM: Since I did mention that I think I’ve been blocked in bad faith by browser-makers: it’s not someone on the committee at this moment. I am not saying that about anybody here now.
+
+SFC: Yeah. So I largely agree with the perspective that MM is bringing here. And I just wanted to ask: it seems like the real problem is a delegate acting in bad faith, by some definition of bad faith. And it seems to me like that’s a problem more for the code of conduct committee than anything else. If there’s a delegate acting in bad faith, then we kind of have a process for handling that.
+
+MM: I did not bring the particular case to the code of conduct committee, and would not, because I can’t imagine that would have been productive.
+
+MLS: Yeah. I agree with MM there. I believe that there have been cases where I thought there were code of conduct violations, but I didn’t think it was worth reporting. We have seen in the past—and it hasn’t been, I would say, in the last several years—we have seen in the past where withholding consensus has caused somebody to stop attending, whether they were the champion of something, or even just a bystander. And obviously we also have people—we talked about this in Seattle—we have people who have more initiative to speak up, and others that are more reticent. And we have to take that into account if we want all voices to be involved in the technical discussion in the committee.
+
+SFC: Yeah. I mean, all I am saying there is that it feels like, if the problem is really acting in bad faith, then maybe we should look more into that; I agree with what MM said, that’s a direction we should look more into, handling it from that angle.
+
+MLS: And I agree with MM: I don’t want the cure to be worse than the problem.
+
+PFC: I would like to register my explicit disagreement with the assertion that the status quo doesn’t admit any politics. Either way you slice it, the process is a political process, whether you have sole dissenters or not. If you have sole dissenters, there’s an intense amount of politics around things like: which ECMA member is that dissenter from? How much soft power do they have in the committee? I agree with the goal of building a process that minimizes the politics and maximizes the technical discussions we can have. I just disagree that the status quo is that process.
+
+SYG: So MLS, I want to entertain this hypothetical to the extent that you would like. If we all do agree that de facto vetoes by the browsers kind of do exist—let’s not bring individual technical stuff into it. Let’s just say, for this hypothetical, that we somehow get a top-down direction, for some reason completely out of my control: because of product such-and-such, we cannot agree to some particular proposal. It doesn’t have anything to do with the technical merits at all; some other constraint forces it—it’s not shippable for us, or something like that. And for this hypothetical, this is a problem only I have. Apple doesn’t have it, Mozilla doesn’t have it, other implementations don’t have it. In that world there is zero technical reason, in this hypothetical, for any other implementer to support the veto. It’s not technical. It’s from the top down.
+
+SYG: Given that, if we can’t have that veto, we are still going to end up in a world where the feature might be non-interoperable, because due to external constraints I can’t ship the feature. How do we address that failure mode?
+
+MLS: So I think if you’re going to act in good faith, you would let the committee know that that is the issue. That you can’t share—
+
+SYG: [inaudible]
+
+MLS: Without revealing any internal information, you can say “we can’t ship this”, with whatever justification you can provide; then the committee knows that, and the committee can respond to that. Various implementers of various technologies have what they see as their market, and they do or do not agree to certain changes in standards. But communication is the most important thing here: “this is why we are not supporting this, and this is what we would support”.
+
+SYG: Typically, I think hypotheticals like the one I brought up will be exceedingly rare, and I am supportive of this change. But there are some new edge cases that may arise that take up process discussion, which I wanted to point out. That’s all.
+
+MLS: Okay.
+
+WH: I am concerned that we’re focusing too much on folks objecting to things in bad faith, and we’re throwing the baby out with the bathwater. There are a lot of scenarios which arise much more frequently. Those include proposals which simply haven’t met the entrance criteria for the stage they are going for, or bugs which have been identified in proposals, which should be fixed before advancing. And this change would not be helpful in those situations. I think the reasons for not advancing at a particular meeting are more important than how many delegates state those reasons.
+
+MLS: WH, wouldn’t you say that, for example, if there’s a bug found, or there aren’t enough reviews done, it would be easy to get a second person to agree to block? The reasons for withholding consensus would be clearly stated and recorded, and those could be easily overcome in that case. Also, with a bug: if it’s pointed out, other people can see that there’s a bug, they would support withholding consensus, and the bug would either be addressed in the spec or algorithm or whatever, or, if it’s a fatal flaw, I think that could be shown to the champions.
+
+WH: That has not been my experience. Typically what happens is: somebody identifies a bug, and the other delegates are not really familiar with it. They need to think about it. There is no time left in the timebox to explain the bug. No, you would not get support from other delegates for that. This change is counterproductive in such situations. And it’s also unnecessary, since in that situation nobody is trying to actually block something from getting into the standard; it’s just not ready at that meeting.
+
+MLS: I wouldn’t say that’s true. I think things have been blocked with a desire never to bring them into the standard.
+
+WH: You misunderstood me. I am talking about the more common situations, in which the discussion identifies a problem and nobody has had time to work on fixing the problem yet.
+
+MLS: But again, I think that is something that the others in the room can be made aware of, and it’s not a huge amount of work to convince them to also support blocking. And that blocking would be considered temporary.
+
+WH: This asks people to block based on things they don’t understand. I am very uncomfortable with that.
+
+CDA: All right. Thanks, WH. That’s it for the queue.
+
+MLS: So I sense that we’re not willing to move forward with even part of this, which is that we would have somebody who supports a sole withholder?
+
+MM: That’s correct. I am not willing to—I think that we’re fixing a non-problem, and even a step in this direction is worse than the disease.
+
+MLS: Okay.
+
+CDA: Thank you. If you would like, MLS, you could formally ask for consensus for your proposed change. But if I am a betting man, it doesn’t sound like it’s—
+
+MLS: I don’t think I need to ask that question, because I think I already know the answer.
+
+CDA: Okay.
+
+MLS: MM and WH’s last two comments were sufficient to convince me of that. But I think that other comments made during this discussion show this is a problem that does need to be addressed.
+
+SYG: To MM: we heard a direct disagreement with your understanding of the status quo. I wonder if you have any thoughts on that—your interpretation of how political the work required in TC39 is, is at least not universally shared.
+
+MM: Okay. Any time you get human beings together, under any circumstances at all, there are some politics. I don’t disagree that in the status quo there are some politics. But I also agree with the point made at the same time that we shouldn’t do anything to amplify the politics at the expense of technical points. And any step in the direction that MLS is proposing amplifies politics and diminishes good-faith technical involvement.
+
+SYG: Can you explain the thought process that makes you think that? Compared to the current way—and here I agree with more of what PFC had said—one way where the single veto, or at least the threat of a single veto, has turned extremely political is that it focuses all the engagement on either heading off, and anticipating, the folks who repeatedly like to block, or reactively dealing with it after the fact. It concentrates a lot of procedural and political power in the hands of those folks. And that’s where I see a lot of the political work. It changes from proposal to proposal, is what I am saying; it’s not a constant thing that is always happening in the committee at large. So I think it’s very disproportionate, and some people get exposed to it a lot worse than others, especially those who need to have some involvement in every single proposal. So I would like to understand it in comparison with that. How does MLS’s proposal make it worse?
+
+MM: Okay. So, first of all, with regard to those issues, I am glad this slide is on the screen. MLS’s check marks—the status quo—are really essential to making the current process as reasonable as possible: the objector has to support technical engagement, has to make their reasons clear, and has to engage with the delegates to see if there’s a way forward for the purpose of the proposal that meets the objector’s objections. I think all of that is great, and I think we have been doing that. And beyond that, I frankly did not understand the question.
+
+MLS: So MM, let me add that I think we do that almost all the time. There are times when we don’t do that. And that gets to the political side.
+
+MM: Okay. When you say we should do the things that are political—
+
+MLS: Yes.
+
+MM: Is it in our “how we work” documentation—the check mark things—explicitly?
+
+MLS: I would have to look, but yeah, I think it’s the general ethos of the committee.
+
+MM: How do we write down the check marks of how we work in a way that is not more damaging than the status quo?
+
+SYG: I mean, I think the plus sign here is the proposal to make it better. Right?
+
+MLS: Yeah.
+
+MM: I don’t understand why you think that would make it better.
+
+SYG: Because I read this as—
+
+MM: What is the problem with the status quo? Can you explain the problem with the status quo, such that the plus-sign thing would address that problem without introducing worse problems?
+
+MLS: Because one person could have a non-technical, political reason to want to block something. It’s happened in the past, we have seen it; and there’s no technical resolution that will allow something to move forward.
+
+MM: Okay.
+
+MLS: If you have a second person added to that, whether they support it or also withhold consensus for maybe the same or a different reason, and they articulate it, you reduce the likelihood that it’s done for non-technical reasons, in my mind, especially if they are from different member companies.
+
+SYG: The way I phrase it is: if it’s a good technical reason to object, you should be able to convince at least one other person of the technical objection. If it is not a technical objection, then you have a lower likelihood of being able to convince someone else to also see your point of view, because it’s not actually a technical objection.
+
+MLS: This is what I have been saying for over a year.
+
+MM: Let me come back to a point that is certainly always prominent in my head when we discuss this. I did not understand MLS’s answer to SYG’s earlier question, which is about the unilateral browser veto as a reality on the ground. A browser-maker can unilaterally block, because the committee would do a disservice to everyone by proceeding to put something in the standard that a browser-maker has announced they are not going to implement. So I did not understand—that to me is a primary issue here. Any attempt to weaken the ability of anybody other than a browser to block, without weakening the ability of the browser to block—which is impossible—simply disempowers the community compared to the browser-makers.
+
+SYG: You are missing the converse of this. Browsers don’t only have a de facto veto, but a de facto anti-veto: we can unilaterally ship things as well.
+
+MM: Yeah. That’s happened. And I don’t—
+
+SYG: There is no weakening here.
+
+MM: That’s happened. That’s the reality. I agree that—I mean, in general, one of the things that I think is right about the whole TC39 phenomenon: like most standards groups, we have no enforcement power. If we move forward in a way that is at odds with what prominent JavaScript engine implementers agree with each other to implement or not implement, we make ourselves irrelevant. So yes, the browser-makers do have the unilateral ability to implement something anyway. And we have seen that kind of thing happen, in fact. I don’t understand what the implication of that is; I haven’t played that out.
+
+SYG: Your argument was that MLS’s proposed change here would weaken every other non-browser delegate’s withholding power. As I understood your argument: because browsers de facto have this unilateral single-veto power, the process should also enshrine and give every other non-browser delegate the same power to have a single veto. Is that a fair characterization of your argument, first of all?
+
+MM: Yes.
+
+MM: I think I see where you are going, so let me get at that. This goes back to—let me play out some more of the implications of disempowering the committee, and what it means for the committee not to have any enforcement power.
+
+CDA: MM, sorry to interrupt, we just have a couple of minutes left before the break. Please continue.
+
+MM: So are there other things on the queue? I can’t see it.
+
+CDA: No.
+
+MM: Okay. So the browsers got together at one point, because of disagreements with W3C, to form WHATWG. And in so doing, they made it clear that they were going to proceed with agreement among the browser vendors, leaving the non-vendor voices that were in W3C, rather than in WHATWG, powerless. And that was publicly visible, as it should have been. The power that TC39 has as a standards process comes from the fact that the engine-makers and the community are both on it. The browsers can certainly go off and do another WHATWG, or in fact go to WHATWG, to decide among themselves—but then they have to make it public that they are making a decision just among the browsers, leaving the community out of it. And that should be costly: costly in the public visibility that the browser-makers have decided to do that.
+
+SYG: Sorry. And that is an argument for not accepting MLS’s proposed change here? I think I am missing a few steps.
+
+MLS: I’m not sure how MM’s comments are tied together.
+
+SYG: Yeah.
+
+DE: So TC39 works well today because we collectively do this technical development in alignment within the committee. If we stopped doing things, then things would be done in other places. But we can preserve our position and our ability to contribute to the web platform by continuing to operate effectively, making good designs, and coordinating on them.
+
+### Speaker's Summary of Key Points & Conclusion
+
+- Some delegates were in favor of these changes or something similar.
+- It is thought by some on the committee that going forward with this process change would be worse than the status quo.
+- It makes sense to continue discussing our consensus process at future plenaries.
+
+## Stage 1 update for decimal & measure: Amounts
+
+Presenter: Jesse Alama (JMN)
+
+- [proposal](https://github.com/tc39/proposal-decimal)
+- [slides](https://notes.igalia.com/p/tc39-2025-04-decimal-intl-integration#/)
+
+JMN: Okay. Good afternoon, good morning, good evening. We are talking about the decimal proposal. There is a measure proposal in here too; as will come up in the presentation, the decimal and measure proposals are, at least in part, being developed side by side at the moment. My colleague BAN, who is on sick leave, has been working on the measure proposal for some time; you may remember it coming up in the November plenary. He’s with us in spirit today, as he has been helping with decimal and with progress on the measure proposal.
+
+JMN: The status quo is that we have settled on a lot of the semantics in the API for decimal. That’s not new; we have had that settled for quite some time now. In the meantime, the internationalization side of things is a work in progress. What I am here to tell you about today is some of the progress we have made there.
+
+JMN: We think we have settled on a solution to many of the problems there. This presentation is a bit awkward, because I am deliberately going to avoid naming a class that I propose to add to decimal. You see, I am calling it `Decimal.Something`. It’s a bit tongue-in-cheek, but I hope you understand my intention.
+
+JMN: The point is that the name is important, and we don’t quite know what name we should use yet. The name is TBD. Maybe things like `Decimal.Amount` or `Decimal.WithPrecision` would be good. The name is a bit up in the air. We welcome any suggestions that you might have, but I hope we can avoid too much bikeshedding about that. If I say “amount”, that’s usually what I mean, but in fact that’s not really the official name here. Think of `Decimal.Something` as a placeholder.
+
+JMN: And this idea of a `Decimal.Something` or `Decimal.Amount` is a small class that really rounds out internationalization. This class can unblock us on some issues there.
+
+JMN: And looking forward — or looking sideways, depending how you think about it — if this class is accepted, then this is something that the measure proposal might also use. It’s something that the measure proposal might add fields to, to store things like a unit or currency indicator.
+
+JMN: Just to recap what the issue is with decimal: for decimal itself, I think the story is clear. We are interested in numbers. And when we say “numbers” in that context, we settled on the notion of a point on the number line, a mathematical value. That has a number of use cases, and we have settled on IEEE Decimal128 for it. That’s all kind of old news. But for the internationalization story, when we think of decimal, there’s a bit more to it. We need a concept of a number that somehow knows its own precision. So think of it as something like a number plus a precision, or a sequence of digits of a certain form, if you like.
+
+JMN: You might say, well, what is going on? Why can’t we just use JS numbers for this? That’s kind of no big deal, right? The problem is that using numbers with Intl today is error-prone, especially with NumberFormat and PluralRules and the mixture of the two; a new type, if we create one with decimal, shouldn’t be created with the same problems. And these needs for internationalization exist in parallel to the needs that exact decimal values—that is, essentially mathematical values—currently meet. So we have a kind of version of Decimal128; we call that Decimal. We think we understand the use cases and the needs there. But for internationalization we need a bit more, and that’s what we are here to tell you about.
+
+[slide 4]
+
+JMN: The idea has been bouncing around for quite some time in various forms. In the last plenary, we talked about the overlap between the decimal and measure proposals. And in fact, as a somewhat radical suggestion, we even put the idea of merging the proposals on the table, thinking that, well, their use cases and needs overlap to some extent; maybe that overlap is large enough to warrant thinking of this as one proposal. But the consensus was that they should remain separate. The use cases are too different; they might overlap, but the non-overlap here is big. So we keep them separate. One of the suggestions that we had there for handling the intersection between measure and decimal was to have something like three classes: something like Decimal, which we already had; some kind of number-with-precision; and Measure. But that didn’t get much traction either. And so coming out of the last plenary, we were stuck. We had the internationalization use cases, but we kind of didn’t have a path forward.
+ +[slide 5] + +JMN: But what I am about to tell you about today is something I think might be a way forward worth thinking about. The idea here is to take more seriously the idea that measure and decimal are just separate proposals. The thinking is that if we want to talk about units or currency codes, that naturally brings in a number of issues that can be separated from just talking about the underlying number. Decimal the proposal is all about numbers, with or without precision. So this discussion of units, although related, feels like a kind of foreign object added to the discussion. It’s maybe interesting to think about, but it’s not really about just numbers by themselves—which already have their own package of problems and issues. + +JMN: So the thinking is that, with this `Decimal.Something` or Amount, the measure proposal could then take that ball and roll with it. Rather than introducing a new class, what we have in mind is that the measure proposal could expand `Decimal.Something`. + +[slide 6] + +JMN: The API for this thing is very small, very thin; it’s deliberately kept quite minimal. We construct the data using convenience functions on `Decimal.prototype`. Maybe we should allow construction using `new` too. Maybe there should be a static Temporal-style `.from` method. That’s a bit open for discussion—interesting questions to think about there. There’s an accessor for the underlying decimal and the precision, of course, and just a toString. Critical for us is that there’s no arithmetic here. The thinking is that we already have arithmetic sitting in decimal; it would be a bit awkward to reproduce that somehow in `Decimal.Something`. Besides, we have discussed many times that propagating the precision of numbers through arithmetic is a bit odd in IEEE754, and we know there are other ways to do it, so we just skip it and say there’s no arithmetic on these things. The main thing is that we have some kind of integration with NumberFormat and PluralRules. And again, just like decimal, our `Decimal.Something` would be immutable. + +[slide 7] + +JMN: So again, I have talked about how we have a bit of a tongue-in-cheek placeholder name here. These method names are also placeholders, but the thinking is: if I have a decimal, then I can create one of these `Decimal.Somethings` using some kind of method that attributes, or just imputes, some kind of precision to the thing. So, for instance, if I have `new Decimal("42.56")`, I can say: let’s consider that number as a number with two significant digits. Then we are essentially talking about 42 there. Or I can say: take the same number and consider it with 5 fractional digits. Let’s say I know out of band that whatever number I have has a precision of 5 fractional digits; I impute that to the number, and I am talking about 42.56000. The names here are awkward, I admit. They’re placeholders, just to get your creative juices flowing. + +[slide 8] + +JMN: So we think we have made a bit of progress with the internationalization side of things. There’s been some discussion in the champions call, which happens every couple of weeks—you should see it in the TC39 calendar, and there is also a channel for this, if you would like to join the discussion. The current thinking is that PluralRules shouldn’t handle bare Decimal values.
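+ +To make the shape on slides 6–7 concrete, here is a rough sketch of how the placeholder API might read. Every name here (`Decimal.Something`, `withSignificantDigits`, `withFractionalDigits`) is one of the presentation’s placeholders, not settled API: + +```js +// Hypothetical placeholder API, following slides 6–7. +const d = new Decimal("42.56"); + +// Impute a precision to the mathematical value (no arithmetic on the result): +const a = d.withSignificantDigits(2); // denotes 43 (rounded; see WH's correction below) +const b = d.withFractionalDigits(5); // denotes 42.56000 + +// Accessors for the underlying decimal and the precision, plus toString: +b.decimal; // the underlying Decimal +b.toString(); // "42.56000" +```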
+ +JMN: There was also a discussion about whether NumberFormat should continue to handle bare Decimal, which it did in earlier iterations of this proposal; we have thought about possibly banning bare decimals from NumberFormat as well. + +DLM: I’m sorry, JMN, there are two clarifying questions in the queue. + +CDA: They aren’t meant to be asked immediately, but WH, did you… ? + +WH: Yes. On the previous slide—yeah, that one—you said that the first example produces 42. Shouldn’t that be 43? + +JMN: That one would produce 43, because there would be rounding; we look at the 5 to do the rounding. Sorry about that. + +WH: Okay. + +JMN: Does that clarify? + +WH: Yeah. So this then raises the issue of rounding modes and how to specify them. But that’s a different ball of wax. + +MM: Okay. I have a question. There are two different methods here, and the meaning of each of the methods is clear. But in terms of the representation of precision within the object that the methods produce, are you thinking of two different kinds of representation of precision, or do both of these somehow produce the same kind of representation for precision? + +JMN: Very good question. The thinking at the moment is that there’s just one notion of representation: in the current discussions, we are working with significant digits. That’s the one and only underlying notion. If you want to use fractional digits, there’s a calculation to convert that. Does that answer your question? + +MM: Yes, it does. Thank you. + +JMN: So we were talking about how this thing, this amount, this `Decimal.Something`, fits into the internationalization picture. The thinking is that wherever we used to have Decimals sitting in Intl, namely in PluralRules and NumberFormat, they should be banned. It’s perhaps a point of discussion whether some parameters should also become mandatory. In general the thinking is: ban bare decimals, and handle these cases with `Decimal.Something`s instead. So the idea is that for PluralRules and NumberFormat, `Decimal.Something` is going to be the thing that contains the information that is likely needed in the internationalization use cases. + +[slide 9] + +JMN: We also have a bit of a story here about how this would fit in with the measure proposal. I said this was going to be about decimal, and that’s like 95% true; the discussion would be incomplete if we didn’t say something about measure. The current thinking is that the measure proposal can be slotted in later, with some kind of unit or currency attached to an amount. So let’s look at a bit of code. If we have some decimal, 5.613, we can attribute some kind of unit to that. Or we can, perhaps, convert it to an amount first, and then attribute a unit later. If you look carefully at the number there, before "kilograms", there’s a slight difference. Again, we’re still bikeshedding a lot about the names and the exact API shape, but we think that something like this should be possible. + +JMN: What we are thinking here—there is an assumption that decimal happens before measure, or at the same time. But measure doesn’t actually need decimal; if decimal doesn’t happen, we can still work on measure. + +[slide 10] + +JMN: That’s it. This is just a short update about our current thinking. For those of you who were worried about our suggestion of merging the measure and decimal proposals: we’re not doing that, they remain separate. There’s some spec text available, if you would like to take a look.
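+ +A rough sketch of the slide 9 idea, to make it concrete. Everything here is a placeholder shape, not settled API—the `Measure` constructor, its options bag, and `withFractionalDigits` are all illustrative: + +```js +// Hypothetical shapes for attaching a unit, per the slide 9 discussion. +const d = new Decimal("5.613"); + +// Attach a unit to a bare decimal (precision inherited from the decimal): +const m1 = new Measure(d, { unit: "kilogram" }); // "5.613 kilograms" + +// Or impute a precision first, then attach the unit later: +const m2 = new Measure(d.withFractionalDigits(4), { unit: "kilogram" }); // "5.6130 kilograms" +```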
+ +JMN: In our view, if this `Decimal.Something` or amount looks good to the committee, or seems reasonable, then I think we are in a pretty good position to ask for Stage 2 for decimal at the next plenary. And that’s it. I am very interested to hear what you have to say. I will take a look at the queue. + +WH: I am a bit confused about some of the points raised here. You said that there wasn’t much interest in having 3 classes, but what I see here is 3 classes. The only thing that changed is that the name of one of the classes has moved to be on the Decimal object, rather than being an independent class. + +JMN: Yeah. I understand the concern. I think the current thinking is to lean towards having the two classes, the idea being that— + +WH: Which two classes? + +JMN: The Decimal and then the `Decimal.Amount`. The idea is that we would add some kind of unit and possibly other methods later, in the measure proposal. Or we can also think again about the three-class solution, which naturally arises in this case as well; in other words, I am prompting us to rethink that also. + +WH: Okay. I have some observations here. Precision is not necessarily specific to Decimal. You could have Numbers or other types with precision also, so having Amount use Decimal might be foreclosing options here which we don’t necessarily want to foreclose. + +WH: You also mentioned that some of the internationalization methods might throw if passed a Decimal instead of an Amount. How do those methods behave when passed a Number—do they also throw if you pass Numbers? + +JMN: Numbers are fine if you pass them in. + +WH: So they would accept Numbers but not Decimals? + +JMN: Yeah, you are right. I mean, there is a bit of ambiguity there. We could accept the Decimals, but the thinking is that this might open the door to the kinds of errors and footguns that Number currently allows. So yes, it’s allowed to use numbers, but the thinking is that this is a chance to perhaps fix some issues, or prevent some problems that would come up. If we can see a clear need for allowing Decimals, that’s also fine. It’s just something we are leaning towards right now. + +WH: It’s not clear to me that providing bare Numbers or bare Decimals is always a bug. I can think of many cases where it just makes sense. There are cases where you might want to specify precision, and other cases where you don’t. + +WH: The other concern I have goes back to having the three classes, which is that, once you add precision and units, it’s more ergonomic to have the operations of setting units and setting precision commute with each other. By separating the classes, you make these non-commutative: you must set precision before you set the units, rather than the other way around. Things become awkward. + +JMN: Yeah. That’s an interesting consideration. I am not sure I have a solution off the top of my head, except things like allowing some kind of options bag as an argument where both can be specified. But yeah, you are absolutely right. We should perhaps think about that. + +WH: Okay. + +NRO: Yeah. When WH was asking about Numbers, and whether it’s weird that Numbers work with Intl while bare Decimals do not: passing Numbers to the various Intl classes can cause problems, because you need to make sure to pass the same precision options to each of the separate Intl classes, and it’s just very easy to get that wrong if the precision doesn’t travel with the number.
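+ +For reference, the long-standing footgun NRO is describing can be reproduced today with plain Numbers—the formatter and the plural selector disagree unless the same digit options are passed to both: + +```js +// The "1.0 stars" bug: NumberFormat and PluralRules each get their own options. +const n = 1; +new Intl.NumberFormat("en", { minimumFractionDigits: 1 }).format(n); // "1.0" +new Intl.PluralRules("en").select(n); // "one" → would render "1.0 star" +// Fix: pass the same precision options to PluralRules too: +new Intl.PluralRules("en", { minimumFractionDigits: 1 }).select(n); // "other" +```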
+ +NRO: It also depends on what locale you are testing with: you might not notice the mistake, because it might show up in one locale and not in another, or be common in one locale and not the other. This is a long-standing issue with `Intl.NumberFormat` and PluralRules. Which is why, in the numerics calls, the thinking was: if we are adding a new number type, let’s make it difficult to make a mistake. Maybe there could be a way to say, “here, I am actually sure I am passing a decimal and I do not care about precision”—but we should direct people towards doing the safe thing, which is the opposite of Numbers right now. + +WH: Can I reply to that? + +NRO: Yes. + +WH: I am not sure I believe the claim that providing a precision is safer. I can think of plenty of instances where you don’t know anything about the Number or Decimal you are providing, and adding a precision can make things worse. It can change the value. It’s unclear what happens if you escape into exponential notation. + +NRO: Yeah, you are right. I still think we should make it explicit when you don’t want to define the precision, rather than making that the easy default. + +WH: Yeah. I want there to be a simple way of _not_ specifying a precision, if that is something that makes sense for that application. + +JMN: If I may reply briefly to that: we still have things like toString and toPrecision and so on. So it is possible to have a decimal and just format it without any knowledge of the digits. I guess the question is: does Intl also want to be totally open to decimals? And I guess that’s still something we need to resolve. + +NRO: I think I was the only one pushing for three classes, just because it felt like a better solution to me, I guess. I have been convinced multiple times that it’s absolutely fine to have two classes, and this discussion keeps coming back. But nobody else spoke in support of three separate classes. + +SFC: My comment is that the two-class solution allows the steps to be commutative, as WH mentioned. Also, a decimal without an explicit precision has, in effect, a precision given by the number of significant digits in the decimal. If you specify the unit on a decimal without specifying precision, you’re inheriting the precision of the bare decimal. So I do think that’s a well-defined operation. + +EAO: I am echoing some of what SFC just said, but maybe from a different angle. The way I see it, one of the main reasons for Decimal is to better represent values that we are getting from external sources. Those values are then more exact with decimal, and therefore have a precision that we don’t need to define. These values really ought to be formattable without needing to be specially wrapped and having their precision determined. + +JMN: May I reply to that quickly? Are you also going along with WH's suggestion—the idea that Intl should accept bare decimals? Do I understand you correctly? + +EAO: Yes. I disagree with the reasons for not supporting bare Decimal in `Intl.NumberFormat` and `Intl.PluralRules`. + +SFC: Yeah. I mean, this is a discussion we should have in TG2, once we get to that point. But there are definitely valid arguments to be made that bare decimals are formattable finite values by themselves. Whether that’s a natural place to draw the line—that’s a discussion we should have. + +MM: Okay.
So I think what I am about to say overlaps with a lot of what has already been said; I won’t repeat the rationale. The position I find attractive is the two-class one, with a Decimal class and a measure class where, for the sake of both immutability and commutativity, the precision portion of a measure and the units portion of a measure are each optional. I don’t want to go into whether it should be the same as the underlying number, but the key thing is: there’s nothing about any of this machinery that should be specific to decimal. Decimal is just a number without precision. The thing that adds precision for display purposes to a number should simply apply to all numbers, including floating point numbers and BigInt. One part of my rationale is that from our point of view, as a participant in the blockchain crypto ecosystem, we would certainly want to represent `numbers.withUnits`, where the units could be a cryptocurrency; but we would never use decimal, because even with a 128-bit mantissa we would not want to take a chance on loss of precision. You would do what is already the convention in crypto, which is to take the smallest quantum of the currency—it could be incredibly small, like the satoshi—and just use BigInts. BigInts plus display information plus units saying which currency: that is the only thing we would use for that use case. + +JMN: I see that NRO has a response to this. + +NRO: Yeah. When this first came up, maybe 6 or 8 months ago, we did discuss the various number types—there was something like a Number with two significant digits, and we would provide something similar for each type. The feedback was that it was just a lot of stuff. And I think this presentation, showing the constructor, would potentially leave the door open for something like `BigInt.Something`, if that’s motivated. + +MM: I think, as I mentioned, anything here that is specific to decimal is just not motivated. I would object to merely leaving the extension to other number types on the table while something specific to decimal proceeds forward. + +NRO: Okay. I think there is value in having the class be a stronger type—you have an object that you know contains a decimal with something, so you don’t have to look into the object to figure out what type it is. I can think about how to use this with TypeScript. + +MM: TypeScript has parameterized types. + +CDA: SFC? + +NRO: Yeah. I have [inaudible] removed it, because it’s what Shane said. I was going to make the second point as a reply to this. + +SFC: Yeah. We looked at the polymorphic approach a while ago, versus the decimal-backed amount approach; I asked FYT and others about the implementation difficulty of this. Having a single amount class where we know what the backing type is means that that class can have properties that a polymorphic amount wouldn’t be able to have. When I say polymorphic amount, I mean an amount that has a numeric field that can be of many different types. It means that basically every interaction with that type then needs branching code, just because it has to have different behavior based on the underlying numeric type, and it will likely use more memory, because you have discriminants and such. Another advantage of a decimal-backed amount is that the precision is free to represent, because Decimal128 already represents the number of significant digits.
It doesn’t require adding any slots, which is another nice advantage. Yeah, I guess that’s my comment. + +EAO: Replying, and maybe asking a clarifying question for MM here: given that your interests and needs are for working with numbers that have higher precision than what decimal can provide, you’re maybe natively working with BigInts, but you would like to format these, and presumably when formatting them you would like to be formatting not an integer but a number with a fraction. If I understand the case you are presenting, are you effectively saying that you would need to be able to represent the number, for instance, as a numeric string, or that you would need a dividing scaling factor to be applicable somewhere, in order for this to work for your purposes? + +MM: The answer is yes, and I will agree that that weakens my case that one notion of measure will cover my use case. The point that I was making, though, is that the combination of units together with an underlying number—there’s certainly nothing about that that is specific to decimal. And the general notion of precision combined with some underlying number is also not specific to decimal, even if our particular use case for BigInts takes us in a bit of a different direction. Certainly for Numbers, the notion of a way to associate precision for purposes of display makes perfect sense. And with regard to representational economy, I think that’s exactly the kind of implementation detail that programming language implementations generally strive to hide from users. Especially for a language at the level of JavaScript, rather than the level of C, you could have a measure class that internally, behind the scenes, takes advantage of a more economical representation when the underlying number is decimal, if a given implementation chooses to do that. But I don’t see any reason to make that visible to the user. And certainly some implementations would choose not to do that, and I think they should be welcome not to. + +MM: One more point on this. The pun that you are doing—using the precision that is inside the non-normalized Decimal128 representation—is a semantics-violating pun, because the actual IEEE semantics of that implicit precision is the number of trailing zeros, not the number of trailing digits or the number of significant digits. By using it for display purposes, for a number of significant digits or a number of trailing digits, you are making use of a representation whose documented purpose is something else. + +CDA: REK on the queue: "plus 1 to the BigInt and precision in the context of cryptocurrency". + +DE: Yeah. I am not sure about the cryptocurrency use case—isn’t that one with fractional digits? Does anyone have use cases for other kinds? I could picture this Number-precision use case, but I want to question whether this is independent from decimal. Precision is base-10 precision, especially of fractions. It does go in the positive direction as well. And I would dispute the comment that MM made about this being an invalid pun on the IEEE data model. We convert to the quanta in IEEE, and that’s a reasonable representation.
Precision here is a base-10 concept, and the whole point of the proposal is to encourage you to move away from using Number to represent values that logically contain these base-10 fractional parts and user-facing precision. In a sense it is analogous: if you are using Number for that, you are going to be broken. + +MM: I’m sorry, I don’t understand that comment at all. People use binary floating point with rounded decimal displays, to different numbers of significant digits, all the time. Is that a use of floating point numbers that we should, in general, try to discourage—that if you are going to display a number with decimal digits at all, you shouldn’t be using floating point? That seems like too great a— + +DE: To a large extent, yeah. Logically, what you should be doing is kind of two phases: one, round to an appropriate decimal, and then display the decimal. It’s okay that we have all this tradition of those operations being combined and grouped. But I think the whole point of the decimal proposal is to focus on giving accuracy and reliability to the common case, which isn’t the result of a sine or something like that. + +MM: If the IEEE definition is all about trailing zeros, I understand that argument. But for very good reasons, that’s not what we’re doing here. We are interpreting it as a number of significant digits, and once you are drawing significant digits, you are seeing an approximation anyway. I don’t see why a decimal display approximation of a floating point number is less sound than a decimal display approximation of a decimal number. + +DE: Maybe WH can clarify more about the IEEE alignment. I don’t think it represents a number of trailing zeroes in IEEE either. I think significant digits are interchangeable, with respect to a particular decimal, with the IEEE quanta concepts. We could allow this, but it doesn’t seem motivated, given that the main use case was this crypto thing, which is not what the proposal is for. And— + +MM: Sorry. I withdraw the crypto example as anything but illustrative, and agree with the ways it doesn’t fit with what I was saying. I certainly don’t withdraw floating point numbers. There’s existing software that does this with floating point numbers, and I consider that software to be correct. I do not want to retroactively declare that software to be incorrect. + +DE: We have APIs for dealing with that: NumberFormat takes various precision parameters, and you can format Numbers this way. Even though something is logically sound and has a well-defined meaning, when adding something to the standard library we are making a judgment call about what is especially pertinent. And I think we’re allowed to make judgment calls that don’t correspond exactly to “is this a logically meaningful thing or not”. + +MM: You are certainly allowed to make such judgment calls. I am free to arrive at the opposite call. + +DE: That's fair. Just to sum it up: the main use case I heard from you is something you might today do with NumberFormat—providing the precision and giving a double—and having that in one unit is a logically meaningful thing. And that’s that. + +MM: Yeah. And to flip it around: to the extent that we’re willing to live with NumberFormat providing the display precision, why not use NumberFormat to provide the display precision for decimal values as well? + +DE: Well, that’s—yeah.
SFC has made that argument as well—that often this is logically wrapped up with humans interpreting meaningful decimals in a way that— + +MM: Okay. So once what is being displayed is an approximation of the underlying number, I don’t see the distinction between the case being made for doing it with decimal versus with floating point numbers. + +DE: Okay. I will leave it at that; I think I made the argument. + +WH: There are a few things that I think are incorrect or have been neglected here. There was the claim made that we could optimize the `Amount` class to use the IEEE quantum precision. That doesn’t really work, because the precision varies depending on the magnitude of the number you have. So if we want simple semantics for what the precision value could be, then we must store the precision separately. Trying to fit it into an IEEE quantum is just premature optimization, which wouldn’t work anyway. + +WH: The other crucial thing that we’ve not even discussed here is where rounding takes place and how that rounding works. Rounding modes are important for a number of applications. I don’t understand how that would be specified in this proposal, and that makes a huge difference in what representations we can use in the `Amount` class. + +DE: Yeah. Could you work through an example of where this doesn’t line up? I am just trying to understand, in terms of the IEEE logic, why we can’t use the quanta for the precision. Is it too complicated to figure out which would apply? + +WH: For example, denormals. + +DE: Do you think you could talk through an example, just so I could picture it better? + +WH: With the semantics I am imagining for precision, you can set the number of digits after the decimal point independently of the value you have. This is not true for IEEE quanta. We only have 10 minutes left; I don’t want to digress into explaining examples of that. + +CDA: Right. We have less than 10 minutes left and several items in the queue. + +EAO: I agree with WH that packing the precision into the IEEE754 representation doesn’t work. Separately, I would like to note that the current `Intl.NumberFormat` supports formatting a string representation of a number, with fractional digits and limits on precision that go well beyond those of Decimal. So, for example, for the use case that MM was presenting earlier, where there is a value with a precision greater than the precision allowed for by Decimal, that ought to still be formattable with a precision. This is currently supported by explicitly setting the precision in the NumberFormat constructor, but it would not be supported by the `Decimal.Something` or `Decimal.Amount` proposed here, given that that value is based on Decimal rather than, for example, on a string representation of a number. + +NRO: Yeah. It was said that IEEE talks about trailing zeros while we talk about significant digits, as if one were an approximation of the other. I disagree with that: you can convert between the one and the other. If you look it up on Wikipedia, it actually talks all over the place about significant digits and trailing zeros, because they’re just interchangeable once you deal with the data. + +NRO: And then I have a question. If you start with a floating point Number and say “interpret this as if it were a base-10 number with this amount of precision”, are there any float64 numbers that cannot be represented as a Decimal number together with some precision? + +WH: The answer is yes. + +NRO: Okay.
Thank you. + +SFC: Yeah. I was mostly going to echo what NRO said, which is that the quantum/cohort representation for precision is equivalent to pairing a normalized decimal with a number of significant digits between 1 and 34. If this is not a true statement, we can discuss counterexamples on GitHub; but as far as my understanding of how this works goes, it is a true statement. Maybe there are edge cases involving subnormals, but for most numbers these two representations are equivalent to one another. + +SFC: I am also next on the queue again. I can also take this offline to discuss with MM and WH. But the— + +MM: I see the question, “are MM and WH motivated by the other use cases not mentioned?”. I can give you a quick answer: although the cryptocurrency case was the thing that sensitized me to this, and I have withdrawn it as anything more than illustrative, my objections are not motivated by anything that has to do with Agoric or anything I want to do with this. It’s that the non-orthogonality of what is proposed, compared to the blatant orthogonality of the underlying concepts, just offends me as a language designer. A lot of my feedback in general is me trying to uphold the quality of the language, whether or not it has anything to do with a particular use case I want to engage in. + +SFC: Yeah. I will just respond a little bit there, MM. In terms of a decimal-specific abstraction here: Decimal128 itself, and most other programming languages that use Decimal128, are able to represent a number-with-precision via the quanta in the decimal representation. And it seems like there’s value in having a type in the language that is able to interoperate with the other platforms and systems that use Decimal128, and `Decimal.Amount` is the natural place to put that. A polymorphic amount is not a natural place to put that interoperability type, because it is very much decimal-specific functionality. + +WH: I would like to understand why we keep bringing up the IEEE754 representation of 'quantum'. I don’t see how it’s connected to anything we are doing here. A use case that doesn’t work is specifying a precision of, let’s say, 15 digits after the decimal point and having that work for any number. I just don’t understand the motivation for trying to force this into the IEEE quantum model. As far as internationalization is concerned, it’s the number of digits you want to display after the decimal point. That could be arbitrary. That could be 40. It could be 15. + +NRO: You could want to represent any precision—say, a number with 1000 significant digits. But in practice, when it comes to showing numbers to users, you don’t deal with that. For a number that has more than 34 digits of precision, you are going to find some other way to explain that concept to the user—for example, splitting it into multiple subunits, like hours and minutes and seconds, rather than a single very long number. So putting a limit on how much precision this can represent is, in practice, when it comes to Intl and showing things to users, not a real limiting factor. + +WH: It is. Even with two decimal digits: reliably emitting two digits after the decimal point doesn’t work if the number is large enough.
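+ +To make the disputed equivalence easier to picture, here is an illustrative sketch (not the proposal’s API). Decimal128 stores a coefficient and an exponent, so cohort members like 1.2 and 1.20 denote the same mathematical value with different quanta; WH’s objection is that a fixed fractional-digit request cannot always be honored within the 34 coefficient digits: + +```js +// Two ways of carrying "a value plus its precision" (illustrative objects only): +const quantumForm = { coefficient: 120n, exponent: -2 }; // Decimal128-style "1.20" +const normalizedForm = { value: "1.2", significantDigits: 3 }; // same information +// These are interconvertible while the digit count stays within Decimal128's +// 34-digit coefficient; a request like "2 fractional digits" for a number whose +// integer part alone needs more digits than that cannot be represented this way +// (WH's counterexample about large numbers). +```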
+ +WH: So far I have heard plenty of discussion about how we could work around the limitations of the IEEE quantum, but I haven’t heard any reason why we should be using it in the first place, rather than storing the precision as a number that is independent of the value being stored. I have yet to hear any motivation other than trying to save a byte or two. + +NRO: We don’t really have a use case for that. If you have a use case for representing a precision that cannot be represented in an IEEE Decimal128 number, then lifting the restriction would personally be fine with me. We heard from implementers that the restriction makes sense for them because it makes things easier. + +SFC: I didn’t mean for the discussion to go down the road of quanta. I brought that up as a way that implementations could choose to represent this more efficiently. + +WH: Yeah. I still don’t have a good answer for how you would print a bunch of numbers, each with two decimal digits after the decimal point, and have them line up. + +CDA: Okay. Thanks, everyone. We are past time. + +### Speaker's Summary of Key Points + +- We presented a new class that solves problems with Intl and decimal. +- We suggested using this new class instead of bare decimals in Intl. + +### Conclusion + +- There was some concern about the commutativity of applying a unit and a precision. +- We discussed problems with representing, in Decimal128, very large/precise numbers such as those arising in cryptocurrency. +- There were some concerns about our proposed “banning” of bare decimal values in Intl. + +## Guidelines for Locale-Sensitive Testing in Test262 + +Presenter: Philip Chimento (PFC) + +- [slides](https://ptomato.name/talks/tc39-2025-04/#8) + +PFC: Hi again, everybody. This is a topic that I presented informally to TG2 a few months ago, and I thought it would be helpful to bring it here as well and get feedback. This is not a normative thing for the specification; it’s just a discussion of what kinds of tests are helpful to have for parts of the language that are locale dependent. ILD is an abbreviation standing for behaviour that is implementation- and locale-defined; this presentation is about ILD behaviour in JavaScript. + +[slide 9] + +PFC: Here's an example. You use the toLocaleString method of Date and you pass some arguments to it, and you get back an answer that says "in the afternoon". That is obviously dependent on language and culture. The spec text says about this: + +> Let _fv_ be a String value representing the day period of _tm_ in the form given by _f_; the String value depends upon the implementation and the effective locale of _dateTimeFormat_. + +PFC: So, taken in the most literal way, the specification says that any string can come out of this code—even a series of 1,024 `X` characters concatenated together, or something like that. That would be legal, but we don’t want that. So implementations make their own choices, and they largely agree on what should come out of here, but that functionality is often delegated to third-party libraries such as ICU4C and ICU4X. + +[slide 10] + +PFC: I would argue that it is good for users of the web when ILD behaviour is stable, and websites don’t break and suddenly produce different results. But I would also argue that it is good for the web when ILD behaviour is updated to reflect current cultural practices, so that websites are localized in a way that users find comfortable.
As an example of that: the locale-dependent formats in data repositories like CLDR are often wrong because somebody in the past made an arbitrary guess as to how a locale represents dates and numbers, and they guessed wrong; then somebody who actually has more knowledge of that locale complains and submits a change, and the behaviour is updated. + +PFC: So ILD behaviour being stable and ILD behaviour being updatable are both good, and they are obviously diametric opposites. That brings me to the more practical consideration: what do we do when we are testing this behaviour in test262? + +[slide 11] + +PFC: Obviously, if we stuck to this spec text and only tested literally what the spec text says, we could not make any assumptions about the behaviour, because arbitrary strings can come out. That seems certainly not very helpful for implementations, and not good for users of the web. We do want test coverage of these APIs, and we do have existing test coverage of these APIs in test262. We will talk about what we want out of that test coverage and what is helpful. Should it be a goal to cover every locale and option for every API? My opinion is no. I think if you do that, after a certain point you reach diminishing returns, and you are not testing the JavaScript implementation with the ILD test anymore—you're just testing the underlying data source. + +[slide 12] + +PFC: We do have tests in test262 for this sort of behaviour, and there are two strategies that are often used that I consider not ideal. One is called 'golden output' and the other one I will call 'mini-implementation'. Golden output is testing jargon; it means comparing the output of the method under test against known-good output. I think this is undesirable in test262, because what is the golden output? It varies between implementations: each major browser has its own human interface guidelines, under which it amends some data in these data sources in CLDR and ICU. Golden output will also vary over time, as they update the data in the data sources. All of these variations are permitted by the specification. We don’t want to ban variation, but we do want to make sure that the variations are limited to things that make sense to vary. And finally, if you build in golden output, that means the test can only reasonably be run by an implementation using a particular version of CLDR. If you are using another version, or another data source altogether, forget it. + +[slide 14] + +PFC: The other strategy that is often used I will call 'mini-implementation', and you can see this in some of the files in the harness directory of test262. It is basically writing a polyfill for part of the spec in the test code, and then comparing what that polyfill outputs to what the implementation outputs for the method under test. I think this is undesirable in test262, because it makes it difficult to understand what is being tested, and when the test fails it is unclear whether that is a problem with the implementation or a problem with the polyfill. + +[slide 15] + +PFC: That was a bunch of slides on what not to do. What should we do instead? Here are some ideas that I have collected or thought of. + +[slide 16] + +PFC: One option would be to use stable substrings. This is not quite golden output: you identify a part of the output that can reasonably be expected to be stable across versions of the third-party libraries and data sources, and across implementations, even taking into account their own human interface guidelines.
In the example here on this slide, you want to test date-time formatting with `dateStyle: 'full'`. Instead of asserting that the result is equal to some string that you have predetermined, you assert that the result contains the month name written out in full in English. This is more robust than comparing against golden output, but it does share some of golden output's disadvantages: it may be more stable across implementations and time, but it is not entirely so. + +[slide 17] + +PFC: Then there is comparative testing. This is a principle where you say that each setting of an input option must produce a distinct output. This can be good for getting coverage of all the code paths in implementations and making sure that each line is exercised, which in some cases is a goal. There is an example here on this slide: you can format a date with the weekday either narrow, short, or long, and you can reasonably assert that the narrow weekday is not equal to the short weekday, which is not equal to the long weekday. But that assumption does not hold in all cases. The second stanza in the code sample does the same thing for the day option, where the settings are numeric and 2-digit; if you have a day that is greater than nine, the numeric day will be the same as the 2-digit day, because no zero-padding is necessary. So that assertion would fail, and you need to apply this approach judiciously. + +[slide 18] + +PFC: And there is metamorphic testing, which RGN pointed out to me, where you find invariant properties of the output that must hold across multiple inputs. This is nice because there is no need to specify exactly what those properties' values are; you just specify that they hold. That sounds easy, but it is not easy in all cases. Here's the example in the code sample on this slide: you format a date with just the day; you format the long month name; and then you format with full dateStyle. The property is that the full dateStyle output should include the day, not zero-padded, and include the long month name. I think that is a reasonable assumption if you want to test full dateStyle without hardcoding golden output. But again, it does not hold in all cases, and sometimes finding these relationships can be difficult. + +[slide 19] + +PFC: So, that is an overview of the things that I look at when I am looking at ILD tests in test262, and I would love to hear further thoughts. There is an issue here that you can click through to, where you can also continue the discussion. I'm especially interested to know what kinds of guidelines are helpful for implementations here. I am assuming the most helpful thing is for each implementation to test that the output is exactly what they expect; that is probably not feasible for test262, because we permit variation between implementations in certain cases. So what would be the next best thing? I would be particularly interested in hearing about that. So I will open up the floor to questions. + +SYG: I did not understand the 'mini-implementation': how can you test the polyfill against the actual method? + +PFC: I can put a link to an example of this in test262 in Matrix. + +https://github.com/tc39/test262/blob/61fcd7bd565e01f795e55080ed9af70b71adb27e/harness/testIntl.js#L2517 + +SYG: I can read the link, no need to explain. + +PFC: Okay.
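+ +For illustration, the three strategies might look roughly like this in a test262-style test (a sketch, assuming the harness's `assert` helpers; this is not the exact slide code): + +```js +const date = new Date(2025, 3, 23); // April 23 + +// Stable substring: assert on a robust part of the output, not the whole string. +const full = new Intl.DateTimeFormat('en-US', { dateStyle: 'full' }).format(date); +assert(full.includes('April'), 'full dateStyle contains the long month name'); + +// Comparative: different option settings should produce distinct output... +const narrow = new Intl.DateTimeFormat('en-US', { weekday: 'narrow' }).format(date); +const short = new Intl.DateTimeFormat('en-US', { weekday: 'short' }).format(date); +assert.notSameValue(narrow, short); +// ...but apply judiciously: for day 23, { day: 'numeric' } equals { day: '2-digit' }. + +// Metamorphic: an invariant across formatters, with no hardcoded golden value. +const day = new Intl.DateTimeFormat('en-US', { day: 'numeric' }).format(date); +assert(full.includes(day), 'full dateStyle contains the non-padded day'); +```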
+ +EAO: Your presentation reminded me of testing that I think we ought to be doing, in particular for `Intl.DateTimeFormat`. Two or three years ago, one of the spaces in en-US date formatting changed from a simple space to a thin space, and this broke sites that were presuming that they could format a date using the en-US locale and rely on that format being accepted by the built-in date-time parser. I think it would be appropriate for test262 to test for changes that would impact users who are using internationalization for non-internationalization purposes. Other examples include the tricks currently used to format dates in a year-month-day representation, by formatting dates in Swedish or with the `calendar: 'iso8601'` option. If these things change due to CLDR data and ICU implementation changes, that theoretically ought to be fine, because this is internationalization; but in practice things will break, and test262 should be pointing to that stuff breaking. + +SYG: I don’t know about this Swedish thing, but I do agree about the en-US thing: given its reach, and that it is basically the default, chances are that people already depend on it on the web, so it should probably be treated as stable. If there is no intersection among the implementations currently, of course, that is a good signal that maybe that stability is not as needed; but for things where there is intersection among the different browsers, it would be good to get an early warning that something changed in en-US. And I am talking about actual goldens in this case—anything that would give us a guarantee that something is stable. For your possible alternatives—stable substrings, comparative testing, and all of that stuff—I am not exactly sure yet how I would think about what kind of guarantees they give me as an implementor when I see a test break. A stable substring might tell me there is less likelihood that parsers will break, but I have no idea how people will depend on that specific output. That is all to say: the most important thing to me as an implementor is stability for en-US, for sure. + +SFC: As a sort of reply to what SYG was saying: I think the gist of this particular line of thinking is that developers make assumptions about invariants of the standard library that are not intended to be made—that is sort of our definition of abusing Intl APIs. If we can identify what those assumptions are, then I think there is a reasonable argument to be made that they could go into test262, because that would basically be an early warning signal. However, I don’t know whether test262 is necessarily right for that purpose, because it is trying to test conformance to a specification, not "will this break the web". Maybe test262 can be that thing, but I want to be clear that this is a different use case from testing whether implementations conform to the spec.
And regarding the en-US thing: I know that it is a proxy for the real problem, which is code that abuses individual APIs. We have evidence of this, like a popular Stack Overflow question about the Swedish trick, which is maybe one reason I thought about that; another is based on the question of how you do time zone conversion in JavaScript, where the answer is to use en-US date-time formatting and parse the output, and then you have that assumption built in everywhere. I don’t necessarily believe that every API that accepts an en-US locale needs to live up to the same standards as the ones where we found this. So I think proactively testing en-US in test262 is not the best solution; it could be a shortcut, if we believe that en-US carries a different stability guarantee, but that is not the long-term goal. We should identify what the use cases are, and those are the things we should probably be testing. Sorry, that was a bit of a circular argument. + +SYG: I will say something stronger than that: my argument is really about risk management. It is not about doing the right thing for a locale at all—that is an orthogonal problem that is handled by other people. But we keep getting burned—'keep' is perhaps too strong a word—especially after the en-US date format change and how many things that broke. The thinking has already shifted to: how do we de-risk future data changes? Whether something is technically a good improvement for a locale is going to be weighed against the risk of things breaking again, and right now, not breaking is very much the highest priority. That is the lens I will be looking at this from. You can make all the arguments you want about not wanting to compromise the long-term vision of the data, but when deciding whether we should update the data and take in the new changes that you think are great, the lens this is going to be judged through is often: what is the risk of accepting the update, and will it break stuff? + +SFC: You have a valid point about de-risking. I'm saying "test all en-US with goldens" is not a great solution. + +SFC: Moving on from the developer-assumption stuff to spec assumptions, regarding the part about what you called metamorphic testing: a lot of things that are encoded in the spec are safe to test. DurationFormat says that it is composed of number formats and list formats; that is a safe thing to test. Beyond that, testing whether the datetime string contains the date string as a substring can maybe work sometimes, but it is not necessarily a spec assumption; it works from time to time. + +SFC: What are we actually testing? We should be testing that a thing conforms to the specification, and maybe we should write into the specification the spirit of each Intl function—what it conveys to the users—in a computable way. That is what we are trying to test, and maybe we should shape our assumptions around that. For example, could we ask an LLM: here’s the output of DateTimeFormat—can it be round-tripped back? Does it convey the goal? I am not necessarily very supportive of an LLM in the testing pipeline, but that is the spirit of what the API should do. + +SFC: And my last comment: comparison against ICU. It is a little bit like the polyfill thing that you had earlier on.
With the mini-implementation, you could just fire up ICU4X or whatever and use that as a reference implementation; you still have the golden-output problem, but the scope gets smaller, right? + +PFC: Okay, thanks. I see that maybe I should have requested a larger timebox, but I have to go now. I would invite everybody to continue giving their thoughts in test262 issue #3786, and thanks for the discussion. + +### Speaker's Summary of Key Points + +- With the specification permitting almost any results of ILD (implementation- and locale-defined) behaviour, test262 has to strike a balance between stability and adaptability, as locale data sources such as CLDR are often updated. +- When writing tests for ILD behaviour, testing against golden output or a 'mini-implementation' is not recommended. +- We discussed several other strategies that live somewhere around the middle of that balance: stable substrings, comparative testing, and metamorphic testing. +- The en-US locale, and to a lesser extent sv-SE, may need to meet higher stability requirements than other locales due to the prevalence of popular copypaste code that expects certain output from those locales. +- After CLDR replaced ASCII spaces with thin spaces, implementations became more acutely aware of compatibility risk. + +### Conclusion + +- Please feel free to continue the discussion on [tc39/test262#3786](https://github.com/tc39/test262/issues/3786). + +## `export defer` extracted from `import defer`: stage 2 update or for stage 1 + +Presenter: Nicolò Ribaudo (NRO) + +- [proposal](https://github.com/nicolo-ribaudo/proposal-deferred-reexports) +- [slides](https://docs.google.com/presentation/d/1ats5CbsgalobhnfFIR2b1QAdaLRe4yVI55meo_ARqdU) + +NRO: Hello, yes. This proposal was originally presented as part of import defer a while ago. On the surface the two look similar, but while exploring them further, export defer turned out to be much more complex than import defer and to need more time. About one year ago we discussed this, and I proposed to leave export defer behind, because the import part was ready—all of the open questions there were resolved and it was ready to go to Stage 2.7—while export defer was left behind. + +[slide 4] + +NRO: Let me define what barrel files are. Libraries like lodash, or component libraries with components that you can compose together, commonly have a single entry point that re-exports everything; the reason is that it is a much nicer experience for users to import all of a library's functions from a single specifier. These entry points usually contain no code of their own, just `export ... from` declarations. And that’s actually problematic, because with ESM loading semantics it causes unnecessary code loading and execution: you are loading the whole library while you are just using two or three functions. Unfortunately, people use this a lot, because the developer-experience advantages are so great. + +[slide 8] + +NRO: Obviously we don’t want to always load that many files in the browser, and there are some current workarounds; there is [INDISCERNIBLE], and there were similar things for other libraries. It is less used now because there is another solution, which is a bit better: tree shaking. Bundlers try to analyze the code to see which imports are used, and they try to detect which code does not have side effects, so that they know which `export`/`import ... from` statements can actually be removed. And they have different ways of doing so.
For example, webpack checks the `sideEffects` field in package.json as well as the code itself. But all of this is very difficult, because JS is dynamic and you cannot statically determine whether there are side effects or not. A lot of work went into this during the first designs of rollup and parcel—figuring out what "side effect free" means for a module—and the answer was: no, it is just not possible in general. + +[slide 10] + +NRO: When it comes to Node.js with CommonJS, there is a workaround: you require things lazily, and basically export an object with a bunch of getters. This does not work with ESM. + +[slide 12] + +NRO: So what is the proposal about? To understand it, let’s look at a quick example. We have here an app that loads a button from some components library, and the components library re-exports a bunch of components from a bunch of different files. Like I mentioned before, if you just look at the file on the left, the components library entry point, the app does not actually need most of them. One might say we already have a solution for this problem—import defer—but that is a different problem, because its goal was to delay necessary work that you do need at some point; that does not help in this case, because at some point you would still execute those sub-modules, when you didn’t need to load them in the first place. + +[slide 15] + +NRO: This is not about when we need to execute something, but about whether we execute it at all, and whether we need to load the thing at all. So we would mark this export as deferred: `defer` in export position means that this module is only needed if one of the bindings exported here is imported. This specifically tells the engine or tool: if the deferred binding is not imported, you can just skip that file. + +[slide 16] + +NRO: So this is like the same thing as the previous slide, but I guess this is exactly how it would work with the CommonJS getters pattern. + +[slide 17] + +NRO: Export defer is different from import defer, where you want the module available—loaded—so that you can later execute it synchronously. + +[slide 18] + +NRO: In the example given before, loading the button, all of these other modules are not loaded and not executed; you don't even read the files from the hard disk. So the goal here is to improve startup performance by reducing unnecessary initialization work. And there are different loading semantics, so we are not going to check—it is good to show [INDISCERNIBLE]. + +[slide 19] + +NRO: With `export defer * from`, because we do not know which names the module exports, we don’t know whether a given binding is coming from that file or not, so we need to load it unconditionally to know what it exports. + +[slide 22] + +NRO: So why do this in the language? The first advantage is that it provides guaranteed tree shaking that everyone can rely on: if an export is marked, the author of the module is explicitly telling us that we can ignore its side effects, even if they are observable. In some cases this is less powerful than what tools do—some tools are more granular—but the two things can work together, and this provides a baseline. And this works when using ESM natively: you get one step closer to using the browser implementation without a bundler. It is also useful when combined with import defer, for example deferring evaluation to property access: instead of just deferring loading, it skips execution when possible.
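+ +A sketch of the barrel-file shape under discussion. The `export defer ... from` syntax is the proposal's; the file and binding names are illustrative: + +```js +// components/index.js — the barrel file. Each re-exported module is only +// loaded and executed if one of its bindings is actually imported somewhere. +export defer { Button } from "./button.js"; +export defer { Checkbox } from "./checkbox.js"; +export defer { Slider } from "./slider.js"; + +// app.js — with the markers above, only ./button.js (plus the barrel itself) +// ends up being loaded and executed: +import { Button } from "./components/index.js"; +```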
+ +[slide 23] + +NRO: So, I'm in a bit of a weird situation. This was part of import defer when that advanced, and was then split out, so I don’t know if today I should be presenting this as an update, like a confirmation of Stage 2, or whether to discuss a stage at all. I don’t have a strong preference; procedurally, this is either a Stage 1 request for a new proposal, or a confirmation of the branched-off proposal. There is almost-complete spec text—there are a couple of to-dos and there are bugs, but it is almost complete. We can go to the queue; I have a couple more slides, but I can go faster. + +[slide 25] + +NRO: How does this integrate with `import defer`? You can have the defer keyword on the import side and on the export side independently. This screen is not affected by this proposal: we have a mod.js and we import `*` from mod.js, so in this case everything is going to be loaded—both foo and bar will be loaded, and execution of mod.js and dep-bar happens at startup, and we executed—wait, this slide is wrong. We execute dep-foo and dep-bar. + +[slide 26] + +NRO: Instead, if the import has a clear list of names and the module uses export defer, like in this case here on your screen, there is no deferred execution—this proposal does not have deferred execution on access—however, we know that from mod.js we’re only importing foo, so we skip executing dep-bar. When it comes to `import defer *` combined with `export defer`, what you get is a namespace object whose various names can individually trigger execution via the keyword; in this case that would be dep-foo and dep-bar separately, except that we also avoid executing mod.js up front, because it is executed together with dep-foo. And note that in this case, and in the case where you have `import *` without defer, potentially both foo and bar will be executed later, so we need to execute the async dependencies of both of them. + +[slide 31] + +NRO: And there are open questions for during Stage 2—I am mentioning this in case you have opinions about them. And now let’s go to the queue. + +JHD: So, um, this is not better—it is not currently capable, in any implementation I have seen, of removing as much code as just importing directly from the files you need instead of from a barrel file. So we should still be telling people and encouraging people not to use barrel files. But this proposal is great, because it makes tree shaking do less of a bad job—maybe we will see it do a good job with this change, though I am skeptical. I would like to see this advance, but I want to underscore for the group, and for the notes, that tree shaking is currently, and will always be, a subpar solution. + +NRO: I talked to tool maintainers, and I plan to keep interacting with them individually, to make this work as well as possible for tree shaking. + +JHD: Thank you. + +WH: On the “import * & export defer” slide you mentioned that, even though `dep-bar` is not imported, its async dependencies are? + +NRO: Yes, like in this case. + +WH: How common is the situation in the ecosystem that, even though you avoid getting `dep-bar`, you get the nest of its dependencies which are executed anyway? + +NRO: The reason here is the same as in the import defer proposal: if you defer execution, you still have to evaluate the async dependencies eagerly, the reason being that you cannot defer an asynchronous module and then synchronously execute it on access.
And in this case, if we are just doing this `import *`, a later synchronous access could go through bar, and the only way to make that work is by pre-executing the async dependencies. + +WH: If `dep-bar` synchronously imports some stuff, that does not get evaluated, right? + +NRO: If it is async, it does get evaluated. If it is synchronous, it does not. + +SYG: Um, so do we know why tree shaking does such a poor job? Is the “do I care about side effects or not” question the extent of it, or are there other issues as well? + +JHD: Yeah, I think that is a lot of it. Any import can be side-effecting. You might assume that if only the bindings are used there are no side effects, but that is not a safe assumption for bundlers — only for linting rules. I would assume that it is really difficult to do the safe analysis, or whatever the appropriate terminology is, to figure out which code you can delete and which code you can’t, before determining whether you can delete the actual import of the file. + +SYG: You just said it is hard or difficult to determine what to delete; how is that solved by this? + +JHD: In my view at least, it is that you don’t even need to traverse the deferred sub-dependency graph unless the binding is used or passed to a function or whatever — which, you know, on some level is statically determinable. Not perfectly, but sufficiently. + +SYG: So the annotation says “I don’t care about any of the side effects of the module this comes from” — is that equivalent to `import defer`, in the sense that tools would use it for tree shaking? + +JHD: It is possible — I don’t use tree-shaking tools myself, and it might be right that it would not help that use case. But even if it definitely didn’t, I would enjoy it as a syntactic marker of that property. But yes, I don’t have the answer to that. + +NRO: On the syntactic marker: without the keyword, a tool cannot tell from this file whether the re-export is actually used or not, but I can [INDISCERNIBLE]. With the keyword it knows for sure: it does not need to check the contents of dep-bar to try to figure out whether it has side effects or not. It can just blindly remove it. + +SYG: But what I mean is — is it a fair characterization of the problem that the reason tree shaking does so poorly is that it requires correct, out-of-band, explicit annotation? Like in this case, dep-bar does not have anything I care about, and saying `export defer` signals my intent about dep-bar — does it come down to that? + +NRO: Yes, when it comes to some tools. Other tools actually tend to perform better in some cases: they try to detect whether there are side effects or not without relying on the annotation. + +SYG: So not relying on— I think I understand, but let me segue into my next question. I am sympathetic to the performance problem you presented, but it comes down to this: the current way the ecosystem works around it is insufficient due to, basically, a lack of good annotations, because this solution — at least for the tree shaking problem — comes down to letting programmers annotate. Right? You will still have to annotate it, except with `export defer` instead of some other tool-specific thing.
Have you thought about, instead of bundling this into the deferral behaviour, a one-time way of signaling that the exporter does not care about the side effects of the top level of the module they are exporting — or importing and re-exporting, in this case? Since we have import attributes, off the top of my head one possible way to signal that annotation is something like `sideEffects: true`, or something like that. Have you thought about that? + +NRO: So one alternative I considered was just to have a dedicated annotation on the re-export saying that its side effects don’t matter, so that there is no new production for this export keyword. That would be roughly the same semantics, but in working on this proposal we decided: okay, let’s mark this with a keyword and build on it. Because it affects semantics in a way that you would not otherwise be able to represent in JavaScript: with just an annotation, the imported file would still export the same object, while this is giving semantics that you could not normally express. + +SYG: Okay, um, my general concern here is the feature matrix for ESM: which features can you combine? Now we are adding TLA, `defer` on the import side and on the export side, different phases — the feature matrix for ESM is getting very complicated, and in general that is a thing that I want to simplify. So, as I said at the beginning, I am sympathetic to the problem that you presented, and I think it is important to solve, and perhaps `export defer` is the best way to solve it. But a lot of these things, when you look at them in isolation, are motivated, while the overall ESM story is not in a good place narrative-wise, and, with Wasm in the picture too, we should be cognizant of that. + +NRO: I understand, and using this adds complexity in this case, but— + +SYG: There would be no extra transitive deferral behaviour, and it does not result in that simplification. + +ACE: Sorry, on the tree shaking aspect: it is doing two things. You are saying, one, you can skip loading this at all if I am not referring to the binding; but then also, the other thing, if I am doing `import *`, it is saying you can lazily evaluate these things. So it is not purely a replacement of the package.json […] marker, but also making the evaluation lazy. + +ACE: I do agree that this is just adding more things to ESM. The real shame here, in my opinion, is that this should have been the semantics of `export ... from` bindings from the start. Ideally we could do a breaking change — when you are really just re-exporting something, it is like an alias — and this optimization should have been the default. + +NRO: And I would say that the problem there is code relying on the side effects of re-exported modules — which, if you do, you shouldn’t. So, I see MM saying happy to go to Stage 2, end of message. And then I see Jack saying support for advancement, no need to speak. And Dmitry saying support, end of message. + +NRO: So, to confirm Stage 2 — do we have consensus for this? Does anybody prefer that we go through the stages starting from 0? + +SYG: To clarify, Stage 2 for this as a separate proposal from `import defer`? + +NRO: Yes. + +CDA: You have explicit support for Stage 2 from MM, DLM. Does anybody not support this for Stage 2? + +CDA: There is nothing, and nothing in the queue, and that brings us to the end of the meeting — the end of the day. Thank you, Nicolò.
And thank you everyone, thank you to our note-takers, and we will see everyone tomorrow. + +### Speaker's Summary of Key Points + +- `export defer` has been presented before, when it was combined with the `import defer` proposal. It aims at reducing the overhead caused by 'barrel files' that re-export values from many other modules. +- `import defer` was advanced to Stage 2.7 without `export defer`, due to the additional complexity of handling re-exports. +- The presentation explained how `export defer` differs from and composes with `import defer`. +- One significant difference is that `export defer` allows module loading (network requests) to be skipped, whereas `import defer` only defers execution. + +### Conclusion + +- Reaffirmed that `export defer` is at Stage 2, continuing from where it was when `import defer` was split off to proceed on its own diff --git a/meetings/2025-04/april-17.md b/meetings/2025-04/april-17.md new file mode 100644 index 00000000..e45a1baf --- /dev/null +++ b/meetings/2025-04/april-17.md @@ -0,0 +1,592 @@ +# 107th TC39 Meeting + +Day Four—17 April 2025 + +## Attendees + +| Name | Abbreviation | Organization | +|------------------------|--------------|--------------------| +| Chris de Almeida | CDA | IBM | +| Waldemar Horwat | WH | Invited Expert | +| Michael Saboff | MLS | Apple | +| Nicolò Ribaudo | NRO | Igalia | +| Luca Casonato | LCA | Deno | +| Dmitry Makhnev | DJM | JetBrains | +| Bradford C. Smith | BSH | Google | +| Samina Husain | SHN | Ecma International | +| Ron Buckton | RBN | Microsoft | +| Istvan Sebestyen | IS | Ecma International | +| Daniel Minor | DLM | Mozilla | +| Jesse Alama | JMN | Igalia | +| J. S. Choi | JSC | Invited Expert | +| Ashley Claymore | ACE | Bloomberg | +| Gus Caplan | GCL | Deno Land Inc | +| Zbigniew Tenerowicz | ZBV | Consensys | +| Eemeli Aro | EAO | Mozilla | +| Mikhail Barash | MBH | Univ. of Bergen | +| Ruben Bridgewater | | Invited Expert | +| Shane F Carr | SFC | Google | +| Daniel Ehrenberg | DE | Bloomberg | +| Dominic Farolino | DMF | Google | +| Michael Ficarra | MF | F5 | +| Luca Forstner | LFR | Sentry.io | +| Kevin Gibbons | KG | F5 | +| Josh Goldberg | JKG | Invited Expert | +| Shu-yu Guo | SYG | Google | +| Jordan Harband | JHD | HeroDevs | +| Stephen Hicks | | Google | +| Mathieu Hofman | MAH | Agoric | +| Artem Kobzar | AKR | JetBrains | +| Tom Kopp | TKP | Zalari GmbH | +| Kris Kowal | KKL | Agoric | +| Ben Lickly | BLY | Google | +| Rezvan Mahdavi Hezaveh | RMH | Google | +| Erik Marks | REK | Consensys | +| Keith Miller | KM | Apple | +| Mark S. Miller | MM | Agoric |
+| Chip Morningstar | CM | Consensys | +| Justin Ridgewell | JRL | Google | +| Daniel Rosenwasser | DRR | Microsoft | +| Ujjwal Sharma | USA | Igalia | +| Chengzhong Wu | CZW | Bloomberg | +| Andreu Botella | ABO | Igalia | +| Andreas Woess | AWO | Oracle | +| John Hax | JHX | Invited Expert | +| Jon Kuperman | JKP | Bloomberg | +| Philip Chimento | PPC | Igalia | +| Richard Gibson | RGN | Agoric | +| Romulo Cintra | RCA | Igalia | + +## Disposable AsyncContext for Stage 1 + +Presenters: Chengzhong Wu (CZW), Luca Casonato (LCA), snek (GCL) + +- [proposal](https://github.com/legendecas/proposal-async-context-disposable) +- [slides](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit#slide=id.gc6f73a04f_0_0) + +CZW: This is CZW from Bloomberg, with LCA and GCL from Deno, and we are going to present disposable `AsyncContext.Variable` today. What do we have already? We already have `AsyncContext.Variable`, and it provides a strong encapsulation guarantee to both users and frameworks: mutations of a single `AsyncContext.Variable` cannot leak out of the function scope they provided to `variable.run`. This provides a strong guarantee, and a mental model, that a mutation can only be seen by subtasks inside of that function scope. This API and pattern also fit in well in many web APIs and frameworks, given that they can use `AsyncContext.Variable.prototype.run` to wrap user callbacks: the code can just wrap what it has as a listener, and there is no change to the function shape or function parameters. However, if a user wants to modify the `AsyncContext.Variable`’s value, they run into an issue: they cannot use this callback pattern in arbitrary contexts like generators or constructors, given that we cannot move keywords like `super`, `break`, or `continue` into a new inner function scope with a very naive replacement. + +CZW: So let’s have a look at a recap of how `variable.run` works. In a short overview, `AsyncContext.Variable.run` can be seen as equivalent to a try/finally scope that replaces the `AsyncContext` mapping with a cloned mapping in which the `AsyncContext.Variable`’s value is swapped, and when the function finishes evaluating, swaps the `AsyncContext` mapping back to the previous mapping. We might find this kind of similar to what we have with `using` declarations — so can we enter variables without creating a new function scope, to address the problem that we presented? + +CZW: So we would like to present that: can we introduce `using` declaration support for `AsyncContext.Variable`? They are semantically similar, it also addresses the problem that we just raised, and it fits into existing function contexts well without requiring users to refactor their functions around `yield` or any of these keywords. So the question is: what would `using` declaration support for `AsyncContext.Variable` look like?
We still want to preserve the encapsulation that `AsyncContext.Variable.run` provides, and we want the usability improvements that `using` integration can bring. We will visit the problems that we might face with `using` integration on an `AsyncContext.Variable` later, but first let’s see what we can do with this support. + +CZW: So the primary use case that we have is users creating their performance tracing spans on the web. We have said that `Variable.prototype.run` fits well in frameworks, where the framework takes a user function and sets up the context for the user. But if a user wants to perform mutations on the `AsyncContext.Variable` themselves, they need to refactor their code in order to use the run pattern — and in the tracing case, users don’t want to refactor their code just to add tracing spans that record how their operations perform. So the `.run` pattern is kind of harder for users to adopt in existing function contexts. What we want is to add `using` declaration support not just for `AsyncContext.Variable` itself, but also for library wrappers around an `AsyncContext.Variable`. Library wrappers can extend the functionality of an `AsyncContext.Variable`: a tracing library can wrap its span around the `AsyncContext.Variable` and provide methods like `setAttribute`, so users can conveniently access this functionality without refactoring their code heavily. + +CZW: Compared to the `AsyncContext` proposal, which is at Stage 2, this new proposal builds on top of the existing `AsyncContext.Variable`, but we want to introduce syntax integration to improve usability, just like what we did with promises and async/await. We have promises and `promise.then`, and introducing async/await syntax support on top of promises was an improvement, but we still need the promise functionality inside the language. With the same idea, we want to improve `AsyncContext.Variable` with the `using` declaration, to avoid refactoring code in order to mutate a single `AsyncContext.Variable`. + +CZW: Before we go to detailed solutions, I would like to go to the queue to see if there are any questions regarding the motivations of the proposal. + +SHS: In some cases it’s impossible to adopt run, such as in test frameworks like Jasmine and others. + +CZW: Yeah, that’s an observation that we found in test frameworks: they provide before-test and after-test hooks, so all of these functions are separated into different function scopes, and in this case the `AsyncContext.Variable.run` pattern does not fit. But to be honest, in real-world use cases like tracing, the tracing library can provide alternatives to address the testing facilities — so even though it’s not the primary concern behind this new proposal, I think this proposal can also help improve that use case. + +SHS: Yeah, I think it’s more general: it works for any `AsyncContext.Variable`, without having to do a special thing for each one. A much more general solution, I guess, is what I’m trying to say. + +MM: Yeah, so I just want to see if I can rephrase what’s been said so far in terms that I more strongly relate to, just to make sure I’m oriented.
With the current `AsyncContext` run, you have to give it a function, and then the new temporal scope — the new binding of the variable — applies over the execution of that function. Thinking of the nested scoping as scoping, the variable is only shadowed within that function; there’s no equivalent of assigning to the variable. The variable does not change within the prior scope. Now, there are several constructs in the language that can be understood in terms of transforming to continuation-passing style — not that it could literally be implemented that way, necessarily — but `yield` within generators, `await` within async functions, and `using` for disposables all can be understood as doing something to the continuation of the execution. Dispose is different from the others because the continuation is only within the block. + +MM: Now, the question is: are you proposing something that would change the `using` mechanism itself, or is the AsyncContext change just riding on the `using` mechanism as it exists? If the latter, I don’t understand how `using` would introduce the new shadowing scope — because, once again, it’s important for `AsyncContext` that it only be shadowing, not assignment. For one thing, if it were an assignment, then a snapshot of the context could change its meaning when an assignment happened after the snapshot was taken. So that’s it. + +CZW: Yeah, in the coming slides we will explore different solutions. Definitely, if possible, we only want to add `Symbol.dispose` or `Symbol.enter` on the `AsyncContext.Variable` and reuse all the existing `using` declaration functionality. And we will also explore how to avoid being able to leak values out of the encapsulation of the current scope — so maybe we can revisit this question when we go through the later slides. + +MM: Okay, that sounds good. On this slide in particular, `AsyncContextSwap` — is that something new that’s coming in with this proposal, or something existing? + +CZW: It’s an abstract operation in the `AsyncContext` proposal specification. It is not exposed to users; it’s written here as an illustration of how the current run works. + +MM: Okay, could you remind me — I don’t remember an `AsyncContextSwap`, and the name certainly sounds imperative, like an assignment more than a shadowing. Could you explain what `AsyncContextSwap` is? + +CZW: It is not an operation exposed to users; it’s an underlying abstract operation that replaces the mapping on the agent that contains the variable-value slots. What is exposed to users is that when run finishes evaluating the given function, it swaps back to the previous mapping from when run was invoked. + +MM: Okay, I’m not sure I understand, but I think I’ll postpone further questions about it. + +GCL: `AsyncContextSwap` just takes the current `AsyncContext` mapping, returns it, and then sets the new value to whatever you pass to the AO. The way it’s used in the specification text is to build a stack, basically: you push a new value to the stack by assigning it to the global `AsyncContext`, and then later pop that value by assigning back the previous value that was returned. + +DE: And it’s effectively enforced, because only the two run methods ever call `AsyncContextSwap`. + +MM: Okay.
So it’s always stacking, it’s always balanced, and when you do a snapshot, it’s always of the current bindings — I think I’ll postpone until I have more context. But thank you. + +USA: That was the queue. + +[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3483f5889db_0_22#slide=id.g3483f5889db_0_22) + +CZW: Okay, cool. Maybe I can go on with the slides. So let’s explore the solutions we could have. Right now we have three possible solutions. Solution A reuses the current `using` declaration mechanism, and potentially we would also like to use the Stage 1 "enforced using declaration" proposal, which is `Symbol.enter`. Solutions B and C enhance the `using` declaration to allow `AsyncContext.Variable`s to be used with the `using` integration while still being enforced to the current scope. + +[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g34c9fd99034_11_14#slide=id.g34c9fd99034_11_14) + +CZW: And we would like to enforce the scoping of a `using`-declared `AsyncContext.Variable` in all three solutions, where possible. Solution A can be seen as transforming to this code [slide "Proposal A"]: when `Symbol.enter` is invoked, it swaps the mapping, with the previous value being snapshotted, and it resets the variable’s value when the `Symbol.dispose` method is invoked. In short, `Symbol.enter` captures the variable’s current value and then enters the variable with the new value, so the user can observe the new value after the `using` declaration. When the dispose method is invoked by the `using` integration, it checks whether this `AsyncContext.Variable` is the last one that was entered; if not, it throws, enforcing the scoping, and the value is not reset. So if the user invokes dispose correctly via the `using` integration, we expect these `AsyncContext.Variable` disposables to be correctly stacked, just like with `AsyncContext.Variable.run`. + +[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3494191011f_1_25#slide=id.g3494191011f_1_25) + +CZW: And so what are the context leaks that we mentioned? They are not memory leaks: a leak is when a variable’s value is not encapsulated within a synchronous function call boundary. This is only possible when a user invokes `Symbol.enter` manually without a `using` declaration, and only across synchronous function calls. It’s not possible across async function calls, because async functions are wrapped by promises, and promises continue to behave like `AsyncContext.Variable.run` and properly encapsulate the value. So, what if a synchronous user function really leaks, because the user manually invoked `Symbol.enter` without invoking `Symbol.dispose`? The issue is that if we introduce such a capability, we must assume that any function call can leak. But in use cases like Stephen mentioned earlier, test frameworks may actually want the leak to happen, because their before-test and after-test hooks are split into separate function scopes — so it could be their intention. Even though it’s not our intention to encourage users to do this, it might be someone’s use case, like in test frameworks.
In the equivalent `AsyncLocalStorage.enterWith`, leaks are possible. Proposal A does not enforce use of the `using` declaration, and we recognize that synchronous leaks can cause unexpected behaviors, so we would like to call for general use cases for such behavior. We would also like to highlight that this is not unsafe, in that these value leaks are only observable if you have access to the `AsyncContext.Variable` instance: you cannot observe any synchronous leak if you don’t have access to the `AsyncContext.Variable` object itself. And we will also propose solutions that cannot leak synchronously, which we continue to explore and invite you to look at. + +[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3483f5889db_0_39#slide=id.g3483f5889db_0_39) + +LCA: Thank you. Yes, as CZW said, the synchronous leak problem with A is that it exposes a function to user code that can enter a context without it being forcibly exited, which means that a user can enter a context and possibly leak it out of a synchronous function scope. Proposals B and C both try to prevent this through two different mechanisms, tying the enter and exit of the `AsyncContext.Variable` value directly to the `using` declaration syntax — making the enter and dispose methods behave in sort of a special way when called from the `using` syntax, and not behaving in a way that would let you synchronously leak out a value when they are called manually by the user. + +LCA: The way that proposal B does this is by still having the `AsyncContext.Variable.prototype.withValue` method that returns an object with an enter and a dispose method, but calls to this enter method do not actually enter immediately: if you look at the value of the `AsyncContext.Variable` directly after calling `Symbol.enter` manually, no value will have been entered — you will still see the previous value. Instead, the `Symbol.enter` method records, using some internal state, whether it was called or not. Then the `using` machinery, when it is done calling the `Symbol.enter` method on the object that was passed to it, will actually perform the entering, so that the entering happens within the `using` machinery and not within the synchronous function call. And the `Symbol.dispose` method doesn’t actually do anything; instead, the `AsyncContext` restoration happens entirely within the `using` machinery. Can you go to the next slide? + +[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3494191011f_1_1#slide=id.g3494191011f_1_1)
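+
+A sketch of the user-observable behavior LCA describes for proposal B, where a manual `[Symbol.enter]()` call is only recorded, not performed (semantics as presented, not settled):
+
+```js
+const v = new AsyncContext.Variable(); // default value: undefined
+const scope = v.withValue(1);
+
+scope[Symbol.enter]();
+v.get(); // still undefined: the enter was recorded, not performed
+
+// Only the `using` machinery actually performs the entering:
+{
+  using _ = v.withValue(2);
+  v.get(); // 2
+}
+v.get(); // back to undefined: restoration happens in `using` itself
+```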
You can not manually leak out an AsyncContext Variable from a function scope or any other scope because it will always be reset by the `using` declaration. + +[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3483f5889db_0_47#slide=id.g3483f5889db_0_47) + +LCA: Proposal C does something very similar, but with a slightly different approach. Where instead of there being sort of a behavioral change insider using, we instead add a new internal enter and exit—or and dispose slot to the `AsyncContext.Scopable` object, which is the object that would be returned from `AsyncContext.variable.with` value. That using would call instead of a `Symbol.enter` or `Symbol.dispose` method if they’re present. And these are internal method that cannot be called by user code. They’ve only callable by using syntax, so the user cannot manually enter and exit. And this has exactly the same implications as proposal B. It just works through a slightly different mechanism where instead of it sort of being a side effect to call `symbol.enter` and the using machinery sets, that it’s a using slot on this method. And that has implications for ShadowRealm boundaries, which I’ll get to in just a second. + +[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3483f5889db_0_59#slide=id.g3483f5889db_0_59) + +LCA: Yeah, so let’s start with the cons of each of these. Proposal A can leak, as we’ve discussed. Because of the fact that we do not want two to allow interleaving of variables, which means enter and exit must always be balanced, there has to be some slightly more complicated logic in the enter and exit functions to ensure that you cannot—yeah, that, like, you cannot call enter and exit in an unbalanced fashion. But, yeah, you can not prevent the actual scope leak. You can just prevent the exit happens in an interleaving fashion. + +LCA: Proposal B adds this sort of new global mutable state into the using declaration, but it’s not really problematic. It has exactly the same user observable semantics as using `AsyncContext.Variable` right now. Like, you can only use see the mutability for your own variables, which is not different from giving somebody the ability to enter an `AsyncContext.variable` using `AsyncContext.variable.prototype.run` context run. + +LCA: And proposal C as unforgettable internal slots that probably cannot work through a ShadowRealm callable boundary because they cannot proxy pierce. So this is the main drawback of proposal C. And as proposed, you can only set one variable per scopable, but this is something you could change if there was use cases for this. + +[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3483f5889db_0_54#slide=id.g3483f5889db_0_54) + +LCA: And then pros, proposal A requires no special handling of the using syntax. It is just three methods. One method with value method is in context vary constable the object that is returned would just work with using syntax assuming there’s a `Symbol.enter`. And it works well with proxies with no special logic anywhere. But, yeah, it has the ability the leak, which some also consider a use case maybe for, for example, this test use case. We’ll have to see about that. 
+ +LCA: And proposal B, you cannot scope leak and it has—and proposal C cannot scope leak and asynchronous calls and it’s simpler to explain than proposal B because these are go internal slots, which we already have behavior like that elsewhere. But, yeah, we discussed the cons already. + +[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g34833c460bf_0_34#slide=id.g34833c460bf_0_34) + +LCA: so I do want to quickly cover the thing from earlier where we don’t just want this to happen for the `AsyncContext.Variable` itself, but also for objects that wrap `AsyncContext.Variables` , and we think that this is something that done through composition of `symbol.dispose` , and it’s slightly different for proposal C, but it’s still possible, where you could have an object that internally contains an AsyncContext variable and you call with value and `Symbol.enter` and manually a call to `Symbol.dispose` inside of `symbol.dispose` of the object you’re actually passing to using. You can see that illustrated here on the code on the left. Next slide. + +[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g34833c460bf_0_47#slide=id.g34833c460bf_0_47) + +CZW: Yeah, so `Symbol.enter` is optional for proposal A and C, but we would like to say that it’s favourable because we can enforce that an `AsyncContext.variable` integration with unit integration are enforced: it must be invoked with a `using` declaration, and we can—it’s not expected to be invoked without dispose, and it also allows library integration, like the previous slide showed that with convenient extension. + +[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3483f5889db_0_68#slide=id.g3483f5889db_0_68) + +LCA: Sorry for that. I just—my Internet stopped. Okay, so then, yeah, disclaimer, this proposal only works with async `contest.variable` and there’s no using integration for `AsyncContext.snapshot.` This is something we can talk about. We can talk about that offline. + +[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3494191011f_1_49#slide=id.g3494191011f_1_49) + +LCA: Yeah, so the summary is the `AsyncContext.Variable` prototype.run provides new behavior that is very useful for developers in frameworks especially, so when you’re not directory dealing with this but wrapping existing callbacks to a framework, that run requires a new function scope, which means widely using it in a code base that is not already using callbacks heavily often requires heavy refactoring, especially when using constructs like break or return. And we do expect there to be wide use of `AsyncContext.Variable` specifically for tracing, which is helped by a lot of instrumentation all across the user’s code base, so it would be good to make it as easy as possible for user to adopt this without requiring heavy repack or thing. And AsyncContext integration does support this the same way async await made it easier to adopt promises. And we’re specifically not looking to introduce new syntax and use the existing syntax because it does already lexical binding, which is what we’re after. And there’s currently three possible solutions we’re exploring, each with different tradeoffs and we’d be very interested to hear from you all what your thoughts are on these different solutions and also obviously on everything else on the proposal. 
+ +LCA: So let’s go to the queue. + +RBN: I know I’ve discussed this with champions for AsyncContext proposal in the past, but my biggest concern with options B and C is they will break intuition with the disposal stack and how composition is intended to work with using declarations. Mainly, I've talked with a number of delegates in the past who when they talk about using the declaration proposal they generally think of the actual semantic behavior of using declaration syntax is that it’s essentially a syntactic sugar over just working with disposable stack. But this would—these two options, B and C, grant specific capability to `AsyncContext.Variable` and specific interactions with using that prevent these `AsyncContext.Variables` being used with the dispose stack or async disposable stack, which means they can’t be used in composition. So I’m really—that’s one of my biggest concerns, is that this is introducing a break in intuition with how any other disposable works. + +GCL: So, sorry, I think C, I would agree with you that C is not immediately composable with a disposable stack, but I believe the other two are. + +SHS: B doesn’t work either because with the using exits before you pass the enter or the variable itself to the stack. + +RBN: Yeah, can you go back to the example of the desugaring of option B. Yeah, this is—yeah, so here you’re—there’s two issues that I see with this. One is you’re introducing a syntactic transformation over a run time value. There’s no way that you can know when doing any type of static analysis and parsing without, like, a full, like, strong type system and reliable type system that the thing that you are passing in is an `AsyncContext.Variable.` This would require run time evaluation to know that it needs to do something special, unless you’re doing this for everything, and then you’re adding every single dispose. And then this AsyncContext enter/exit restoration functionality, it’s not, again, tied into, like, if I hook `AsyncContext.variable` and stuffed it into a disposable stack and I did it using around, that then it’s not necessarily going—if it’s in runtime detection is not going the detect that that’s actually part of that disposable context or disposable stack. + +GCL: You’re referring to the implementation wanted to optimize this to not always take a snapshot around using syntax? + +RBN: Your position is that with this approach is it will always snapshot at every using? + +GCL: I’m not entirely sure—I’m trying to clarify what you’re trying to claim. + +RBN: Let me go back for a second and say I’m trying to understand this slide. Is this asserting that every single using declaration would introduce an AsyncContext snapshot? + +GCL: It is asserting the observable semantics of the—this proposal. Whether the—like, are you asking about whether an implementation could optimize it to not do that? + +RBN: I’m asking what's—no, I’m not asking about optimizations or anything. I’m asking about if line 1 was not new `AsyncContext.variable` or line 4 was not calling with value, would this be doing the same thing for any other value? + +LCA: Within the spec, yes. + +RBN: So this will add to the belief to every single person using tech— + +LCA: No, no, within the spec. I’m not saying this is not—this cannot be optimized away. + +CZW: Yes, within the specification, this would just always perform the AsyncContext machine machinery. + +LCA: Yes, the same way we do for async functions, for example. 
+ +RBN: Async functions I kind of understand overhead, because we’re working with async await, you’re not necessarily expecting the highest performance, because there’s always—there’s going to be some context switching and overhead from continuations, everything else that’s associated with that. But having, like, one of the interests—or one of the goals we have, or I have at least when it comes to shared structs proposal and shared memory multithreading is high performance applications that need to be able to work with locks will be using declarations to lock and unlock, and they do not want overhead. + +RBN: The—the other point to my topic is that in both B and C, there’s discussion about `symbol.enter`, and the proposal for `symbol.enter` that’s currently at Stage 1 is specifically about `Symbol.enter` being a more complex extra step to enforcing that you’re actually using a declaration with using that if you really, really, really want to or immediate to or have a very specific reason that you can call out to that method the invoke it, which, yes, could result in the potential context leaks you’re discussing, but the point it being a built-in symbol that 99.9% of people won’t have to look at because nail be typing using whatever equals and that value, means that people aren’t going to be reaching for this unless they really need, to people who really need will most likely be taking extra care about stack discipline. I’m not really certain that B and C are necessary in that context. + +NRO: Yeah, all times mentioned that this requirement is not all the time. Like, part of this context proposal design was that AsyncContext snapshots will be very cheap to get, they’re just copying a pointer, and you don’t actually need to, like, iterate through a structure to copy its values, which is why it’s okay to do it, for example, at every single await. So I wouldn’t worry about that too much. It’s just literally copying a pointer to somewhere else. + +SHS: Yeah, gist warranted to point out the overhead question, and there’s the issue of the order it happens. If the finishing the using is where we’re making mutation happen, that does break disposable stack, because you use using to enter the stack, and then you event kind of put more disposables on the stack without using syntax. And so that would not trigger the mutation mutation and that does break the intuition and the composability. + +DE: Yeah, I’m surprised by this suggestion of using AsyncContext Snapshot. I thought we were going to solve this by using general solution to making `Symbol.enter` more reliable in general, like RBN had proposed previously. I also want to say I think such a reliable `Symbol.enter` mechanism can work with DisposableStack, though it can be pretty complicated. It would mean that a lot of things that would previously be just a function call now would return a Scopable that could only be used with `using` again. The composition still makes sense, it just changes the interface, if the stack ever includes anything that has an enter that must be called. + +SYG: This is originally clarifying question and it has been clarified that proposal B is using hard coding to be aware when the right-hand side is an `AsyncContext.variable` ? Like, hard coding in the sense that no matter what the right-hand side of the using—sorry, not the right—yeah, the right-hand side. 
No matter what the right-hand side is, there’s this AsyncContext machinery that now happens both on the using—at the using site itself and in the finally block that they dispose of. I want to triple check that. That is what you’re proposing for proposal B? + +LCA: That is correct, yes. + +SYG: Okay. That is very unpalatable to me for a lot of the same reasons that Ron has said. But also it feels like taking a step back, it certainly feels like we’re running ahead of the solution space here. Like, there’s AsyncContext has been designed for while, with you there’s zero baking time. Using is barely shipped, only in Chrome, I think, I don’t think it’s shipped anywhere else. There’s not enough baking time. This feels—however I personally feel about this design, it just feels like given the maturity of the proposal dependency chain here, there’s—AsyncContext does not rise to the level of needing special casing in syntax that is itself very new yet. There’s a too much risk here that I don’t think something that ties `AsyncContext.variable` into a piece of syntax I think that is warranted. + +LCA: I do want to respond to this before we move on to your topic, Dan. Like, we have—there has been a—essentially equivalent proposal to AsyncContext as we have experienced through AsyncLocalStorage there for a long time, which is already being used for tracing. And we’re seeing a lot of the problems discussed in this presentation there right now. Particularly the very heavy need to use callbacks, and the very difficult refactoring hazard when you’re using some of these syntactic constructs like break and return. So this is, like, not coming out of nowhere with no practical experience. Like, in is based on the practical experience from using async local storage. Without having sort of— + +SYG: I think you misunderstood my position. I find proposals B and C deeply unpalatable because they basically hard couple AsyncContext variables to using syntax. Proposal A is palatable. I hear your problem statement. Proposal B and C is what I’m saying, is—yeah. + +LCA: Got it. + +USA: Less than ten minutes remain, so I would suggest everyone to be quick. + +DLM: Yeah, I just wanted to second SYG’s point. It came up in a genre view that this might be moving quickly. We have no concerns about this going to Stage 1 and we do kind of feel that more experience is warranted with both AsyncContext and using. + +CZW: Yeah, I think I can clarify that. This is the reason that we don’t want to couple this proposal with the Stage 2 AsyncContext proposal, and I don’t think we will proceed with this any time faster, because we really see the benefits that this proposal can be benefited from the .run enforced by using declarations with the `symbol.enter` . So, yeah, this is the reason that we want to propose a new proposal to advance to Stage 1, ask to advance to Stage 1. + +DE: I want to expand on what Luca was explaining about the motivation for this. First, I don’t think this proposal is essential for AsyncContext. I think `AsyncContext.Variable.run` is completely good enough and already corresponds to the, you know, common best practices for using AsyncLocalStorage. There are some uses of `AsyncLocalStorage.prototype.enterWith`, which is a different method that ends up letting you set a variable without entering the scope, breaking the stack discipline. So this proposal is an effort to get back some of those ergonomics, which I really do think are essential for AsyncContext. 
There are several—mine, not several, there are a few people in the no JS community who were especially interested in maintaining this ergonomics, so bringing this proposal to committee helps to get feedback on that as a possible future direction. That doesn’t mean we need to do it now. But it will be helpful to see this actually considered by the committee. That will be helpful to bring back to the Node.js community and talk through what that means. So it’s helpful whether or not we adopt the proposal. + +SHS: I agree that B and C are unpalatable. We should focus on A. And I think one of the main issues with A is that dispose can throw. Are we okay with that? Is that something that is acceptable? + +USA: Okay. Reminder that we have close to 5 minutes remaining. Item three items on the queue + +LCA: We want to ask for Stage 1. Do we have enough to go through the queue and then ask for Stage 1 or ask for that and— + +USA: I think so. I mean, if… you know, the items—depending how much time they take basically. You could ask for Stage 1. And go through the queue later. + +DE: Let’s go through the first queue. We are almost done. + +RBN: Yeah. I had a reply to SHS’s comment that dispose could throw if the stack broke. There is not really anything wrong per se with the disposed throwing. You shouldn’t if you can avoid it it. But the spec is designed to capture error and having dispose throw because you broke stack discipline is way to inform the user that they broke stack discipline and resolve that by adjusting their source code. So it seems like that’s a good thing. Rather than a bad thing. So I don’t—I don’t have an issue with dispose throwing in that case. + +SHS: Excellent. + +RBN: So LCA mentioned during the presentation that about composition. You said composition was feasible both with proposal A and B. I was trying to—I was trying to ask how it was feasible because it didn’t seem like it was. I think I might better understand that now when it’s—with the explanation that the snapshots are always happening around using. I do have some concern around the complexity of how that works with the—how that works with enter, but not that concerned about this topic anymore. + +LCA: Okay. Yeah. I think the way it’s possible is because using call `symbol.enter` on the outer object and the outer object can cause that on the inner object that has the side effect of mutating this snapshot that is set at the end of the original enter call. It’s possible with B or with C. If you do some prototype shenanigans on the return type of enter. But I don’t have— + +RBN: I am not sure—C does not seem reliable. It adds a lot of flexible to of the you are forcing the user to use a super type and class to do this, which really doesn’t—it might not work well can compositional cases + +LCA: Yeah. I tend to agree. + +DE: How are we preserving this stack discipline with generator? If you yield in the middle of a `using` block, if you never resume that generator, does this break stack discipline? Or does something about the generators work to restore the previous AsyncContextSnapshot at that point? + +CZW: The current Stage 2 proposal ensures that the generators are also preserving the encapsulation of the AsyncContext generators. So in the Stage 2 proposal, before and after the yield statement, `AsyncContext.variable` observe the same value. Regardless of what the caller will change the context async generator. + +DE: Yeah. 
That’s inside of the generator. But outside of it, if you call `.next()` and that starts a `using` block inside the generator, how do you prevent that from being unbalanced? + +GCL: I would expect that we will specify all suspension points — for generators, async functions, and async generators — to restore the AsyncContext. But we haven’t discussed the exact details there. I think not doing that, as you say, would sort of bring the same problem back. But yes, that’s what I would expect. + +LCA: Okay. Shane is in the queue. Do you want to go to the final slide, Chengzhong? + +[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g34833c460bf_0_52#slide=id.g34833c460bf_0_52) + +LCA: Yeah. Yeah. Ask for Stage 1 first. + +USA: Let’s see if we have any comments from the queue. Also feel free to express support. Okay. We have support from CDA on the queue. + +RBN: Just briefly: I do support the idea of disposable `AsyncContext.Variable`s. I have concerns about options B and C, and I am still a little iffy on `Symbol.enter` on its own. With a `using` declaration, the idea is that you do the initialization when you do the acquisition, so the `withValue` call would essentially be a good place to actually change the context. So I am still up in the air on whether `Symbol.enter` is even necessary for option A, as long as you have a `Symbol.dispose`. But considering it is a proposal we are still investigating and looking at, I don’t have an issue with continuing to look at option A in that case. + +CZW: Thank you. + +SYG: I want to clarify: the reason this is a separate proposal, and not folded into the existing AsyncContext proposal, is because the champions think this is not integral to AsyncContext? + +CZW: I would say this is an improvement on the AsyncContext Stage 2 proposal. The Stage 2 proposal can work on its own and provides the functionality that we need for context; this new proposal is essentially to improve the usability. + +SYG: That doesn’t answer why it needs to be a separate proposal. + +CZW: For it to be a separate proposal — as mentioned, it depends on `Symbol.enter`, which is also at Stage 1, and we don’t think it’s necessary to block the Stage 2 proposal on it. + +USA: There are responses, but I believe you have already answered — yes, they have gone away. That’s the queue. We are 2 minutes over. Let’s give a few more seconds to see if somebody has thoughts on Stage 1. + +DE: Do we have a definition of the scope or problem statement? Does it differ based on the different options? We have heard some opposition to some of them. + +[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3494191011f_1_18#slide=id.g3494191011f_1_18) + +CZW: Well, I think this page of the slides explains the intention. The ultimate goal is to allow `using` integration for AsyncContext, and we could explore that, as we said, with `Symbol.enter` or `Symbol.dispose`. Even with solutions B and C, I think this page shows the scope: we want to explore the feasibility of these solutions. + +LCA: I think a more written-out version of this is on the third-to-last page, Chengzhong — the [summary slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3494191011f_1_49#slide=id.g3494191011f_1_49). + +USA: I’m sorry. We are past time. Can we focus on Stage 1 for now?
+ +ABO: +1 for Stage 1 as well. + +USA: We have not heard any negative comments. Please let it be known if you have any. + +MM: To be clear, you have heard negative comments; you have not heard objections to Stage 1. I will put myself in that category. I am very concerned about this and doubtful there’s actually a feasible solution, but I am not objecting to Stage 1. + +CZW: Thank you, MM. I think we can bring this up at the SES meeting. Thank you. + +USA: All right. I guess with that, we can conclude with Stage 1. And I hope you folks have a good async chat afterwards and work through some of these things. + +LCA: Thank you. + +### Speaker's Summary of Key Points + +- There are concerns with solutions B and C, as they change the semantics of the `using` syntax. +- Solution A allows composition in libraries and integration with the syntax. + +### Conclusion + +- The proposal advances to Stage 1 + +## WHATWG Observables + +Presenter: Dominic Farolino (DMF) + +- [proposal](https://github.com/WICG/observable) +- [slides](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/) + +DMF: Okay. Perfect. All right. So my name is Dominic Farolino. I work on Google Chrome, and I am working on the Observable API, which is currently a WICG specification. Before we go into the slides, I want to give some context. This is a pretty informal presentation: this is not incubated or proposed in TC39, and we are not asking for specific stage feedback or anything like that. But because we are pursuing this API — which used to be pursued in TC39, and was moved over to WICG with the goal of upstreaming into the WHATWG DOM specification — myself and other browser vendors felt it was important to run the proposal and the design by folks in TC39, to keep everyone on the platform updated and ask for opinions. That’s what I am doing here. + +DMF: I will start with the history of Observables. Like I mentioned, in 2015 it was a Stage 1 TC39 proposal, I believe championed by Ben Lesh — he’s the author of the RxJS userland Observables implementation. In 2017, it was proposed to instead move to the WHATWG DOM standard and be incubated there. A lot of platform editors agreed with this approach and felt it was the right place for it, and that it would be the best way to get it into developer hands faster. Many years later, I took this proposal back up, formally moved it to WICG, created a specification out of it, and have been writing the implementation in Chromium. That’s the context for why we are here today. I want to start by discussing what an Observable is before we cover some of the design details of the proposal. + +DMF: The best way to think about an Observable is that it’s like a promise, but for multiple values. Like promises, they are synchronously available handles that represent async work, which means you can act on them right when you create them, calling methods and operators on them even before the underlying source starts emitting values for their consumption. The main way the Observable proposal integrates with the web platform is through the EventTarget interface. A big part of the specification is a new method on EventTarget called `when`, which creates an Observable that represents the asynchronous stream of web platform events fired at that EventTarget. It’s like a better addEventListener, integrating with the Observable API instead of just callbacks.
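+
+A minimal sketch of the kind of pipeline this enables, per the WICG Observable API (the selector and handler are illustrative):
+
+```js
+const element = document.querySelector("button");
+
+element.when("click")
+  .filter((e) => e.target.matches(".confirm"))
+  .map((e) => ({ x: e.clientX, y: e.clientY }))
+  .subscribe({ next: (point) => console.log("clicked at", point) });
+```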
+ +DMF: So this will enable you to write code like this. You can get an Observable by calling `when()` — here, `element.when('click')` — which represents all the click events being fired on the element. Then you can start calling the operators on Observable, like `filter`: take all the click events, filter them, map each event to the data inside it that you really care about, and then subscribe and add a handler. This is the linear pipeline they offer, more convenient than clunky addEventListener callbacks. We think it helps you get out of the same callback hell that promises helped you get out of, but for async streams of values instead of the one-shot values that promises work on. + +DMF: So where do `filter` and `map` and so forth come from? There’s a list in our spec of all the operators. Some of them return Observables, and some made more sense returning promises. You can check out the spec for the full list and their definitions. I want to cover some design details and talk about the internals of how this proposal actually works. Promises have two components: there’s a producer — the callback the promise constructor consumes, which produces values — and the consumer, which consumes the value (in red, the thing in `.then`). Observables are similar. They are constructed very similarly, taking a callback; but instead of just calling a resolve function, you get access to the subscriber object and `.next()` values to it. The consumer subscribes and passes in various handlers — not just `next` to get the values. There is `complete` to signal completion, because whenever you have multiple values you need a way to signal that you are done, and there is `error` as well, to signal an error to the consumer. So the consumer has the ability to respond to each of these events by passing different callbacks that represent them. + +DMF: Some key aspects of Observables are different from promises. The first one is that they have synchronous delivery. Back to this example: when you `.next()` a value on the right there, in the producer, it synchronously goes to the consumer and triggers the `next` handler. There’s no asynchronous microtask delay like promises have. + +DMF: The second one is that it’s lazy. This is a deviation from how promises work. When you construct a promise and give it a callback, that producer callback runs immediately. With Observables, the producer callback, which produces values, actually gets saved as private state inside the Observable, and it runs later, when a consumer actually subscribes. In that sense, they’re lazy compared to promises. + +DMF: Here’s an example of how that works with the Observable produced by the EventTarget method. You have this Observable that listens to an event. What that translates to is: it constructs an Observable with an internal callback, and whenever that callback eventually runs, it adds an event listener under the hood and forwards events to the subscriber. The benefit is that you can call operators on the Observable immediately. + +[Slide](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_278) + +DMF: One interesting design detail of our Observable proposal is that the producer is essentially multicast.
+
+[Slide](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_278)
+
+DMF: One interesting design detail of our Observable proposal is that the producer is essentially multicast. So this is a little complicated, but what this means is: you can see up top the producer, the callback the Observable takes, and every 500 milliseconds it will produce an incremented value. The first time a consumer comes along and subscribes (it’s `source.subscribe` on the left), it will fire that callback internally and run it. But because the producer is multicast, all subsequent consumers just listen in on the existing producer that is already producing values.
+
+[Slide](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_261)
+
+DMF: We will talk about what happens when consumers register the fact that they are uninterested in values, and how this listening mechanism works; that’s part of the next section, cancellation and teardown. An Observable producer can stop producing values, and can be told to stop doing that. Basically, an Observable shuts down in one of two ways. The first is producer-initiated teardown: the producer callback, under some conditions, calls `subscriber.complete` or `subscriber.error`, signaling to the consumer that it’s done producing values (“you are not going to hear from me”), either by completing or with an error. The second: a consumer that started a subscription can ask to end it by aborting its subscription with an AbortController. Here’s an example of that.
+
+[Slide](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_193)
+
+DMF: On the right is a producer, which registers a teardown to shut it down and ensure that it knows how to stop producing values. And on the left, the consumer passes in an AbortSignal associated with its subscription. At any time it can abort the controller for that signal, and that triggers all the teardowns in the producer to run, so that the producer knows to stop producing values for the consumer.
+
+[Slide](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_118)
+
+DMF: Now, this is tricky when we have multiple consumers, because we can’t just stop producing values if not every consumer has aborted its subscription. The producer is ref-counted for this reason. To reiterate, Observables can have multiple consumers for that single individual producer. And once the refcount hits zero, that is finally when the producer will tear itself down. It can’t do so earlier, because there could be other consumers still interested in getting values from the producer.
+
+[Slide](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_238)
+
+DMF: Once it’s torn down, the Observable is not dead; resubscription is possible, and it can be reignited. Here’s a concrete example, playing off the last example we saw. If we have three consumers interested in the values of this Observable, this producer, then the refcount of the producer function is basically three. When the first consumer at the bottom left aborts its subscription, we mark the refcount down to two. Same thing in the middle: down to one. And finally, when the last one aborts its subscription, the refcount is zero, and then we tell the producer that it’s safe to tear down. It tears down and stops producing values.
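+
+[A minimal sketch of the multicast producer and ref-counted teardown just described; the interval producer is illustrative, and `addTeardown` and the `{ signal }` subscribe option are as presented:]
+
+```js
+const ticks = new Observable((subscriber) => {
+  let i = 0;
+  const id = setInterval(() => subscriber.next(i++), 500);
+  subscriber.addTeardown(() => clearInterval(id)); // runs once the refcount hits zero
+});
+
+const a = new AbortController();
+const b = new AbortController();
+ticks.subscribe({ next: (v) => console.log("A", v) }, { signal: a.signal }); // starts the producer
+ticks.subscribe({ next: (v) => console.log("B", v) }, { signal: b.signal }); // shares the same producer
+
+a.abort(); // refcount 2 → 1; the interval keeps running for B
+b.abort(); // refcount 1 → 0; the teardown runs and the interval is cleared
+```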
+
+DMF: This was a design change we made after TPAC discussion last year and after some developer feedback from shopping the proposal around at different venues. It was seen as one of the bigger footguns in userland observables that they did not do this. So it made sense to take that feedback and deviate from community precedent in a way, and it’s been received well so far.
+
+DMF: The current status of this proposal, I will just—this is a little out of date. But basically, yeah, we would like input. It’s the number-one reacted-to web standards issue on GitHub, given the [?] spec reaction tool. People are interested in it. There’s a lot of developer hype at conferences and on Twitter and so on. So we felt it was important to prioritize this proposal and bring it to developers. We are gathering feedback from Node and from WinterCG, with no negative feedback so far, either neutral or positive.
+
+DMF: And so with that, I would like to thank some of the folks in TC39, JHD and KG in particular, who have been active on the repository, giving us feedback and helping us shape some of the nuanced points of the proposal into what it’s become. At this point, it’s pretty much done. And like I said, myself and other browser vendors felt it was important to run this by TC39 folks, formally update them on the proposal, and see if there are any interesting discussion points that come out of this: basically to keep everyone updated and see if there are any major red flags that people spot. So with that, I think I am pretty much done with the presentation. We can open up for discussion, or just end with a call for any feedback to be registered on the GitHub repository there.
+
+DMF: And yeah. With that, I think I am done with the slides.
+
+CDA: Great. Thanks for coming to the committee to talk about the proposal. We have a number of folks on the queue with some questions. First, we have MM.
+
+MM: So could you go back to the history, where this started in TC39?
+
+DMF: Yes. Yeah. [This slide?](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_27)
+
+MM: Yeah. What I remember is that there was an Observable proposal in TC39, and I don’t remember the time frame. Does it say Jafar (JH)? It does say that. Good. That history is correct; I thought I had heard a different name. I wanted to make sure this was co-championed by Jafar Husain (JH) and me. When he left the committee, I didn’t have the energy to keep going with it, which I would assume is part of why it moved outside of TC39. I do want to express that although I didn’t have the energy for it, I wish that once the energy arose to pursue it somehow, it had been pursued in TC39. I do not understand why the right venue is outside of TC39.
+
+DMF: So this was discussed a little bit. We have a section on this specific topic, venue choice, in the WICG explainer. The gist is basically that the proposal’s primary integration point with the platform was EventTarget. And it made sense to also have a dependency on AbortController and AbortSignal as a cancellation mechanism to unregister one’s subscription. And I believe the cancelable token proposal in TC39 was contentious, and given the layering of how we expected this to be integrated with the platform, I think that motivated this change, and it felt more appropriate for the WHATWG. It wasn’t necessarily a new language primitive.
+
+MM: I understand.
+
+DE: I think there are multiple ways that layering can work for both Observable and AbortController. I hope we can work together between TC39 and WHATWG in the future, rather than trying to claim territory in both directions. In particular, this could have been done with EventTarget being the HTML integration on top [designed together, but not necessarily determining the venue of the core of Observable, similar to many other TC39 proposals]. With AbortController, I think there’s a possibility that we could make an API that’s even improved in terms of usability, as you and I have discussed; that could be done in potentially either venue. So I hope we can keep working together on these things.
+
+DMF: Yeah. I would love that. I think that was very much our intention in starting this discussion up and trying to shop it around here. I very much second that sentiment.
+
+MAH: Could you maybe go back to the example slide where you showed the subscriber?
+
+DMF: Let me see. Which subscriber?
+
+[Subscription slide with consumer and producer examples]
+
+MAH: So this surprised me a little bit. When I think Observable, I think of basically a mechanism that could be built on top of iterators, with some sugar to make them multicast. But I didn’t expect the subscriber to look somewhat different from iterators. It seems like, for example, here, the `complete` and the `error` are very equivalent to the `return` and `throw` that an iterator is understood to have. So I am wondering, in the design space, has there been consideration for Observable as some sugar around iterators, for producers and consuming?
+
+DMF: The closest thing related to this: we have the `Observable.from` method (I wish I had a slide on it) that takes a promise, an observable, an iterable, or an async iterable and converts it to an Observable. So there’s a lot of adaptation and conversion mechanisms between those. But is it the naming of the functions that you are commenting on, or…
+
+MAH: Yeah. I mean, I am wondering what is different about observables in their behavior such that they wouldn’t follow the iterator shape and protocol?
+
+DE: This is something that was discussed by Jafar when he was explaining Observable to TC39. Iterators are pull-based: you get values by calling `next` on them. Whereas observables are push-based: the event is sent to you. So iterators can only work for things that are buffered, whereas with Observables, for events, you often don’t want to buffer them.
+
+MAH: Okay. What is the behavior when you’re producing a value? Is the producer expected to block until all the consumers have consumed the value?
+
+DE: It doesn’t block. It just calls synchronously. Right.
+
+MAH: I see. So the consumption is not iterator-based; it’s callback-based. Got it.
+
+JSC: This is an exciting proposal. Thank you for presenting it to TC39. I want to bring up interoperability, conversions, and dataflow between DOM Observables and ES Signals. ES Signals are another proposal we have, I think at Stage 1 right now. They’re similar, but they’re different. I wanted to ask: have there been explorations on how an Observable could feed a Signal? Or, vice versa, a Signal feeding an Observable? Said conversion API would need to live in WHATWG DOM, not ECMAScript, since Observables are coupled to the DOM. The situation is somewhat analogous to WHATWG streams and ECMAScript async iterables. I understand that Observables are closer to shipping than Signals are, so interoperability APIs could be deferred to a future DOM proposal. But this kind of interoperability and interchange should be explored early on.
+
+DMF: Yeah. I know DE and Ben Lesh have the most comments about the interoperability between Signals and Observables. I unfortunately remain mostly ignorant of the specifics of the Signals proposal, so I mostly let them speak on it. Maybe Daniel has thoughts on it, but I don’t know about Signals, and I defer to the other folks helping me design this.
+
+DE: Signals represent a current value, whereas Observables represent more like a stream of events. When you have an Observable which represents “the value changed to this new value”, then you can make a Signal which represents the current value, namely the last one. So Signals are not about making sure a callback gets called on every iteration. Instead, they enable you to have a calculation that’s dependent on the current value, and you can refresh that calculation when you want it. So based on the Signals proposal API, in particular the `Signal.subtle.watched` and `Signal.subtle.unwatched` callbacks that you have in the `Signal.Computed` constructor, you can write, in just a couple of lines of code, a conversion function which takes an Observable and exposes it as a Signal. It would subscribe to the Observable when the Signal is watched by a Watcher. And conversely, the conversion could go in the other direction: you could install a Watcher on a Signal that fires an event on the Observable whenever the value changes. So I think the conversion makes sense in both directions.
+
+DE: It’s important not to confuse them in terms of use cases, in particular. Observables have been misused for reactive rendering, and that doesn’t work well in practice because it causes glitches. I really hope the way Observables are explained to developers makes clear that they are not the core reactivity approach for the web. For certain cases, it does make sense to translate like that.
+
+JSC: Yeah. I understand that these two concepts, Observables and Signals, are complementary. I think that coordination by both standards’ champions will be really important in developer messaging, to make their use cases clear to developers on MDN, on web.dev, on other developer blogs, and whatever—so that developers know what Observables’ and Signals’ respective roles should be.
+
+JSC: With that said, I know DE mentioned that feeding an Observable into a Signal or vice versa takes only a couple lines of code. I am hoping that in the future, in DOM, there will be one-line ergonomic APIs that make such conversion/interchange from one into the other very easy.
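+
+[A sketch of the Observable-to-Signal direction DE describes above. This is a hypothetical helper, assuming a `Signal` implementation such as the proposal’s polyfill; the `Signal.subtle.watched`/`unwatched` options follow the Stage 1 Signals proposal and may change:]
+
+```js
+// Expose the latest value emitted by an Observable as a Signal. The helper
+// subscribes only while the Signal is watched by some Watcher, and tears the
+// subscription down when the last Watcher leaves.
+function observableToSignal(observable, initialValue) {
+  let controller = null;
+  const state = new Signal.State(initialValue, {
+    [Signal.subtle.watched]() {
+      controller = new AbortController();
+      observable.subscribe(
+        { next: (value) => state.set(value) }, // remember the last value seen
+        { signal: controller.signal }
+      );
+    },
+    [Signal.subtle.unwatched]() {
+      controller.abort(); // tear down the underlying subscription
+      controller = null;
+    },
+  });
+  return state;
+}
+```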
+
+DE: So, DMF, you mentioned that Chrome shipped this already. (It’s not a standard, because WICG things are not standards-track in principle.) I am wondering what the feedback has been from other browsers.
+
+DMF: We’ve had some neutral or positive feedback from Mozilla folks in person at standards conferences, but [?] on the repository that have taken a lot of time to analyze it directly, and that’s relatively common… WebKit almost flipped their position bit [to positive] but really wanted it to be shopped to TC39 first, before feeling comfortable doing so. I expect a positive response from them, provided there are no red flags or concerns. And there’s an almost complete implementation of Observables in WebKit as well, so it wouldn’t be hard to get them to ship it. They have been reviewing it pretty promptly. So yeah, informally: positive, and zero negativity.
+
+DE: Are you interested in input from TC39? You said it’s done. How would you like to work together from here?
+
+DMF: Yeah. For context, the intent was to give this presentation a while ago; for logistical reasons, it did not get slotted into one of the last meetings, so it is being given a little late. I guess an informal kind of check would be useful, so that I can go back and report to the other browsers that there are no major concerns about this, if that is true. I think we’re close to that point, because we have gotten good feedback from TC39 folks informally, JHD and KG particularly. Just a quick temperature check to make sure this doesn’t jump out as a completely horrible idea, or that these things need to change, and that there’s no fire lit under people’s feet to file fundamental issues against the repository. Just to keep folks updated. I don’t think we need a formal thing; I think that would be sufficient.
+
+DE: I am somewhat worried that people are going to use Observables for situations where they really mean to use Signals. And I think that was a big ecosystem problem when Observables were discussed in the past: that people thought they were confusing, and a lot of the confusion comes from this category misuse. It’s good for us to be adding it. But I hope that the educational materials about Observables avoid misdirecting people in that particular way.
+
+DMF: Yeah. I think that’s a good point. Ben Lesh, the creator of RxJS, seems to strongly agree with that. And I think all of the messaging he has been doing continually, about how they interact with the platform and how they compare to Signals, is pretty aligned with that. It’s possible to do more messaging to hammer home that point. But for what it’s worth, everyone discussing it externally from the platform perspective is, I believe, on the same page.
+
+DMF: What is that thing you mentioned, the temperature check tool? I don’t know anything about that, but…
+
+DE: We have this thing where we can give emoji reactions. I don’t know if that’s what you want, but we use it sometimes for informal polls that are supposed to be non-binding.
+
+CDA: I don’t know if we need to have a check, but… usually we have something more concrete that we’re trying to get a temperature check on, rather than just “how do we feel about the proposal”. But let’s keep moving through the queue.
+
+JRL: Can you show me the slides where you call—this is perfect. The `observable.subscribe` here. As far as I can see, there isn’t any slide that uses the return value from this; maybe it doesn’t return anything. We also have other slides where we were shown passing in an AbortSignal separately during the `subscribe`. The `subscribe` could return a subscription, and the subscription could have the thing to cancel at that point. We have discussed the integration with iterables; the subscription, the return value here, is the thing that could hold the asyncIterable method, that lets you get an AsyncIterable back, to convert it back and forth. It seems like `subscribe` should return something, is my point.
+
+DMF: So I think you said you expect `subscribe` to return a subscription that you can cancel with. That’s not the case, and the reason is that if you are the first subscriber, we run the producer function synchronously. And if it happens to produce an unlimited number of values synchronously, then until the consumer aborts the subscription, you never get a chance to abort, because `subscribe` would not have returned yet. Instead, you pass the signal into `subscribe` up front, and you can abort the subscription if you need to, with the controller, inside any of the subscription handlers. Because that case is so obscure, there’s some discussion, an open issue, about possibly having `subscribe` return something that’s cancelable, but it’s limited in that case. Right now it doesn’t return anything, and you do pass in a signal as part of a second dictionary argument to `subscribe`.
+
+JRL: The case you are describing is that subscription itself immediately calls the producer; the producer could start pushing in an infinite loop, and you would want to cancel at that point. Doesn’t that mean the producer is in an infinite loop and we never run our code anyway?
+
+DMF: No, because if the producer is in an infinite loop—you could imagine an example where, on the call stack, I call `subscribe`, that calls the producer, and a for loop inside it keeps calling `subscriber.next()`.
+
+JRL: Why would the producer on this side ever yield so that the consumer could get values? Like, if we are in an infinite loop, it doesn’t seem like a valid use case.
+
+DMF: If it’s calling `next` synchronously in an infinite loop, the `next` handler would run, and the `next` handler could abort the subscription after it receives the number of values it wants. It could be waiting for one particular value, and once it finally receives it, it aborts the subscription. The user can constantly check; this gives them a way to tell the producer to stop producing values even if it produces them synchronously.
+
+JRL: So the subscription’s `next` callback is being invoked immediately during the `subscriber.next()` call.
+
+DMF: Yes.
+
+JRL: Okay. If we were to yield one tick before calling the producer’s producing function, does that solve the same case without forcing us to have the AbortSignal separately?
+
+DMF: It does, but then it produces—we discussed this a bit. It produces a tricky situation where sometimes the producers—yeah. You are saying we don’t call the producer callback synchronously during `subscribe`? That there’s a microtask gap or something?
+
+JRL: Right.
+
+DMF: We discussed that and rejected it, for reasons that are not inside my head right now. There’s discussion about this, though, on one of the issues. I can try and pull it up and put it in some notes or something. This was discussed as part of the ref-counted producer discussion. I don’t have all the context in my head, but we did discuss this and decided not to go that path.
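+
+[A sketch of the scenario discussed above: a synchronously emitting producer stopped from inside the `next` handler. The cutoff value is illustrative, and the `active` liveness check is assumed from the WICG spec’s Subscriber interface:]
+
+```js
+// The producer emits synchronously "forever", but stops looping as soon as
+// the consumer aborts, because `next` handlers run synchronously.
+const naturals = new Observable((subscriber) => {
+  let n = 0;
+  while (subscriber.active) { // becomes false once the subscription is aborted
+    subscriber.next(n++);
+  }
+});
+
+const controller = new AbortController();
+naturals.subscribe(
+  { next: (n) => { if (n >= 3) controller.abort(); } }, // stop after the values we wanted
+  { signal: controller.signal }
+);
+```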
+
+DE: We in TC39 have had trouble working with the web platform in the past on ergonomics. For example, we designed Temporal to work well with types that model different things in the DOM. And the principle so far, from the feedback we have gotten from WHATWG, is that we could add things to the DOM later, after initially shipping, if that makes sense, if Temporal proves itself out, betting against each other. I am wondering how this will work when applied here: for example, if we add more iterator helper methods, would we add them at the same time to Observables? Or would we just kind of see if it works out, shipping one and then adding to the other? When we make changes at the TC39 level, we try to make them coordinated at the same time. But even for older features like promises: when promises were created, there was the idea to have a `.loaded` accessor that would return a Promise, and that never happened. I am wondering how we should work together across the venues on these ergonomic issues, now that DOM is getting into them?
+
+DMF: It would be my intention to keep the set of Iterator/AsyncIterator helpers in sync with the Observable operators. I don’t have a particular reason or appetite to hold off on one that makes sense, just to see if it goes well in TC39 land. So I would like to keep these pretty up to date and pretty synchronized when possible. Which is kind of why we started with that initial list right off the bat and really didn’t deviate from it. Because, you know, there are some operators that you could do without [?] observables, but we felt the consistency between the helpers was important. So yeah, it would be my intention to keep them up to date and have enough cross-talk between the orgs to synchronize the introduction of those changes. Does that answer the question at all?
+
+DE: Yeah. That sounds perfect.
+
+DMF: Cool.
+
+CDA: MF?
+
+MF: Yeah. I am generally in favor of doing this work across venues. When developing iterator helpers, we might not be taking into account the needs of observables, and in our design there, I think we may fail to account for something that is important. So I do hope that even if it is being developed in a separate venue, we keep in communication on those topics in particular, so that we get involved in the process early and don’t forget to take your needs into account.
+
+DMF: That’s perfect. I am glad we are on the same page on that.
+
+DE: So you described the proposal as done. How could TC39 make itself a more attractive venue for discussing proposals, even across venues, before they are done?
+
+DMF: So I think—for all web APIs, or things that have started incubation in TC39?
+
+DE: We wouldn’t be interested in bringing every web API here, only ones that have clear overlap. This one had a particular history; people here were interested in it.
+
+DMF: Yeah. I think your question applies to the ones that, as you mentioned, have some significant overlap or some history in TC39. We got the feedback to shop this around to TC39 a little later than I would have liked. I tried to do this, I think it was December or October, and it didn’t work out last minute, so I am presenting this rather late. How to make TC39 an attractive venue? The rare proposal that has this larger overlap should probably come here earlier. I don’t think there’s anything about TC39 that was off-putting or unwelcoming; I feel like we just didn’t consider it early enough. And I don’t have much experience with TC39 or the process, and the move out of TC39 was, in my head, you know: we discussed it with SYG, and okay, do it over here. I think some earlier cross-talk would have been better. Maybe just informing the editors in both groups that we should talk more is the best thing. But that’s the lesson I learned from this, at least.
+
+DE: That makes sense. Even if nothing comes to mind now, if there’s something off-putting about the group or anything in the organization you can think of later, I want to figure out how to address it at the TC39 level.
+
+SYG: Let me jump in a little bit here. I think TC39 could improve reputationally, and I see proposals that try to move in that direction, though there was strong disagreement from both sides on MLS’s procedural proposal for consensus-process improvement yesterday. Those are the kinds of proposals that would move TC39’s reputation, for its process and deliberation, toward being more welcoming for web proposals.
+
+CDA: And on that note, we are at time. So thank you, Dominic. Thanks, everyone. Great discussion.
+
+DMF: Thank you so much. Appreciated.
+
+### Speaker's Summary of Key Points
+
+- DMF presented WHATWG Observables to the TC39 plenary for general feedback.
+- DMF outlined the major design decisions made over the past six months or so, and asked whether there were any general thoughts or big concerns.
+- Originally a TC39 Stage 1 proposal in 2015, Observables moved to WICG/WHATWG to integrate more closely with DOM Events.
+- There was some discussion about the history behind the original authors of the TC39 Observables proposal.
+- It has been implemented in Chrome and partially implemented in [WebKit, which wished first for feedback from TC39](https://github.com/WebKit/standards-positions/issues/292#issuecomment-2682983190).
+- The Committee raised questions on cancellation design, Observable/iterator/Signal interoperability, and possible developer confusion between Observables and Signals, but Committee reception was largely positive.
+- Feedback emphasized the need for good developer messaging, to help the community understand the complementary and distinct use cases for Signals and Observables, and for ergonomic APIs that allow easy conversion/interoperation of data from Signals to Observables and vice versa.
+- There was discussion about:
+  - How to maintain a positive relationship between WHATWG and TC39.
+  - How to encourage more cross-venue discussion for future relevant APIs earlier in the process.
+  - How to improve TC39’s reputation and make it more welcoming for relevant web proposals.
+
+### Conclusion
+
+- Positive overall feedback.
+- Discussion about how to increase early collaboration between WHATWG and TC39.
+
+## Continuation: Normative: Mark sync module evaluation promise as handled (#3535)
+
+Presenter: Nicolò Ribaudo (NRO)
+
+- [proposal](https://github.com/tc39/ecma262/pull/3535)
+- [slides](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU)
+
+NRO: So this was presented on Monday; I believe it was blocked because there was some confusion. The problem, specifically, was that this HostPromiseRejectionTracker hook exposes some promises to the host that are not exposed to JavaScript code: the promises were internal, and the host hook would expose them. I went through the various promises that get rejected in the spec, and there are a bunch of internal promises that get exposed to the host hook, including a bunch of promises in the module machinery—but this is one way they get exposed. I talked with MAH about this, and it seemed to be fine: there’s not an issue here, and this change does not make it worse. Is that correct, MAH?
+
+MAH: Yeah. From what I understood, this doesn’t make it worse; it actually makes it better. In the sense that most hosts will not synchronously notify—give userland the ability to interact with—an unhandled rejection. They will usually queue that up until the promise queue drains, and then fire events or callbacks or whatever mechanism the host has. And what I understand about this change is that it basically marks the promise as handled before the queue is drained in this case, so it effectively guarantees that a host with that behavior would not expose that internal promise to any user callback.
+
+MAH: There might be other places in the spec where we’re creating promises that are assumed to only appear internally and that may end up being exposed to user code through the rejection mechanism. However, as NRO mentioned, that’s probably something we should review more holistically, and see if there’s anything we want or should do about it.
+
+MAH: My main concern, and the reason I raised this in the first place, is that when you expose a promise object that is meant to be internal to userland, userland can go and modify the promise object. Given the way promises work, I am not convinced that couldn’t interfere with the host or spec implementation later trying to observe the resolution of the promise, and somehow cause some synchronous reentrancy. We have talked about this before in committee, and we need to be more careful with how we handle promises. Until we have a way for spec or host code to safely handle promises while guarding itself from potential user-code interference and reentrancy, we don’t want to create these situations in the first place.
+
+NRO: Yeah. All of that matches my understanding. I want to note that we are zeroing in on that: we should only tell the host the promise was handled if it wasn’t handled before. But yeah, we will review this again. So do we have consensus now for this normative change?
+
+MAH: I am definitely in favor—I think MM withheld consensus on my behalf.
+
+MM: Yeah. I was convinced; I withheld consensus specifically for MAH, so you have support from Agoric.
+
+NRO: Thank you. We have consensus now.
+
+### Speaker's Summary of Key Points
+
+- NRO and MAH discussed MAH’s concern about spec-internal promises being exposed to user code through host hooks.
+- In fact, the proposed change would reduce the risk of spec-internal promises being exposed to developers.
+- There might be other places in the specification that create promises that are assumed to only be used internally and could potentially be exposed to user code through the rejection mechanism. This will need to be reviewed more holistically in the future.
+- MAH and MM are no longer concerned about the proposal.
+
+### Conclusion
+
+- Positive consensus for the pull request.
+
+## Continuation: Reviewers for Export Defer
+
+NRO: I still need reviewers for export defer—
+
+CDA: The next topic: who would like to review export defer for Stage 2?
+
+USA: I can offer to help out.
+
+NRO: You are a colleague of mine.
+
+USA: Yeah. To help out with the technical stuff, doing the review.
+
+CDA: Looking for two Stage 2 reviewers for export defer.
+
+CZW: I can help with reviewing.
+
+CDA: Chengzhong, yes. Looking for one more.
+
+NRO: I will review one of your proposals in return.
+
+CDA: You’ve got a quid pro quo reviewing offer from Nicolò. Can we get one individual to help review the export defer spec?
+
+ACE: I will review it.
+
+CDA: All right. Thank you, Ashley.
+
+CDA: MM asks to confirm that non-extensible got 2.7. I believe that is true.
+
+CDA: It was conditionally approved pending that review, and then you got that. So it officially is at Stage 2.7.
+
+MM: Thank you.
+
+### Speaker's Summary of Key Points
+
+- Stage 2 specification reviewers were needed for the export defer proposal.
+
+### Conclusion
+
+- CZW and ACE will review. USA will also help.
+
+## Plenary conclusion
+
+CDA: All right. With that, our next topic is technically lunch, but we have nothing else scheduled for the afternoon. This brings the 107th meeting of TC39 to a close. Thank you, everyone. And we will see you at the next one, which is coming up quick, in May. Yes.
+
+USA: Yes, and please sign up for it.
+
+CDA: Are we still looking for people to volunteer for talks at the community event?
+
+USA: At the community event as well. Like, if you are even somewhat motivated to do this, please let me know; I will be happy to help out. Basically, having an idea of who would be available to do this, or for a panel for that matter, would be really helpful, because it would help us set an agenda, start inviting people, and put it up somewhere so that people can sign up. Yes.
+
+MM: What about people who are attending only remotely?
+
+USA: I don’t believe you need to register, then. Like, we can accommodate you.
+
+MM: Is the nature of the community event such that someone who is attending only remotely can still appear at the community event?
+
+USA: That’s a great point. Thank you. I hadn’t considered this, but I think it should be possible. I will confirm this with you in person—or, I mean, over DM.
+
+MM: Thank you.
+
+CDA: And another huge thanks to all the people who volunteered to help with the notes at this meeting. That would be ACE, ABO, BLY, CDA, DLM, EAO, JMN, JSC, CZW, DE, NRO, and SFC. Thank you so much.
+
+USA: Thanks, everyone. And thanks also to our transcriptionist.