
Add first-class support for differential script loading #4432

Open
mathiasbynens opened this issue Mar 13, 2019 · 83 comments
Labels
addition/proposal New features or enhancements topic: script

Comments

@mathiasbynens
Member

mathiasbynens commented Mar 13, 2019

The type=module/nomodule pattern gave developers a “clean break” to ship small, modern JavaScript bundles (with minimal polyfilling + transpilation) vs. legacy bundles (with lots of polyfills + transpiled code), which is great not just for module adoption but also for web performance. However, as more features are added to the JavaScript language, more polyfilling and transpilation becomes necessary even for these “modern” type=module script bundles.

@kristoferbaxter, @philipwalton, and I have been thinking about ways to address this problem in a future-facing way, and have explored several potential solutions. One way we could introduce a new “clean break” once a year is by adding a new attribute to <script type="module">, perhaps syntax or srcset:

<script type="module"
        srcset="2018.mjs 2018, 2019.mjs 2019"
        src="2017.mjs"></script>
<script nomodule src="legacy.js"></script>

(Note that this is just an example of what a solution could look like.) The 2018 and 2019 descriptors would then refer to feature sets that browsers recognize (in particular, they do NOT refer to ECMAScript version numbers or anything like that). For more details, read our exploration doc.

At this stage we’d like to get feedback on whether others agree this is a problem worth solving. Feedback on any particular solution (such as <script type="module" srcset> vs. <script type="module" syntax> vs. something else) is also welcome, but less important at this time.


@zcorpan zcorpan added addition/proposal New features or enhancements topic: script labels Mar 13, 2019
@domenic
Member

domenic commented Mar 13, 2019

I'm -1 on this idea for the following reasons:

  • I think the user-agent string already gives the correct amount of information here. Any additional information given is a privacy leak, so this proposal must be strictly less powerful if it is not to violate privacy invariants. (For example, if a user changes their UA string, the browser would need to change what it reports for these values too, in order to not add more bits of entropy. The exploration doc seems to say this is not desired, in the "Differential JavaScript for user-agent Buckets" section, but I assume the intent was not to add more fingerprinting surface, so in fact there would be no difference.) As such it's best to stick with just one source of data.

  • Agreement on "yearly feature sets" is not tractable. For example, it'd be ideal to ship BigInt or private field code to today's Chrome, but this proposal would not allow doing so, because "the majority of stable user-agents" do not contain those. (Or do they? See below.) Tests should be more granular than bundling together features in this way.

  • Any definition of "majority of stable user agents" is not realistic. By some definitions, that would include exactly one user agent, the most-recently-stable Chrome version. By others, it would include Chrome and Safari, excluding Firefox. By others, it would include Chrome and Firefox, excluding Safari. (It's unclear how to count Edge given recent news.) In some geographic regions, it would include UC Browser or QQ browser. This isn't even mentioning the various Chromium-based browsers which are on less-than-latest-stable-Chrome versions. In the end, only app developers have a realistic idea of what features they want to use untranspiled, and how those features sit relative to the browsers they are targeting. They should make that determination on a per-feature/per-browser basis, not based on a committee agreement of what a year represents, or what the majority of stable user agents represent.

  • Script loading is complicated and has many entry points. The exploration doc tries to thread this through <script> and <link>, but misses (in roughly descending order of importance) new Worker(), import statements, import() expressions, service workers, the varied-and-growing worklet loading entry points, importScripts(), and javascript: URLs. A unified solution would involve the server taking responsibility for the scripts based on user agent, as can already be done today, instead of speccing, implementing, and waiting for wide availability of browser-side mechanisms such as the OP, and burdening all current and future script-loading entry points with the need to support this.

  • This attempts to bake in a division between the JavaScript platform and the web platform which I think we should discourage, not encourage.

As to whether this is a problem worth solving, it depends on what you mean. I agree it's a worthwhile thing to do for authors to serve browsers code based on the syntax and capabilities they support. I think that problem is already solvable with today's technology though.

@matthewp

I like the general idea of differential loading but I don't think this solution is the right one. My main concern is how these yearly feature sets would be defined. I think it would be difficult to gain consensus on what is included.

I can also see a scenario where a Popular Website uses srcset and browsers feel pressure to lie about their support, knowing that Popular Website doesn't use feature Y (the thing they don't support) anyways.


I don't have a firm alternative, but I feel like some combination of import maps and top-level-await provide the primitives needed for differential loading. I could see a future feature of import maps that makes it a bit cleaner to do.

@mathiasbynens
Member Author

Some initial responses:

I think that problem is already solvable with today's technology though.

It may be “solvable” through UA sniffing and differential serving, but in practice this approach somehow hasn’t gotten much traction. We commonly see websites shipping megabytes of unnecessary JavaScript. To apply the technique you describe, currently developers have to implement and maintain:

  • custom tooling configuration to output multiple separate JS bundles, and
  • custom server-side UA sniffing that maps exactly to the tooling configuration

If instead, we could somehow standardize on some idea of “feature sets”, then browsers and tooling could align around that, and reduce this friction altogether. Developers could then perform a one-off change to their build configuration and reap the benefits.
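To make that concrete, the kind of logic developers currently have to write and keep in sync with their build output looks something like this (the bundle names and version cut-offs are purely illustrative, and real UA strings are far messier):

```javascript
// Illustrative sketch of today's server-side UA sniffing: map a
// User-Agent string to one of several pre-built bundles. The bundle
// names and version cut-offs are hypothetical, and real UA strings are
// much messier (e.g. Edge UAs also contain "Chrome/").
function pickBundle(userAgent) {
  const chrome = /Chrome\/(\d+)/.exec(userAgent);
  const firefox = /Firefox\/(\d+)/.exec(userAgent);
  if (chrome && Number(chrome[1]) >= 71) return '2018.mjs';
  if (firefox && Number(firefox[1]) >= 64) return '2018.mjs';
  if (chrome || firefox) return '2017.mjs';
  return 'legacy.js'; // anything unrecognized gets the ES5 bundle
}
```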

This attempts to bake in a division between the JavaScript platform and the web platform which I think we should discourage, not encourage.

Which division are you seeing? There’s no reason npm and Node.js couldn’t adopt the same “feature sets” we standardize on.

Script loading is complicated and has many entry points.

Why do other entry points such as dynamic import() or javascript: URLs need to be supported? The tooling that generates the output bundles would know whether import() is supported or not based on the feature set (e.g. 2019, or whatever kind of identifier we come up with) that was used to generate it. As such, the tool could decide whether or not to transpile/polyfill import() for that particular bundle.

I think it would be difficult to gain consensus on what is included.

It would depend on the chosen process. We can make this as complicated or as simple as we want. It could be as simple as just picking a date. The date then maps to a list of latest versions of stable browsers at that point in time. That list of browsers then maps to a set of features that are fully supported (by some heuristic, e.g. 100% Test262 pass rate for ECMAScript-specific features). There’s no point in arguing about which features should be included if we can just look at browser reality and figure it out from there.
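To sketch the idea (with invented support data; a real process would derive this from Test262 results for each stable release at the chosen date):

```javascript
// Invented per-browser support data for the chosen date. Each list is
// the set of features that browser fully supports by some heuristic
// (e.g. 100% Test262 pass rate). All data here is made up.
const supportAtDate = {
  chrome: ['async-iteration', 'rest-spread', 'bigint'],
  firefox: ['async-iteration', 'rest-spread'],
  safari: ['async-iteration', 'rest-spread'],
};

// The feature set for the year is the intersection: everything that
// every listed stable browser fully supports.
function yearlyFeatureSet(support) {
  const lists = Object.values(support);
  return lists[0].filter((feature) => lists.every((l) => l.includes(feature)));
}

// yearlyFeatureSet(supportAtDate) → ['async-iteration', 'rest-spread']
```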

@domenic
Member

domenic commented Mar 13, 2019

If instead, we could somehow standardize on some idea of “feature sets”, then browsers and tooling could align around that, and reduce this friction altogether. Developers could then perform a one-off change to their build configuration and reap the benefits.

I don't think this alignment necessitates new browser features.

Which division are you seeing?

The proposal includes language features, but not web platform features.

Why do other entry points such as dynamic import() or javascript: URLs need to be supported?

Because they are other ways of loading scripts, and if the problem statement is differential script loading, then you need to ensure those also allow differential script loading.

It would depend on the chosen process. We can make this as complicated or as simple as we want. It could be as simple as just picking a date.

As I tried to point out, it is not that simple. A concept such as "latest versions of stable browsers" is itself super-fraught.

@mathiasbynens
Member Author

Which division are you seeing?

The proposal includes language features, but not web platform features.

There’s no reason it cannot include web platform features.

@bkardell
Contributor

Given all of about 15 minutes' worth of thought I am a little hesitant to share anything like a 'real' opinion here, but my gut reaction was kind of similar to what @domenic said, except that I fall way short of

I don't think this alignment necessitates new browser features.

That's not to say "it does" either, just that I also fully accept @mathiasbynens' general premise that what "can" technically be done doesn't seem to have caught on and is probably more challenging than it should be, but I don't know how to fix that either.

@jkrems

jkrems commented Mar 13, 2019

FYI: In the node modules working group, we're currently exploring extending the existing import map alternatives pattern to support this kind of environment matching: jkrems/proposal-pkg-exports#29

@kristoferbaxter

I think the user-agent string already gives the correct amount of information here.

The User-Agent is usable for many scenarios to provide a varied document or script response, but not all scenarios. For instance, within a Signed HTTP exchange, how would an author vary the response for either a document or subresource script resource based on the user-agent header? When hosting a simple static document, how would the document author vary a script source based on user-agent?

Additionally, User-Agent requires document authors to correctly parse and leverage the information within. There are efforts to reduce the complexity of this burden, but it's still not clear if they will happen. Allowing the User-Agent to provide a clear signal (via Syntax request header) and use the exact same logic on static documents would open this functionality up to a much larger audience.

This proposal attempts to provide a similar mechanism as srcset does for images, which could arguably be mostly redundant if a document author uses Client Hints.

Any additional information given is a privacy leak, so this proposal must be strictly less powerful if it is not to violate privacy invariants. (For example, if a user changes their UA string, the browser would need to change what it reports for these values too, in order to not add more bits of entropy. The exploration doc seems to say this is not desired, in the "Differential JavaScript for user-agent Buckets" section, but I assume the intent was not to add more fingerprinting surface, so in fact there would be no difference.) As such it's best to stick with just one source of data.

This is an interesting point. The intention is that the syntax version would remain stable between browser versions, until a new browser version passed the set of defined tests and could move the value to the next revision. Similar to the Accept header, this value would change relatively infrequently and fully align with the reported User-Agent string changing. There is no scenario where the Syntax value would change outside of a User-Agent change. I'm struggling to understand where this adds additional bits of entropy. Perhaps we could use Accept as a similar request header for comparison?
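Purely as an illustration of the shape this could take (the header name and value format are assumptions based on the exploration doc, not a settled design), a request/response pair might look like:

```http
GET /app.mjs HTTP/1.1
Host: example.com
Syntax: 2018

HTTP/1.1 200 OK
Content-Type: text/javascript
Vary: Syntax
```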

Agreement on "yearly feature sets" is not tractable. For example, it'd be ideal to ship BigInt or private field code to today's Chrome, but this proposal would not allow doing so, because "the majority of stable user-agents" do not contain those. (Or do they? See below.) Tests should be more granular than bundling together features in this way.

This proposal doesn't attempt to reduce transpilation to zero for specific User-Agents. If a document author wanted to specifically ship code that worked in Chrome alone, they would want to use User-Agent parsing. The "yearly feature set" is a stake in the ground, a compromise between shipping the absolute latest version of syntax and transpiling everything to ES5.

Any definition of "majority of stable user agents" is not realistic. By some definitions, that would include exactly one user agent, the most-recently-stable Chrome version. By others, it would include Chrome and Safari, excluding Firefox. By others, it would include Chrome and Firefox, excluding Safari. (It's unclear how to count Edge given recent news.) In some geographic regions, it would include UC Browser or QQ browser.

A goal of this proposal is to reduce the complexity in safely shipping differential JavaScript. This would require browser vendors working with one another to establish the items included in each yearly revision. However, I and other Web Developers would hope this is achievable... the goal is to make documents use more of the code they were authored with. If a User-Agent doesn't pass the defined set of tests for a yearly revision, they should not report that version in the Syntax request header, nor use a corresponding value in a HTMLScriptElement.syntax attribute.

Script loading is complicated and has many entry points. The exploration doc tries to thread this through <script> and <link>, but misses (in roughly descending order of importance) new Worker(), import statements, import() expressions, service workers, the varied-and-growing worklet loading entry points, importScripts(), and javascript: URLs.

All of the above items are addressable with support added to HTMLScriptElement, HTMLLinkElement, and the Syntax request header. The expectation is once a HTMLScriptElement chooses a syntax version, the resource it chose is responsible for leveraging the correct references to its dependencies (Workers, import statement, import expressions, service workers, importScripts() and javascript: URLs).

Would specifying the behaviour for these items independently (as done with HTMLScriptElement and HTMLLinkElement) address these concerns?

This attempts to bake in a division between the JavaScript platform and the web platform which I think we should discourage, not encourage.

Not intentional. This proposal starts with a smaller target than the entire web platform, but no division is intended.

@littledan
Contributor

I agree it's a worthwhile thing to do for authors to serve browsers code based on the syntax and capabilities they support. I think that problem is already solvable with today's technology though.

There's more than one way to get at this sort of information; I wonder what you'd recommend. I like the idea of making the decision on the client side, as import maps does. I've heard it can be impractical to deploy UA testing in some scenarios.

If inefficient JavaScript is being served today, I'm wondering why. Is it not efficient enough to do the tests? Are tool authors unaware of the technique? Is it impractical to deploy for some reason? I bet framework and bundler authors would have some relevant experience.

@daKmoR

daKmoR commented Mar 13, 2019

that seems too ambitious a solution... (agreeing on which features are in which "group" seems way too hard, and it can quickly differ depending on which technology you are using)
imho you will always need to have some logic like

if (this and that feature is supported) { 
  load(this); 
} else if (other and more is supported) {
  load(other); 
} else if (...) {}

imho it's about having a way of getting these checks auto-generated into your index.html by bundlers.
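Concretely, the kind of auto-generated check being described could look like this (the bundle names and the specific syntax probes are illustrative; a real bundler would emit probes matching its own output targets):

```javascript
// Sketch of a bundler-generated loader. Each probe uses the Function
// constructor so that unsupported syntax throws at parse time instead
// of breaking the page. Bundle names are illustrative.
function supportsSyntax(src) {
  try {
    new Function(src);
    return true;
  } catch (e) {
    return false;
  }
}

function chooseBundle() {
  if (supportsSyntax('async function* f() {} const { ...rest } = {};')) {
    return './app.2018.js'; // async iteration + object rest/spread
  }
  if (supportsSyntax('async function f() { await 0; } 2 ** 3;')) {
    return './app.2017.js'; // async/await + exponentiation
  }
  return './app.legacy.js'; // fully transpiled fallback
}

// The generated index.html snippet would then do: import(chooseBundle());
```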

@jridgewell

I think the user-agent string already gives the correct amount of information here. Any additional information given is a privacy leak, so this proposal must be strictly less powerful if it is not to violate privacy invariants... As such it's best to stick with just one source of data.

The userAgent string does provide a ton of information, but it's inscrutable. Browsers add additional text to fool UA sniffers, and adding any additional significance (trying to determine JS support) based on it is going to cause errors.

Agreement on "yearly feature sets" is not tractable. For example, it'd be ideal to ship BigInt or private field code to today's Chrome, but this proposal would not allow doing so, because "the majority of stable user-agents" do not contain those.

It's impossible to solve this with a lowest-common-denominator approach. So you can either ship (and maintain!) multiple highly-specialized builds to each browser, or you can ship and maintain LCD builds.

Having just a yearly LCD build seems like an excellent middle ground compared to compile-everything-to-es5 or every-browser-version-gets-its-own-build.

Any definition of "majority of stable user agents" is not realistic... They should make that determination on a per-feature/per-browser basis, not based on a committee agreement of what a year represents, or what the majority of stable user agents represent.

I agree. This is the most hand-wavey part of the design, and will probably make it more difficult for devs to determine what needs to be done to generate an LCD build.

But what if we change it a bit? Instead of making browser vendors (or any standards body) determine what needs to be natively supported for Chrome to include "2018" support, we make it the build year. Every Chrome/Safari/Firefox/X built in 2018 advertises "2018". The community can then decide what 2018 means in an LCD build.

Eg, Chrome XX shipped in 2018 and advertises "2018". Firefox YY shipped in 2018 and advertises "2018". We know YY supports some feature (say, Private Fields) that XX doesn't. So, we know that if we want to ship a 2018 build that all 2018 browsers can understand, we need to transpile Private Fields. If Chrome adds support for Private Fields in 2018, the transpile is still necessary, because the 2018 LCD doesn't support it. By the time 2019 rolls around, everything supports Private Fields, and we know we no longer need to transpile it in the 2019 LCD.

Script loading is complicated and has many entry points. The exploration doc tries to thread this through <script> and <link>, but misses (in roughly descending order of importance) new Worker(), import statements, import() expressions, service workers, the varied-and-growing worklet loading entry points, importScripts(), and javascript: URLs.

The 2018 build should be responsible for only loading 2018 build files. The 2017 build should be responsible for only loading 2017 build files. What's needed is the way to load the build's entry point, not the way for the build to load other files from the same build.

@fchristant

fchristant commented Mar 13, 2019

I very much like the idea at a conceptual level. In a way it is feature grouping. I believe that in today's browser landscape, most developers would conceptually divide the browsers they support in 2 levels, 3 at best.

I share the concern of others on how you would possibly define these levels in a democratic and neutral way, but I'm not pessimistic about it. For the simple reason that if it would be skewed to any particular interest or be highly opinionated, it still does not necessarily harm the ecosystem, as you could just not use it and keep using low level feature detection. So it seems a progressive enhancement to me.

I would imagine it as feature group detection, not just at the JS module level, also at CSS level and inline JS level. So anywhere in the code base you would be able to test for it (so also via @supports). This idea is wider in scope than the proposal, and would only work if all browsers have support for this type of testing, which may be a showstopper, I realise.

If feature grouping would be a thing, organisations can simply decide to support a "year" (or 2,3) instead of the infinite matrix of possible browsers, and the individual features they do or do not support. It could get rid of a whole lot of looking up what is supported, and a whole lot of low level feature detection. It would greatly simplify feature detection code and it would be far simpler to retire a level of support. Test for 3 things, not 60, to sum it up.

Another benefit as a side-effect: perhaps it would streamline coordination of feature delivery across browsers. Meaning, if browser 1 ships A yet browser 2 prioritizes B, feature A is not usable by developers without a lot of pain. A great example of coordinated delivery is of course CSS Grid support.

Whilst being dreamy, I might as well go completely off-track: being able to define such a feature group yourself, to bypass the problem of trying to define one for the world. It's inherently an opinionated thing. Don't take this one too seriously though; I haven't considered implementation at all.

@Rich-Harris

The problem might technically be solvable currently, but feature detection based on user agent strings runs counter to well-established best practices. It also puts the implementation burden on application developers rather than browser vendors.

@kristoferbaxter already raised this, but I think it's worth reiterating — a lot of sites are entirely static, and if anything frameworks are encouraging more sites to be built this way. That rules out differential loading based on user agent as a general solution.

So without speaking to the merits of this particular proposal, which others are better qualified to judge, it does address a real problem that doesn't currently have a solution.

@clydin

clydin commented Mar 14, 2019

Conceptually and at a general level, a feature such as this will most definitely be valuable as the ECMAScript specification advances.

However, the use of the srcset concept makes several existing attributes ineffective or incompatible, a main one being security-related (integrity). The developer should not be prevented from using existing attributes that provide a tangible security benefit in order to leverage differential loading. Yes, they could be added to the srcset attribute, but at what point does srcset become its own language, and the originating concept of HTML as a markup language become lost? How many other attributes would need to be added, now and in the future, to maintain feature parity? The core of this issue is that the srcset concept violates the current precondition that a script element references a single resource. Also, the nomodule/type=module method has already set the stage for the use of multiple script elements to describe a script resource's variants.

As a further consideration, the picture/source concept may be more fitting than the srcset concept. In essence, there is one logical resource and one or more variants with rules on their use, all defined via markup and leveraging existing elements as much as possible. This is also relevant to behavioral concerns: the former is intended to be explicit about which resource should be used, rather than the latter's browser-decides model. Displaying the wrong-sized image may make the site look odd, but executing the wrong script will cause the application to fail.

On the topic of the feature sets, the years already have a well-defined meaning (i.e., they map directly to the ECMAScript specification). Creating a parallel definition will most likely lead to developer confusion and broken applications, as the distinction would not be obvious. Unfortunately, using a different categorization system (levels, for instance) would essentially have the effect of creating an alternate specification. This could also lead to confusion and potential bifurcation of the standards process. Strict adherence to the specification versions may be the only long-term viable and supportable option.

I think the main draw of a feature such as this would be to leverage more advanced syntactical capabilities which would provide greater potential for reduced code size and increased performance. At a minimum allowing for declarative feature detection of capabilities such as dynamic import or async iterators would be a boon.

@mathiasbynens
Member Author

@clydin I agree Subresource Integrity should be supported somehow, eventually. I don't think lack of SRI support should block an initial version of this proposal to land (just like it didn't block import()). If we were to continue down the path of <script type=module srcset>, then ResponsiveImagesCG/picture-element#255 is the relevant discussion.

@robpalme

A note on naming: could we call this Differentiated script loading rather than Differential?

The latter initially made me think this involved sending script patches over the wire.

@matthewp

matthewp commented Mar 14, 2019

@kristoferbaxter

The expectation is once a HTMLScriptElement chooses a syntax version, the resource it chose is responsible for leveraging the correct references to its dependencies (Workers, import statement, import expressions, service workers, importScripts() and javascript: URLs).

This requires the script being external, correct? What about inline scripts?

<script type="module">
 // What syntax am I?

 // What syntax is this worker?
 new Worker('./worker.js');
</script>

@iamakulov

iamakulov commented Mar 14, 2019

To expand on @daKmoR’s point (#4432 (comment)): what if we target features instead of years? Just like CSS does with @supports.

It might look like this:

<script
  src="app.bundled-transpiled.js"
  support="
    (async, dynamic-import) app.modern.js,
    (async, not dynamic-import) app.bundled.js
  "
></script>

Pros:

  • Easy to use in static HTML. And easy to generate with bundlers/other tools.

  • Gives enough independence to browser engines. This removes the burden of browser maintainers meeting every year and deciding what to include into each yearly snapshot.

  • More reliable. There’s a high chance Chrome and Firefox may ship slightly different implementations of the 2018 descriptor, and users won’t be able to rely on it. It’s way less likely if descriptors describe specific features and not feature packs.

  • Works well if a browser decides to revoke a feature (like it happened with SharedArrayBuffer). If a browser revokes a feature, it would just start returning false for the corresponding supports check. With 2018/2019/etc, browsers would have to bump the version (as described in the exploration doc).

Cons:

  • Requires a lot of work in the beginning to set up keywords for existing features. To reduce the work, the committee could use the existing list of features in the Kangax compat table. Further maintenance would be easier.

  • Verbose. This won’t create a real issue if the descriptor list is generated automatically (nobody would edit it, so verbosity won’t complicate anything). This might be an issue if the descriptor list is created by hand; but from my experience, in most apps, you typically just need to detect a couple key features (like async or import) and won’t care about others.

@clydin

clydin commented Mar 14, 2019

While I agree on the utility of this feature and that getting it in the hands of developers sooner rather than later would be useful, I don't think it is prudent to make security related concerns an afterthought for a design that changes the semantics of code loading and execution.

The integrity attribute is also one of multiple current and future attributes that would potentially need to be added to the srcset syntax. srcset would most likely need to become a DSL (CSP like?) to fully encompass the feature set of the script element for each referenced resource. At which point the script element has essentially become duplicated in a different form. And although most likely not a major concern, tooling (parsers, static analyzers, generators) would need to add support for this new DSL as well.

As an alternative, what about a markup based solution? (naming/element usage for illustrative purposes):

<script type="differential"> <!-- maybe new element <scriptset>? -->
  <script type="module" syntax="2019" nonce="xxxxxxx">
    // I'm inline ES2019
  </script>
  <script type="module" syntax="2018" src="2018.js" integrity="sha384-xxxx" crossorigin="anonymous"></script>
  <script type="module" syntax="2017" src="2017.js" referrerpolicy="no-referrer"></script>
  <script nomodule src="legacy.js"></script>
</script>

Allows full reuse of the existing script element with semantics similar to picture (the first satisfying script element is used). This also allows for inline scripts. The script element with the syntax attribute could even potentially be used standalone. I think using an attribute name of ecma or standard would also be more explicit as to its purpose (assuming the threshold was specification compliance). The supports concept with individual feature flags from the above post could also be an additive (or replacement) capability in this scenario as well.

@keithamus
Contributor

I don't think this is a problem worth solving.

On one hand I think it is easy enough to solve this for people who want to today; which I imagine to be a tiny fraction of developers; I imagine most folks will continue to use Babel as a compilation step, a huge portion of these folks will only output one target (probably whatever babel-preset-env gives them), the subset of users who do end up compiling to multiple targets are probably in single digit percentages, and probably have the engineering bandwidth to implement their own solutions in JS using feature detection with dynamic imports. I think it is reasonable enough for these folks to do something like the following:

if (featureDetectEs2018()) {
  import('./index.2018.js')
} else if (featureDetectEs2017()) {
  import('./index.2017.js')
}
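For illustration, the helpers above could be implemented with parse-time checks via the Function constructor (a sketch, not a battle-tested detector; the probe snippets are my own choices of year-specific syntax):

```javascript
// Hypothetical implementations of the helpers used above. Each probe
// parses a snippet that only engines with that year's syntax accept.
function tryParse(src) {
  try {
    new Function(src);
    return true;
  } catch (e) {
    return false;
  }
}

function featureDetectEs2018() {
  // object rest/spread and async generators are ES2018 syntax
  return tryParse('async function* f() {} const { ...rest } = {};');
}

function featureDetectEs2017() {
  // async/await landed in ES2017
  return tryParse('async function f() { await 0; }');
}
```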

Perhaps effort would be better put into a supports style interface ala CSS @supports which can be given some kind of feature set - thereby meaning less work for a roll-your-own solution.

My second point which coincides with a few commenters here is that there really is no way of knowing what something like 2018 even means in terms of support. But I'm going to go a little further to illustrate with some concrete examples:

Issues like the above Edge bug lead me to my next major concern with this; what happens if bugs are discovered after the browser begins shipping support for this years syntax? What recourse do I have if Edge begins optimistically fetching my es2018 only to trip up on bugs it has? If I rolled my own loader (see code example above) I could mitigate this problem by adding more feature detection, what can I do with html attributes to prevent this?

@kristoferbaxter

@kristoferbaxter

The expectation is that once an HTMLScriptElement chooses a syntax version, the resource it chose is responsible for referencing the correct versions of its dependencies (Workers, import statements, import expressions, service workers, importScripts(), and javascript: URLs).

This requires the script to be external, correct? What about inline scripts?

<script type="module">
 // What syntax am I?

 // What syntax is this worker?
 new Worker('./worker.js');
</script>

Quite a good point. Making the value of the supported syntax available to the script would be a possibility. I'll spend some time thinking about this further.

@littledan
Contributor

If folks are interested in more granular feature testing, in the style of @supports, I'm wondering if it might make sense to do something based on import maps.

@nicolo-ribaudo
Contributor

nicolo-ribaudo commented Mar 14, 2019

Issues like the above Edge bug lead me to my next major concern with this; what happens if bugs are discovered after the browser begins shipping support for this years syntax? What recourse do I have if Edge begins optimistically fetching my es2018 only to trip up on bugs it has? If I rolled my own loader (see code example above) I could mitigate this problem by adding more feature detection, what can I do with html attributes to prevent this?

I also have this concern. For this reason, I think that 2018 should only mean "this browser version was released in 2018" and not "this browser supports es2018": an engine can never be 100% sure that it is correctly implementing every edge case, and "I support es2018" may be a false claim without the browser knowing it.

Using @babel/preset-env we can easily transpile code down to what was supported in 2018, whereas a browser telling us that it thinks it supports es2018 doesn't tell us exactly what we should transpile.
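The "transpile down to 2018" target mentioned here could be expressed as an ordinary browserslist query. A minimal sketch of such a config; the version cutoffs below are an illustration of "evergreen browser versions released by early 2018", not an official year-to-version mapping:

```javascript
// babel.config.js (sketch)
module.exports = {
  presets: [
    ['@babel/preset-env', {
      // Roughly the evergreen releases available at the start of 2018;
      // a real project would maintain or generate this list.
      targets: 'chrome 64, firefox 58, safari 11.1, edge 16',
    }],
  ],
};
```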

@theKashey

If the scripts were in different files (with file-name patterns), it would be a pain to import them or create Workers. With code splitting in mind, it would also be a pain to create N "full" bundles without any name intersection. And just very slow.

If the scripts were in different directories, that would solve some problems: webpack and parcel support publicPath out of the box, and esm supports relative imports as well.

<script type="module"
        srcset="2018/index.js 2018, 2019/index.js 2019"  
        src="2017/index.js"></script>
<script nomodule src="index.js"></script>

^ It's the same index.js in all the cases, with the same names and the same structure.

Plus, it's much easier to generate these directories: just create the first bundle, keeping all language features, and then transpile the whole directory down to a lower syntax, which can be done much faster and more safely. Here is a proof of concept.

@jridgewell

Perhaps effort would be better put into a supports style interface ala CSS @supports which can be given some kind of feature set - thereby meaning less work for a roll-your-own solution.

I think CSS @supports is actually too granular. I.e., are we going to ship every permutation of (x, y, z) to browsers to hit the optimal path for all of them?

And even if we make it less granular (@supports 'es2017'), we run into browser bugs. Safari had a broken async/await implementation in 11. Now it has a broken tagged template literal implementation in 12. But I'd imagine they're still going to advertise es2017 support, and they certainly aren't going to ship a new browser patch version to disable it.

Tying this to a specific set-in-stone supports list is the wrong way to approach this. Instead, we need a way to easily group browsers into a category, and let the community decide what is supported by that category. The category should be granular enough that we can get reasonable "clean breaks" in feature support (e.g., how module implies a ton of ES6 support), but not so granular that it is valuable for fingerprinting.

That's why I think a browser's build year is an excellent compromise. Having a full year as a single category means there's not much information to fingerprint (even Safari's privacy stance allows the OS version in the userAgent, which roughly corresponds to a yearly release cycle). And if we find out that a browser has a broken implementation of a feature, the community can adjust what level of support (in both ES and web platform features!) the build year implies.

Plus, it'll be soooo damn easy for Babel to spit out "2018", "2019", and "2020" builds using @babel/preset-env. This is a "build it and they will come" moment. There may not be many people taking advantage of this now (through either the module/nomodule break or userAgent sniffing), but if we add a feature that makes it easy, we can teach it to everyone as the best way to ship less code.

@domenic
Member

domenic commented Mar 15, 2019

I'd definitely support Babel or other parts of the tooling ecosystem working on producing babel-preset-env configurations based on browser release year. Then someone could invest in the server-side tooling to find the release year in the UA string and serve up the appropriate Babel output. That makes sense as the sort of drop-in configuration change being proposed here, and best of all, it works with existing browsers, so you can use it today in all situations.
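A minimal sketch of that server-side piece, assuming a hand-maintained UA-to-year table. The version cutoffs below are rough illustrations (and only Chrome is handled), not a real UA database:

```javascript
// Derive an approximate release year from the User-Agent and pick a
// pre-built Babel output.
function releaseYear(ua) {
  const m = /Chrome\/(\d+)/.exec(ua);
  if (m) {
    const major = Number(m[1]);
    if (major >= 72) return 2019; // Chrome 72 shipped in January 2019
    if (major >= 64) return 2018; // Chrome 64 shipped in January 2018
    return 2017;
  }
  return 2017; // conservative fallback for unrecognized agents
}

function bundleFor(ua) {
  return `bundle.${releaseYear(ua)}.js`;
}

// A request handler would respond with the file named by
// bundleFor(req.headers['user-agent']) and set `Vary: User-Agent`,
// with the cacheability caveats raised elsewhere in this thread.
```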

@mikesherov

mikesherov commented Mar 17, 2019

Sorry, thought I was done with this thread, but dinner gave me more to think about. Assuming the principle behind this proposal is sound (after all, @jridgewell certainly has sound, convincing arguments), are there better solutions based on this premise?

Case #1: FF releases in January 2019 without a feature that would raise the lowest common denominator. In March 2019, they release a feature that does raise the LCD. Because the 1/19 version of Firefox will load a "2019" bundle, all browsers miss out on untranspiling that feature until 2020, when it is finally unambiguously supported by browsers released that year. Wouldn't a YYYY-MM-DD format be preferred?

Case #2: a new-fangled browser enters the market and doesn't support the current LCD of 2019. Do we expect tool authors to down-level the meaning of 2019? Or do we pressure the new-fangled browser to advertise a lower release year? It seems there is still a scenario in which a browser might want to lie about support to stifle competition.

@theKashey

That does not make sense. So, it's 2020 New Year's Eve, and just after the fireworks... I should go and reconfigure and redeploy all the sites? And test the result, for sure. What would I get from it?

  • What was the point? - Ship less code, and ship code which is faster. No more, no less.
  • What is the problem? - We are shipping one common denominator - ES5 - for all the modern "2%" browsers plus IE11.
  • What is the real problem? - We are not distinguishing "modern" browsers from "all browsers possibly in use".
  • Would the original proposal solve the problem? - Mmmm... Yes.
  • How would it solve it? - By distinguishing bundles per possible browser "branch".
  • What are the proposed "browser branches"? - 2005, 2015, 2025.
  • What are the real "browser branches"? - Branches. IE, Blink, React Native. Dead and alive.
  • What if we create clusters of "browser branches"? - Old (IE), specific (React Native), modern (all), bleeding-edge (also all).

So:

  • we only need the lowest common denominator, proven to work everywhere - that's ES5
  • we need optimized builds for specific platforms, like React Native, which can't be used on the "web". So - skip it.
  • we need a bundle for "modern" browsers, and "modern" browsers usually update frequently - at least once a year.
  • we might need a bleeding-edge bundle for the super hot stuff, available in nightly releases and broken in Safari. But... do we actually need it?

Looks like it's still just two bundles to ship - the IE11 and ESModules targets. Fine-tuning for a specific language/browser version is a cool feature, but is it actually needed? Would it help ship smaller bundles? Meaningfully smaller? Would it make the code faster? Meaningfully faster?

That is the question to answer - would fine-grained tuning and nit-picking of polyfills solve anything, or could it be a bit more coarse? We are not fighting here for kilobytes; we have to fight megabytes.

@MorrisJohns

MorrisJohns commented Mar 17, 2019

How many years have we been trying to use feature detection and avoid versions? A year or ECMA edition is just another version number.

If feature detection is needed for scripts, then it should be modelled on the media attribute, the CSS equivalent of this, e.g. <link href="mobile.css" rel="stylesheet" media="screen and (max-width: 600px)">, and window.matchMedia() if queried from script.

I would hope we don't introduce yet another mini-language for detecting script features.

Also note that Firefox used to have a similar <script language="javascript" type="application/javascript;version=1.7"> feature: https://bugzilla.mozilla.org/show_bug.cgi?id=1428745

(Aside: maybe CSS media queries would be useful for script loading too - one major distinction in our bundles is small screens. If I could bundle mouse and touch support separately, I probably would.)

@theKashey

theKashey commented Mar 18, 2019

Declarative feature detection is partially a dead end. Combinatorial explosion: there is no way you will pre-create the countless bundles you might decide to load. Every new condition doubles the existing bundle count.

So client-side feature detection?

A big problem with feature detection is the location of the detection: you have to ship the feature-detection code to the browser first; only then can you load the code you need.

  • get an initial bundle, with import maps for example
  • detect all features/modules you need
  • load the code. "When?"

The problem: the actual code load is deferred by the whole time needed to load, parse, and execute the initial bundle. For me in Australia, with servers usually located in the USA (plus some extra latency from a mobile network), that would be at least 500ms.
500ms on 4G is enough to download a few megabytes, while feature detection might strip much less. I.e., it might make things even worse.

PS: We were using, and are still using, client-side feature detection on yandex.maps - just check the network tab and notice the lag between map.js (modules table) and combine.js (modules data) - it can be a big problem for first-time customers without modules-table caches.

@gibson042
Contributor

There's nothing stopping user agents from downloading and even starting to parse files that might never be needed (and in fact preloading is already similar in this respect). If use is contingent upon JavaScript evaluation, as I believe makes sense, then they can make an educated guess and confirm once the page catches up (which in the case of thoughtful authors using simple tests delivered inline and preceding other downloads would be practically immediately).

@littledan
Contributor

Lots of fascinating discussion above. I'm not sure what the right combination is between coarse presets, fine-grained declarative feature tests, imperative feature tests in JavaScript, or UA sniffing, but I think we can figure this out.

At a high level, I'm wondering, should the discussion about this proposal and about import maps somehow be connected? These are both about feature testing and deciding which code to load. What is it about built-in modules that make them make more sense to check individually, whereas JavaScript syntax features would be checked by year of browser publication?

@andraz

andraz commented Mar 18, 2019

my addition to the proposal:

Add a hash of the source file (dynamically calculated when the page is served) as a srchash= parameter.

The browser would calculate this hash for all cached files when saving them.

When a cached hash matches srchash, a library (for example jQuery) that would otherwise be loaded from a different src URI can be reused on the new site without downloading it.

This would in practice merge the performance of all CDNs worldwide.

@mathiasbynens
Member Author

@andraz That seems like a separate, orthogonal proposal.

@littledan
Contributor

@andraz Unfortunately, sharing caches between origins leads to privacy issues, letting one origin learn which resources have been fetched by another origin (by measuring timing). I'm not sure where to find a good link for this, but it's why some browsers use a so-called double-keyed cache.

@jridgewell

And yearly signifiers raise the question: is that granular enough? Perhaps quarterly, to match browser release cycles more closely?

That's one possibility, but the more granular we get to the more valuable it is as a fingerprint. Yearly arbitrarily seemed good enough.

I personally wouldn't tag ~10ms (on 4x CPU slowdown) as a serious performance hit, but assuming it is, have we considered the equivalent perf hit from the payload bloat of overdelivering transpiled code to browsers that don't need it?

The 10ms was just two syntax tests to see if they'd throw or not. It rose to 18ms when inspecting userAgent for the template test (not even adding any tagged-template tests themselves, just the UA sniff). And note, this is ms blocking HTML parse and render, and blocking the start of a high-priority download.

I can definitely see delivering over-transpiled code as a negative. But this code is parsed and compiled off-thread, so it won't block the initial paint. I'd personally prioritize minimizing first paint with over-transpiled code rather than delaying first paint to decide on the perfect bundle. I can't back this up, but maybe even time to first interactive would be faster, since a declarative approach won't block the start of the request (the trade-off being start of request vs. smaller parse).

FF releases in January 2019, without a feature that would raise the lowest common denominator... Because the 1/19 version of Firefox will load a "2019" bundle, all browsers miss out on untranspiling that feature until 2020, when it is finally unambiguously supported by browsers released that year.

Yes, this is the biggest trade-off we'll have to make. But I see it as being worth it for the chance to ship any new syntax at all. Right now, the easiest break I have between old and new code is just the module/nomodule detection.

(And to make it clear, I would still feel this way even if Firefox shipped that new feature in February, after not having it in a January release)

new fangled browser enters the market, and doesn’t support the current LCD of 2019. Do we expect tool authors to downlevel the meaning of 2019?

Thinking about this, I'd equate it to "what if a new browser shipped with module/nomodule, but without any other ES6 syntax?" I'm not sure I would start transpiling my module build down to ES5 plus module imports. As the years progress, the current LCD becomes par for the course. If a new browser doesn't meet it, they risk developers choosing not to support them.


There's nothing stopping user agents from downloading and even starting to parse files that might never be needed... If use is contingent upon JavaScript evaluation, as I believe makes sense, then they can make an educated guess and confirm once the page catches up

Wouldn't this double/triple/quadruple the amount of JS downloaded? Taking FB as an example, its bundle is already 140kb. Even taking the smaller bundle sizes into account, I'd imagine we'd be downloading 400kb, 200-300kb of which would be inert. That seems bad, especially for users with low bandwidth.


At a high level, I'm wondering, should the discussion about this proposal and about import maps somehow be connected? What is it about built-in modules that make them make more sense to check individually, whereas JavaScript syntax features would be checked by year of browser publication?

I feel like import maps is such a generic underlying tech that it could do both yearly-LCD and feature-tests relatively easily. 😃

I would be fine if we didn't add syntax/srcset/whatever-attr-name-X to <script>s and instead just left this to import maps to decide for us. Just the discussion of differentiated builds happening is exciting.

@jridgewell

jridgewell commented Mar 18, 2019

Wouldn't a YYYY-MM-DD format be preferred?

Another thought I had is about the entropy of this. Lower entropy translates pretty directly into higher cacheability. One of the explicit reasons I can't use the User-Agent header is that its entropy is too high! Google's edge-cache infrastructure (and I'd imagine other intermediate caches) won't even touch a response with Vary: User-Agent.

But something simple like a Browser-Year: 2019 header? I could easily vary on that, allowing us to push file selection into the server's responsibility instead of the browser's. If we make it more granular, like 2019-03 or 2019-03-01, the responses start to lose cache hits (I'd imagine every browser version would get its own cache key). But this is all a hypothetical "what if we went with a header instead?".
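The header idea can be sketched concretely. Note that Browser-Year is a hypothetical header that no browser sends; the bundle names are invented for illustration:

```javascript
// Server-side selection on the hypothetical `Browser-Year` request header.
function chooseBundle(headers) {
  const year = Number(headers['browser-year']); // NaN if absent
  if (year >= 2019) return '2019.mjs';
  if (year >= 2018) return '2018.mjs';
  return '2017.mjs'; // also covers unknown or absent values
}

// The response would carry `Vary: Browser-Year`. Because the header has
// only a handful of possible values, an edge cache can store every
// variant, unlike `Vary: User-Agent` with its near-unbounded entropy.
```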

@mikesherov

mikesherov commented Mar 18, 2019

Using YYYY-MM-DD is less fingerprintable than the UA. In fact, it's a derived attribute of the UA... all you need is a server-side map of UA to release date.

Re: cache hit rate, YYYY-MM-DD would be equally cacheable to YYYY, depending on what you specify in the script tag. This is the first time you've mentioned Vary and a request header... which is an interesting thought, but my comments apply to this proposal about mapping a year to a URL, not varying server-side based on a header.

My point about YYYY-MM-DD was that you'd say <script srcset=">2019-01-01 and <2019-03-01 jan2019.js, >2019-03-01 bleedingEdge.js, <2019-01-01 old.js">

@jridgewell

Using YYYY-MM-DD is less fingerprintable than UA. In fact, it’s a derived attribute of UA... all you need is a server side map of UA to release date.

For now, but I imagine that might change. Safari originally intended to permanently freeze the UA string. They allowed it to change based on the OS's version in https://bugs.webkit.org/show_bug.cgi?id=182629#c6, mainly to allow this exact "ship newer JS" feature. If a less granular option to accomplish that were available, they might reconsider a permanent freeze.

This is the first time you mentioned Vary and a request header... which is an interesting thought

I originally mentioned it in #4432 (comment), on why server-side varying based on User-Agent won't work. I hadn't mentioned a browser-year header yet, but it was one of the things I considered when making that comment. It's just one of several browser-side implementations that would allow LCD builds (srcset, import maps, and now headers).

My point about YYYY-MM-DD was that you’d say <script srcset=">2019-01-01 and <2019-03-01 jan2019.js, >2019-03-01 bleedingEdge.js, <2019-01-01 old.js">

I think both of these approaches have merit. If we decided on YYYY-MM-DD, I'd be perfectly happy.

@gibson042
Contributor

Wouldn't this double/triple/quadruple the amount of JS downloaded? Taking FB as an example, its bundle is already 140kb. Even taking the smaller bundle sizes into account, I'd imagine we'd be downloading 400kb, 200-300kb of which would be inert. That seems bad, especially for users with low bandwidth.

You omitted the second part of that comment: "…which in the case of thoughtful authors using simple tests delivered inline and preceding other downloads would be practically immediately". User agents are in an ideal position to make the best tradeoff between delaying progress vs. downloading too much, but neither is necessary at all unless they guess wrong, and that will only happen when page content mucks with the environment or employs complex tests (both of which authors are disincentivized to do). What browser would download the wrong file from a block like this?

<scriptchoice>
    <script when="[].flat" type="module" src="2019.mjs"></script>
    <script when="Object.entries" type="module" src="2018.mjs"></script>
    <script type="module" src="2017.mjs"></script>
</scriptchoice>
<script nochoice src="legacy.js"></script>

We could even specify evaluation of each condition in its own realm, guaranteeing default primordials but at the expense of bindings available in the outer realm—which honestly might be worthwhile even if it would result in ugliness like

<!-- As of version 12, Safari supports ES2018 except regular expression lookbehind. -->
<script when="/(?<=a)a/" type="module" src="2018.mjs"></script>
<script when="/./s" type="module" src="2018-nolookbehind.mjs"></script>
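Since no browser implements <scriptchoice>, a userland approximation of the `when` semantics could look like the sketch below. It evaluates each condition in the page's own realm (so it lacks the fresh-primordials guarantee discussed above), and the candidate list mirrors the hypothetical markup:

```javascript
// Pick the first candidate whose `when` expression both parses and
// evaluates truthy; a candidate without `when` is the unconditional fallback.
function firstSupported(candidates) {
  for (const { when, src } of candidates) {
    if (when === undefined) return src;
    try {
      if ((0, eval)(when)) return src; // indirect eval: global scope
    } catch (e) {
      // SyntaxError (unsupported syntax) or runtime error: keep looking
    }
  }
  return null;
}

const chosen = firstSupported([
  { when: '[].flat', src: '2019.mjs' },
  { when: 'Object.entries', src: '2018.mjs' },
  { src: '2017.mjs' },
]);
// A page would then inject a <script type="module"> pointing at `chosen`.
```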

@StephenFluin

The Angular team is watching capabilities and discussions like this closely; we'd love to continue our work to ship only the JavaScript each user needs. If this becomes a standard, we'd love to implement it.

This is awesome!

@Jxck

Jxck commented Mar 30, 2019

Hi, this seems like versioning the Web, and browsers would strongly care about the version number.
(Of course TC39 currently versions the spec, but implementations don't care about the version number.)

  1. Missing features in an old spec
    A new version number is not a superset of an old version in implementations.
    In the future, a browser might not implement spec X from es2020 but implement everything in es2021.
    Does that browser support 2021, or does it never support >2019?
    (E.g., tail call optimization is implemented only by Safari, but other browsers have moved forward.)

  2. Versioning problem
    If this proposal lands in browsers, I wonder whether it will cause a versioning problem in TC39 because of implementers' concerns.
    The same problem happened with the W3C's versioning of HTML, and it's why the WHATWG began and maintains a living standard.

Personally, it seems better to use feature-based detection for each spec, not a version number.

@Jamesernator

Jamesernator commented Apr 18, 2019

I don't think the browser-bug issue matters hugely, given that the only way around it is to:

  1. Know about all the bugs and write feature tests for them
  2. Run those feature tests on all browsers you want

Unless you're going to ship the whole ecma262-tests/wpt-tests suites (don't!) for whatever features you use, I don't think feature testing will have much more value than just temporarily working around the bug.


With that in mind, I actually think a combination of extremely granular features and sending a request listing just the capabilities that are not supported might work.

As a concrete example, I include on the <script> tag a list of features I use that are not in the baseline, based on the syntax features used in my non-transpiled code (the browser might even be aware of correlations between these features and skip redundant ones):

<script type="module" features="asyncIteration regexpLookbehind generators asyncGenerators" src="script.mjs"></script>

Now, when the browser sees this script, it looks at the features it doesn't support and sends them as a list with the request. When the server receives this list, it can perform any logic it wants to determine the best thing to send back.

This would work relatively well in a world of constantly updating browsers, as the set of features between your best case and worst case is likely to be small and fluctuating rather than ever-growing. For really old browsers, you may just want to use <script nomodule> to avoid an arbitrarily growing feature difference between best and worst.
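The server half of this negotiation is left open above. One hypothetical shape it could take, with an invented header name and invented bundle names:

```javascript
// The browser would send the subset of the <script>'s declared features
// it does NOT support, e.g. `Missing-Features: regexpLookbehind`.
// Header name and bundle layout are illustrative only.
function pickVariant(missingHeader) {
  const missing = (missingHeader || '').split(/\s+/).filter(Boolean);
  if (missing.length === 0) return 'script.mjs'; // fully untranspiled
  if (missing.every((f) => f === 'regexpLookbehind')) {
    return 'script.no-lookbehind.mjs'; // one targeted fallback build
  }
  return 'script.transpiled.mjs'; // worst case: transpile everything
}
```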

@cameron-martin

@jridgewell

Eg, Google's server infra flat out ignores caching on any response with Vary: User-Agent.

In what situations do people rely on Google's caching?

Private caching could work for some, but I doubt the AMP SREs would allow me to sacrifice edge caching to get this feature.

I don't quite understand this, since AMP doesn't allow you to load arbitrary scripts.

@jridgewell

jridgewell commented Jun 20, 2019

In what situations do people rely on Google's caching?

Well, I work for Google. I'm interested in solutions that will work for everyone, including my employer.

But focusing specifically on my Google example misses the point. The UA header has extremely high entropy. If you Vary on it, you're effectively making the response un-cacheable. That's not Google-specific.

but I doubt the AMP SREs would allow me to sacrifice edge caching to get this feature.

since AMP doesn't allow you to load arbitrary scripts

Something has to serve https://cdn.ampproject.org/v0.js. That's Google's serving infrastructure, and when you try to load that, you're requesting from our edge cache.

@cameron-martin

@jridgewell gotcha. Your post makes a lot more sense now that I know you work for Google.
