Announcing TypeScript 4.0 Beta

Daniel Rosenwasser

Today we’re excited to release the beta of the next major milestone in the TypeScript programming language: TypeScript 4.0.

This beta takes us on our first step into TypeScript 4.0, and while it brings a new major version, don’t fret – there are no substantially larger breaking changes than usual. Our philosophy in evolving TypeScript has always been to provide an upgrade path that minimizes disruptive breaking changes while still giving ourselves some flexibility to flag suspicious code as errors when appropriate. For this reason, we’re continuing with a similar versioning model to that of past releases, so 4.0 is just the natural continuation from TypeScript 3.9.

To get started using the beta, you can get it through NuGet, or use npm with the following command:

npm install typescript@beta

You can also get editor support for the beta in Visual Studio and Visual Studio Code.

Now let’s take a look at what’s in store for TypeScript 4.0!

Variadic Tuple Types

Consider a function in JavaScript called concat, that takes two array or tuple types, and concatenates them together as a new array.

function concat(arr1, arr2) {
    return [...arr1, ...arr2];
}

Also consider tail, that takes an array or tuple, and returns all elements but the first.

function tail(arg) {
    const [_, ...result] = arg;
    return result
}

How would we type either of these in TypeScript?

For concat, the only valid thing we could do in older versions of the language was to try and write some overloads.

function concat(arr1: [], arr2: []): [];
function concat<A>(arr1: [A], arr2: []): [A];
function concat<A, B>(arr1: [A, B], arr2: []): [A, B];
function concat<A, B, C>(arr1: [A, B, C], arr2: []): [A, B, C];
function concat<A, B, C, D>(arr1: [A, B, C, D], arr2: []): [A, B, C, D];
function concat<A, B, C, D, E>(arr1: [A, B, C, D, E], arr2: []): [A, B, C, D, E];
function concat<A, B, C, D, E, F>(arr1: [A, B, C, D, E, F], arr2: []): [A, B, C, D, E, F];

Uh…okay, that’s…seven overloads for when the second array is always empty. Let’s add some for when arr2 has one argument.

function concat<A2>(arr1: [], arr2: [A2]): [A2];
function concat<A1, A2>(arr1: [A1], arr2: [A2]): [A1, A2];
function concat<A1, B1, A2>(arr1: [A1, B1], arr2: [A2]): [A1, B1, A2];
function concat<A1, B1, C1, A2>(arr1: [A1, B1, C1], arr2: [A2]): [A1, B1, C1, A2];
function concat<A1, B1, C1, D1, A2>(arr1: [A1, B1, C1, D1], arr2: [A2]): [A1, B1, C1, D1, A2];
function concat<A1, B1, C1, D1, E1, A2>(arr1: [A1, B1, C1, D1, E1], arr2: [A2]): [A1, B1, C1, D1, E1, A2];
function concat<A1, B1, C1, D1, E1, F1, A2>(arr1: [A1, B1, C1, D1, E1, F1], arr2: [A2]): [A1, B1, C1, D1, E1, F1, A2];

We hope it’s clear that this is getting unreasonable. Unfortunately you’d also end up with the same sorts of issues typing a function like tail.

This is another case of what we like to call “death by a thousand overloads”, and it doesn’t even solve the problem generally. It only gives correct types for as many overloads as we care to write. If we wanted to make a catch-all case, we’d need an overload like the following:

function concat<T, U>(arr1: T[], arr2: U[]): Array<T | U>;

But that signature doesn’t encode anything about the lengths of the input, or the order of the elements, when using tuples.

TypeScript 4.0 brings two fundamental changes, along with inference improvements, to make typing these possible.

The first change is that spreads in tuple type syntax can now be generic. This means that we can represent higher-order operations on tuples and arrays even when we don’t know the actual types we’re operating over. When generic spreads are instantiated (or, replaced with a real type) in these tuple types, they can produce other sets of array and tuple types.

For example, that means we can type a function like tail, without our “death by a thousand overloads” issue.

function tail<T extends any[]>(arr: readonly [any, ...T]) {
    const [_ignored, ...rest] = arr;
    return rest;
}

const myTuple = [1, 2, 3, 4] as const;
const myArray = ["hello", "world"];

// type [2, 3, 4]
const r1 = tail(myTuple);

// type [2, 3, 4, ...string[]]
const r2 = tail([...myTuple, ...myArray] as const);

The second change is that spread elements can occur anywhere in a tuple – not just at the end!

type Strings = [string, string];
type Numbers = [number, number];

// [string, string, number, number]
type StrStrNumNum = [...Strings, ...Numbers];

Previously, TypeScript would issue an error like the following.

A rest element must be last in a tuple type.

But now the language can flatten spreads at any position.

When we spread in a type without a known length, the resulting type becomes unbounded as well, and any elements that follow it factor into the resulting rest element type.

type Strings = [string, string];
type Numbers = number[];

// [string, string, ...Array<number | boolean>]
type Unbounded = [...Strings, ...Numbers, boolean];

By combining both of these behaviors together, we can write a single well-typed signature for concat:

type Arr = readonly any[];

function concat<T extends Arr, U extends Arr>(arr1: T, arr2: U): [...T, ...U] {
    return [...arr1, ...arr2];
}

While that signature is still a bit lengthy, it’s just one signature that only has to be written once, and it gives predictable behavior on all arrays and tuples.
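As a quick check of that claim, here’s a sketch of how the signature behaves on a few inputs (the variable names joined and mixed are our own):

```typescript
// Same signature as above, repeated here so the sketch is self-contained.
type Arr = readonly any[];

function concat<T extends Arr, U extends Arr>(arr1: T, arr2: U): [...T, ...U] {
    return [...arr1, ...arr2];
}

// With 'as const' tuples, the result type is the flattened tuple [1, 2, 3, 4].
const joined = concat([1, 2] as const, [3, 4] as const);

// With plain arrays, the result falls back to an unbounded array type.
const mixed = concat(["a", "b"], [true]);
```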

This functionality on its own is great, but there are other more sophisticated scenarios too. For example, consider a function to partially apply arguments called partialCall. partialCall takes a function along with the initial few arguments that that function expects. It then returns a new function that takes any remaining arguments the function needs, and calls the original function with both sets of arguments.

function partialCall(f, ...headArgs) {
    return (...tailArgs) => f(...headArgs, ...tailArgs)
}

TypeScript 4.0 improves the inference process for rest parameters and rest tuple elements so that we can type this and have it “just work”.

type Arr = readonly unknown[];

function partialCall<T extends Arr, U extends Arr, R>(f: (...args: [...T, ...U]) => R, ...headArgs: T) {
    return (...b: U) => f(...headArgs, ...b)
}

In this case, partialCall understands which parameters it can and can’t initially take, and returns functions that appropriately accept and reject anything left over.

const foo = (x: string, y: number, z: boolean) => {}

// This doesn't work because we're feeding in the wrong type for 'x'.
const f1 = partialCall(foo, 100);
//                          ~~~
// error! Argument of type 'number' is not assignable to parameter of type 'string'.


// This doesn't work because we're passing in too many arguments.
const f2 = partialCall(foo, "hello", 100, true, "oops")
//                                              ~~~~~~
// error! Expected 4 arguments, but got 5.


// This works! It has the type '(y: number, z: boolean) => void'
const f3 = partialCall(foo, "hello");

// What can we do with f3 now?

f3(123, true); // works!

f3();
// error! Expected 2 arguments, but got 0.

f3(123, "hello");
//      ~~~~~~~
// error! Argument of type '"hello"' is not assignable to parameter of type 'boolean'.

Variadic tuple types enable a lot of new exciting patterns, especially around function composition. We expect we may be able to leverage it to do a better job type-checking JavaScript’s built-in bind method. A handful of other inference improvements and patterns also went into this, and if you’re interested in learning more, you can take a look at the pull request for variadic tuples.

Labeled Tuple Elements

Improving the experience around tuple types and parameter lists is important because it allows us to get strongly typed validation around common JavaScript idioms – really just slicing and dicing argument lists and passing them to other functions. The idea that we can use tuple types for rest parameters is one place where this is crucial.

For example, the following function that uses a tuple type as a rest parameter…

function foo(...args: [string, number]): void {
    // ...
}

…should appear no different from the following function…

function foo(arg0: string, arg1: number): void {
    // ...
}

…for any caller of foo.

foo("hello", 42); // works

foo("hello", 42, true); // error
foo("hello"); // error

There is one place where the differences begin to become observable though: readability. In the first example, we have no parameter names for the first and second elements. While these have no impact on type-checking, the lack of labels on tuple positions can make them harder to use – harder to communicate our intent.

That’s why in TypeScript 4.0, tuple types can now provide labels.

type Range = [start: number, end: number];

Further pushing the connection between parameter lists and tuple types, we’ve made the syntax for rest elements and optional elements mirror that of parameter lists.

type Foo = [first: number, second?: string, ...rest: any[]];
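As a sketch of what this buys you in practice (the clamp function here is our own invention), a labeled tuple used as a rest parameter surfaces meaningful parameter names in signature help instead of generated ones:

```typescript
type Range = [start: number, end: number];

// Because 'Range' is labeled, editors show 'start' and 'end' in signature
// help for the rest parameter, rather than generated names like 'args_0'.
function clamp(value: number, ...range: Range): number {
    const [start, end] = range;
    return Math.min(Math.max(value, start), end);
}
```

Calling clamp(15, 0, 10) yields 10, exactly as if start and end had been declared as ordinary parameters.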

When labeling a tuple element, all other elements in the tuple must also be labeled.

type Bar = [first: string, number];
//                         ~~~~~~
// error! Tuple members must all have names or all not have names.

It’s worth noting – labels don’t require us to name our variables differently when destructuring. They’re purely there for documentation and tooling.

function foo(x: [first: string, second: number]) {
    // ...

    // note: we didn't need to name these 'first' and 'second'
    let [a, b] = x;

    // ...
}

On the whole, labeled tuples are handy when taking advantage of patterns around tuples and argument lists, along with implementing overloads in a type-safe way. To learn more, check out the pull request for labeled tuple elements.

Class Property Inference from Constructors

TypeScript 4.0 can now use control flow analysis to determine the types of properties in classes when noImplicitAny is enabled.

class Square {
    // Previously: implicit any!
    // Now: inferred to `number`!
    area;
    sideLength;

    constructor(sideLength: number) {
        this.sideLength = sideLength;
        this.area = sideLength ** 2;
    }
}

In cases where not all paths of a constructor assign to an instance member, the property is considered to potentially be undefined.

class Square {
    sideLength;

    constructor(sideLength: number) {
        if (Math.random()) {
            this.sideLength = sideLength;
        }
    }

    get area() {
        return this.sideLength ** 2;
        //     ~~~~~~~~~~~~~~~
        // error! Object is possibly 'undefined'.
    }
}

In cases where you know better (e.g. you have an initialize method of some sort), you’ll need an explicit type annotation along with a definite assignment assertion (!) if you’re in strictPropertyInitialization.

class Square {
    // definite assignment assertion
    //        v
    sideLength!: number;
    //         ^^^^^^^^
    // type annotation

    constructor(sideLength: number) {
        this.initialize(sideLength)
    }

    initialize(sideLength: number) {
        this.sideLength = sideLength;
    }

    get area() {
        return this.sideLength ** 2;
    }
}

Short-Circuiting Assignment Operators

JavaScript, and a lot of other languages, support a set of operators called compound assignment operators. Compound assignment operators apply an operator to two arguments and then assign the result to the left side. You may have seen these before:

// Addition
// a = a + b
a += b;

// Subtraction
// a = a - b
a -= b;

// Multiplication
// a = a * b
a *= b;

// Division
// a = a / b
a /= b;

// Exponentiation
// a = a ** b
a **= b;

// Left Bit Shift
// a = a << b
a <<= b;

So many operators in JavaScript have a corresponding assignment operator! But there are three notable exceptions: logical and (&&), logical or (||), and nullish coalescing (??).

That’s why TypeScript 4.0 supports a promising proposal to add three new assignment operators: &&=, ||=, and ??=.

These operators are great for substituting any example where a user might write code like the following:

a = a && b;
a = a || b;
a = a ?? b;

There are even some patterns we’ve seen (or, uh, written ourselves) to lazily initialize values when they’ll be needed.

let values: string[];

// Before
(values ?? (values = [])).push("hello");

// After
(values ??= []).push("hello");

(look, we’re not proud of all the code we write…)

In the rare case that you use getters or setters with side effects, it’s worth noting that these operators only perform assignments if necessary. In that sense, the assignment is short-circuited, which is the only way they differ from other compound assignments.

a ||= b;

// actually equivalent to

a || (a = b);
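Here’s a small contrived sketch (our own example, not from the release notes) showing that short-circuiting in action with a setter that counts its calls:

```typescript
let sets = 0;

const box = {
    _value: "initial" as string | undefined,
    get value() { return this._value; },
    set value(v: string | undefined) {
        sets++;
        this._value = v;
    },
};

box.value ||= "fallback"; // "initial" is truthy, so the setter never runs
const afterTruthy = sets; // still 0 at this point

box.value = undefined;    // one explicit assignment through the setter
box.value ||= "fallback"; // undefined is falsy, so the setter runs this time
```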

We’d like to extend a big thanks to community member Wenlu Wang for this contribution!

For more details, you can take a look at the pull request here. You can also check out TC39’s proposal repository for this feature.

unknown on catch Clause Bindings

Since the beginning days of TypeScript, catch clause variables were always typed as any. This meant that TypeScript allowed you to do anything you wanted with them.

try {
    // ...
}
catch (x) {
    // x has type 'any' - have fun!
    console.log(x.message);
    console.log(x.toUpperCase());
    x++;
    x.yadda.yadda.yadda();
}

The above has some undesirable behavior if we’re trying to prevent more errors from accidentally happening in our error-handling code! Because these variables have the type any by default, they lack any type-safety which could prevent invalid operations.

That’s why TypeScript 4.0 now lets you specify the type of catch clause variables as unknown instead. unknown is safer than any because it reminds us that we need to perform some sorts of type-checks before operating on our values.

try {
    // ...
}
catch (e: unknown) {
    // error!
    // Property 'toUpperCase' does not exist on type 'unknown'.
    console.log(e.toUpperCase());

    if (typeof e === "string") {
        // works!
        // We've narrowed 'e' down to the type 'string'.
        console.log(e.toUpperCase());
    }
}
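Another common way to narrow (sketched here with our own helper function, describe) is an instanceof check against Error:

```typescript
// 'describe' is our own helper, not part of TypeScript itself.
function describe(e: unknown): string {
    if (e instanceof Error) {
        // narrowed to 'Error', so '.message' is safe to read
        return e.message;
    }
    if (typeof e === "string") {
        return e;
    }
    return "unknown error";
}

let caught = "";
try {
    throw new Error("boom");
}
catch (e: unknown) {
    caught = describe(e);
}
```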

While the types of catch variables won’t change by default, we might consider a new --strict mode flag in the future so that users can opt in to this behavior. In the meantime, it should be possible to write a lint rule to force catch variables to have an explicit annotation of either : any or : unknown.

For more details you can peek at the changes for this feature.

Custom JSX Factories

When using JSX, a fragment is a type of JSX element that allows us to return multiple child elements. When we first implemented fragments in TypeScript, we didn’t have a great idea about how other libraries would utilize them. Nowadays most other libraries that encourage using JSX and support fragments have a similar API shape.

In TypeScript 4.0, users can customize the fragment factory through the new jsxFragmentFactory option.

As an example, the following tsconfig.json file tells TypeScript to transform JSX in a way compatible with React, but switches each invocation to h instead of React.createElement, and uses Fragment instead of React.Fragment.

{
  "compilerOptions": {
    "target": "esnext",
    "module": "commonjs",
    "jsx": "react",
    "jsxFactory": "h",
    "jsxFragmentFactory": "Fragment"
  }
}

In cases where you need to have a different JSX factory on a per-file basis, you can take advantage of the new /** @jsxFrag */ pragma comment. For example, the following…

// Note: these pragma comments need to be written
// with a JSDoc-style multiline syntax to take effect.
/** @jsx h */
/** @jsxFrag Fragment */

import { h, Fragment } from "preact";

let stuff = <>
    <div>Hello</div>
</>;

…will get transformed to this output JavaScript…

// Note: these pragma comments need to be written
// with a JSDoc-style multiline syntax to take effect.
/** @jsx h */
/** @jsxFrag Fragment */
import { h, Fragment } from "preact";
let stuff = h(Fragment, null,
    h("div", null, "Hello"));

We’d like to extend a big thanks to community member Noj Vek for sending this pull request and patiently working with our team on it.

You can see the pull request for more details!

Speed Improvements in build mode with --noEmitOnError

Previously, compiling a program after a previous compile with errors under --incremental would be extremely slow when using the --noEmitOnError flag. This is because none of the information from the last compilation would be cached in a .tsbuildinfo file when --noEmitOnError was on.

TypeScript 4.0 changes this which gives a great speed boost in these scenarios, and in turn improves --build mode scenarios (which imply both --incremental and --noEmitOnError).

For details, read up more on the pull request.

--incremental with --noEmit

TypeScript 4.0 allows us to use the --noEmit flag while still leveraging --incremental compiles. This was previously not allowed, since --incremental needs to emit a .tsbuildinfo file; however, the use-case of enabling faster incremental builds is important enough to support for all users.
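For example, a tsconfig.json along these lines (a sketch, not a complete configuration) combines both flags for fast type-check-only runs:

```json
{
  "compilerOptions": {
    "noEmit": true,
    "incremental": true,
    // 'tsBuildInfoFile' controls where the incremental cache is written
    "tsBuildInfoFile": "./.tsbuildinfo"
  }
}
```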

For more details, you can see the implementing pull request.

Editor Improvements

The TypeScript compiler doesn’t only power the editing experience for TypeScript itself in most major editors – it also powers the JavaScript experience in the Visual Studio family of editors and more. For that reason, much of our work focuses on improving editor scenarios – the place you spend most of your time as a developer.

Using new TypeScript/JavaScript functionality in your editor will differ depending on your editor.

You can check out a partial list of editors that have support for TypeScript to learn more about whether your favorite editor has support to use new versions.

/** @deprecated */ Support

TypeScript’s editing support now recognizes when a declaration has been marked with a /** @deprecated */ JSDoc comment. That information is surfaced in completion lists and as a suggestion diagnostic that editors can handle specially. In an editor like VS Code, deprecated values are typically displayed in a strikethrough style like this.

Some examples of deprecated declarations with strikethrough text in the editor

This new functionality is available thanks to Wenlu Wang. See the pull request for more details.

Partial Editing Mode at Startup

One specific piece of feedback we’ve heard from users has been slow startup times, especially on bigger projects. Specifically, the culprit is usually a process called project loading, which is roughly the same as the program construction step of our compiler. This is the process of starting with an initial set of files, parsing them, resolving their dependencies, parsing those dependencies, resolving those dependencies’ dependencies, and so on. It ends up taking quite a bit of time. The bigger your project is, the worse the startup delays you might experience before you can get basic editor operations like go-to-definition, code completions, and quick info.

That’s why we’ve been working on a new mode for editors to provide a partial experience until the full language service experience has loaded up. The core idea is that editors can run a lightweight partial server that only has a single-file view of the world. This has always been an option for editors, but TypeScript 4.0 expands the functionality to semantic operations (as opposed to just syntactic operations). While that means the server has limited information (so not every operation will be totally complete) – this is often good enough for some basic code completion, quick info, signature help, and go-to-definition when you first open up your editor.

It’s hard to pin down precisely what sorts of improvements you’ll see, since they depend on hardware, operating system, and project size; but today we’ve seen machines take anywhere between 20 seconds and a minute until TypeScript is responsive on a file in the Visual Studio Code codebase. In contrast, this new mode seems to bring the time until TypeScript is interactive on that codebase down to anywhere between 2 and 5 seconds.

Currently the only editor that supports this mode is Visual Studio Code Insiders, and you can try it out by following these steps.

  1. Installing Visual Studio Code Insiders
  2. Configuring Visual Studio Code Insiders to use the beta, or installing the JavaScript and TypeScript Nightly Extension for Visual Studio Code Insiders
  3. Opening your JSON settings view: > Preferences: Open Settings (JSON)
  4. Adding the following lines:
    // The editor will say 'dynamic' is an unknown option,
    // but don't worry about it for now. It's still experimental.
    "typescript.tsserver.useSeparateSyntaxServer": "dynamic",
    

There’s still room for improvement in UX and functionality – both from the editor side and the language support side. For example, while partial editing support is already loaded and working, you’ll still see Initializing JS/TS language features in your status bar. You can ignore that since operations will still be powered by that partial mode. We also have a list of improvements in the works, and we’re looking for more feedback on what you think might be useful.

For more information, you can see the original proposal, the implementing pull request, along with the follow-up meta issue.

Smarter Auto-Imports

Auto-import is a fantastic feature that makes coding a lot easier; however, every time auto-import doesn’t seem to work, it throws us off a lot and can ruin our productivity. One specific issue that we heard from users was that auto-imports wouldn’t work on packages that were written in TypeScript – that is, until they wrote at least one explicit import somewhere else in their project.

Now, that sounds pretty weird and oddly specific. Why would auto-imports work for @types packages, but not for packages that ship their own types? It turns out that auto-imports are powered by checking which packages your project already includes. TypeScript has a quirk to make some scenarios work better by automatically including all packages in node_modules/@types, but not other packages – the rationale being that crawling through all your node_modules packages might be expensive.

All of this leads to a pretty lousy getting started experience for when you’re trying to auto-import something that you’ve just installed but haven’t used yet.

TypeScript 4.0 now does a little extra work in editor scenarios to include any packages you’ve listed in your package.json’s dependencies field. The information from these packages is only used to improve auto-imports, and doesn’t change anything else like type-checking. This helps alleviate the cost of walking through your node_modules directories while still fixing one of the most common problems we’ve heard for new projects.

For more details, you can see the proposal issue along with the implementing pull request.

Breaking Changes

lib.d.ts Changes

Our lib.d.ts declarations have changed – most specifically, types for the DOM have changed. The most notable change may be the removal of document.origin, which only worked in old versions of IE and Safari. MDN recommends moving to self.origin.

Properties Overriding Accessors (and vice versa) is an Error

Previously, it was only an error for properties to override accessors, or accessors to override properties, when using useDefineForClassFields; however, TypeScript now always issues an error when declaring a property in a derived class that would override a getter or setter in the base class.

class Base {
    get foo() {
        return 100;
    }
    set foo(value: number) {
        // ...
    }
}

class Derived extends Base {
    foo = 10;
//  ~~~
// error!
// 'foo' is defined as an accessor in class 'Base',
// but is overridden here in 'Derived' as an instance property.
}

class Base {
    prop = 10;
}

class Derived extends Base {
    get prop() {
    //  ~~~~
    // error!
    // 'prop' is defined as a property in class 'Base', but is overridden here in 'Derived' as an accessor.
        return 100;
    }
}

See more details on the implementing pull request.

Operands for delete must be optional

When using the delete operator in strictNullChecks, the operand must now be any, unknown, never, or be optional (in that it contains undefined in the type). Otherwise, use of the delete operator is an error.

interface Thing {
    prop: string;
}

function f(x: Thing) {
    delete x.prop;
    //     ~~~~~~
    // error! The operand of a 'delete' operator must be optional.
}
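Conversely, declaring the property optional makes the delete legal (a sketch using our own clear helper):

```typescript
interface Thing {
    prop?: string;  // optional, so 'undefined' is part of the type
}

// Our own helper to demonstrate the rule.
function clear(x: Thing) {
    delete x.prop;  // OK: the operand is optional
}

const thing: Thing = { prop: "hello" };
clear(thing);
```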

See more details on the implementing pull request.

Usage of TypeScript’s Node Factory is Deprecated

Today TypeScript provides a set of “factory” functions for producing AST Nodes; however, TypeScript 4.0 provides a new node factory API. As a result, for TypeScript 4.0 we’ve made the decision to deprecate these older functions in favor of the new ones.

For more details, read up on the relevant pull request for this change.

What’s Next?

As with all of our beta releases, we’re looking for users to try things out, upgrade a few projects, stress test the release, and give us your feedback! We want to make sure TypeScript 4.0 really hits the mark and makes migration easy, so try it out and give us your feedback!

Happy Hacking!

– Daniel Rosenwasser and the TypeScript Team

9 comments


  • Olzhas Alexandrov

    Not conforming to semantic versioning, which is a common standard in JS community is disappointing and leaves doubts about Microsoft products overall as you allow yourself introducing unnecessary complexity and misleading users.

    • Chayim Refael Friedman

      I’ve never understood the concept of semantic versioning with programming language, since there aren’t bug fixes. But maybe it would make sense to not use a dot at all.

      • Simon Weaver

        Let’s just go straight to Typescript X!

    • Max Davidov

      Semantic versioning for a language doesn’t make much sense. Every change is breaking someone’s code. But I agree that the current versioning scheme creates a false impression that TypeScript is following Semver.

    • Laurențiu T.

      Semantic version has a big flaw, and that flaw is it’s completely ignorant of user perception as well as usability concerns, such as easy to remember critical version names or simply very different and important capabilities of an otherwise fully backwards compatible version.

      The more backwards compatible you are, the more semver shoots you in the foot. There is effectively no mechanism for the project team to say “THIS particular version is a BIG milestone in our development and will totally change the way you work, even though we didn’t break anything that currently worked and you can still use the old (now very bad practice) way of doing things”. As far as semantic version is concerned you went from 3.9.x to 3.10.x and chances are all the breakthroughs you’ve made with maybe a year (or more) of effort put into it is, as far as the regular user is concerned, is non-existent.

      So until semantic versioning fixes itself (which in and of itself may require breaking the entire standard), the rule of thumb everyone uses is “forcefully bump the major version” and/OR give it a name too (typically only when at least several years have passed between said versions, not every Wednesday). For example the Ember community decided to give version 3.15 the name “Octane” for the same reasons. Having a ‘pointy’ name also helps people talk about it, it’s much easier to say typescript 4 rather than typescript 3.11.56 for example. Are any of these ideal? No. Do they solve the problem? Yes.

      Other examples I know of would be the linux kernel, just so people would not perceive the current state as being “the same as it was 20 years ago.”

      This is not the first or last time a standard underestimated the human factor. The world’s medical standard for names has had to literally write it down NOT to give viruses localities as names, just to avoid the perception of said region/product/culture/people becoming forever tainted/associated with said name (eg. Corona beer; hence why it’s called “Covid” not “corona virus”)

  • Tobias Lundin

    As usual a fantastic writeup of all the goodness to come, can’t wait!

    However, it took me a while to wrap my head around the example of how “we can type function like tail, without our “death by a thousand overloads” issue.” until I realized there is a mistake there and the resulting type should be:

    // type [2, 3, 4, ...string[]]
    const r2 = tail([...myTuple, ...myArray] as const);
    
  • MgSam

    It would be helpful when you guys introduce features that are used specifically by a few libraries, that you explicitly mention which libraries those are / what other use cases they might have.

    For example, I don’t follow why you would want to make your parameter list a spread on a tuple type rather than explicitly listing out the members. What is the benefit there? What libraries actually do this?

    More generally, what is the benefit to using “tuple types” over “real” types in JS/TS? Unlike in many other languages, it is trivial to create a new composite type in JS/TS, so I don’t really understand why people would choose to use this pattern.

    • Warren R

      In TypeScript, Tuples model the use of arrays to capture a set of values. When serializing to JSON for transmission across the wire, it’s significantly more space-efficient than an object.

      Consider an API call that returns a list of 5,000 X/Y coordinates. You could write it as

      [5,-5],[9,3],[0,2], ...

      or you could write it as:

      {x:5,y:-5},{x:9,y:3},{x:0,y:2}, ...

      Neither you as a programmer nor the user consuming this API benefits from repeating identical property names thousands of times, so array serialization is preferable. This new TS 4.0 feature of applying labels to tuple elements solves the biggest ergonomic problem of Tuples, while also avoiding the need to have those names used at runtime.
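      As a quick sketch of that size difference:

```typescript
type Point = [x: number, y: number];

const tuples: Point[] = [[5, -5], [9, 3], [0, 2]];
const objects = tuples.map(([x, y]) => ({ x, y }));

const tupleBytes = JSON.stringify(tuples).length;
const objectBytes = JSON.stringify(objects).length;
// the tuple form is smaller, and the gap grows with the list length
```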

      • Kieran osgood 0

        Just logged in to say thanks, this reply was helpful for me 🙂
