Example of Node-Influx

Express Response Times Example

In this example we’ll create a server with an index page that prints out ‘hello world’, and a page, http://localhost:3000/times, which prints out the last ten response times stored in InfluxDB.

The end result should look something like this:

curl -s localhost:3000
Hello world

curl -s localhost:3000/times | jq
[
  {
    "time": "2016-10-09T19:13:26.815Z",
    "duration": 205,
    "host": "ares.peet.io",
    "path": "/"
  }
]

Get started by installing and importing everything we need. This example requires Node 6.

npm install influx express

Now create a new file app.js and start writing

const Influx = require('influx')
const express = require('express')
const http = require('http')
const os = require('os')

const app = express()

Create a new influx client. We tell it to use the express_response_db database by default, and give it some information about the schema we’re writing. It can use this to be smarter about what data formats it writes and do some basic validation for us.

const influx = new Influx.InfluxDB({
  host: 'localhost',
  database: 'express_response_db',
  schema: [
    {
      measurement: 'response_times',
      fields: {
        path: Influx.FieldType.STRING,
        duration: Influx.FieldType.INTEGER
      },
      tags: [
        'host'
      ]
    }
  ]
})

Now we have a working influx client!

We’ll make sure the database exists and boot the app.

influx.getDatabaseNames()
  .then(names => {
    if (!names.includes('express_response_db')) {
      return influx.createDatabase('express_response_db')
    }
  })
  .then(() => {
    http.createServer(app).listen(3000, () => {
      console.log('Listening on port 3000')
    })
  })
  .catch(err => {
    console.error(`Error creating Influx database: ${err.stack}`)
  })

Finally we’ll define the middleware and routes we’ll use. We have a generic middleware that records the time between when a request comes in and when we respond to it, and a route, /times, which prints out the last ten timings we recorded.

app.use((req, res, next) => {
  const start = Date.now()

  res.on('finish', () => {
    const duration = Date.now() - start
    console.log(`Request to ${req.path} took ${duration} ms`)

    influx.writePoints([
      {
        measurement: 'response_times',
        tags: { host: os.hostname() },
        fields: { duration, path: req.path }
      }
    ]).catch(err => {
      console.log(`Error saving data to InfluxDB: ${err.stack}`)
    })
  })

  return next()
})

app.get('/', (req, res) => {
  setTimeout(() => res.end('Hello world'), Math.random() * 500)
})

app.get('/times', (req, res) => {
  influx.query(`
    select * from response_times
    where host = ${Influx.escape.stringLit(os.hostname())}
    order by time desc
    limit 10
  `).then(result => {
    res.json(result)
  }).catch(err => {
    res.status(500).send(err.stack)
  })
})

Snippets of Koa Middlewares

koa-compress

Compress middleware for Koa

Example

const compress = require('koa-compress')
const Koa = require('koa')

const app = new Koa()

app.use(compress({
  filter: (content_type) => /text/i.test(content_type),
  threshold: 2048,
  flush: require('zlib').Z_SYNC_FLUSH,
}))

Options

The options are passed to zlib

filter: An optional function that checks the response content type to decide whether to compress. By default, it uses compressible

threshold: Minimum response size in bytes to compress. Default is 1024 bytes (1 kB).

koa-morgan

HTTP Request logger middleware for node.js

const morgan = require('koa-morgan')

morgan(format, options)

Create a new morgan logger middleware function using the given format and options.

The format argument may be a string naming a predefined format, a format string, or a function that will produce a log entry.

The format function will be called with three arguments tokens, req and res, where tokens is an object with all defined tokens, req is the HTTP request and res is the HTTP response. The function is expected to return a string that will be the log line, or undefined/null to skip logging.
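For instance, a minimal sketch of a custom format function (roughly equivalent to the built-in tiny format, using morgan’s standard token API):

app.use(morgan(function (tokens, req, res) {
  // build the log line from individual tokens
  return [
    tokens.method(req, res),
    tokens.url(req, res),
    tokens.status(req, res),
    tokens.res(req, res, 'content-length'), '-',
    tokens['response-time'](req, res), 'ms'
  ].join(' ')
}))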

predefined format string

combined: Standard Apache combined log output

:remote-addr - :remote-user [:date[clf]] ":method :url HTTP/:http-version" :status :res[content-length] ":referrer" ":user-agent"

common: Standard Apache common log output

dev

short

tiny

:method :url :status :res[content-length] - :response-time ms

write logs to a file

Single file

Simple app that will log all requests in the Apache combined format to the file access.log

const fs = require('fs')
const Koa = require('koa')
const morgan = require('koa-morgan')

const accessLogStream = fs.createWriteStream(__dirname + '/access.log', { flags: 'a' })

const app = new Koa()

// setup the logger
app.use(morgan('combined', { stream: accessLogStream }))

app.use((ctx) => {
  ctx.body = 'hello world'
})

app.listen(3000)

koa-session

Simple session middleware for Koa. It defaults to cookie-based sessions and supports external stores.

Requires Node 7.6 or greater for async/await support.

const session = require('koa-session')
const Koa = require('koa')
const app = new Koa()

app.keys = ['some secret hurr']

const CONFIG = {
  key: 'koa:sess', // (string) cookie key, default is koa:sess
  maxAge: 86400000, // (number) in ms, default is 1 day
  overwrite: true, // (boolean) can overwrite or not, default true
  httpOnly: true, // (boolean) httpOnly or not, default true
  signed: true, // (boolean) signed or not, default true
  rolling: false, // (boolean) force a session identifier cookie to be set on every response; the expiration is reset to the original maxAge
}

app.use(session(CONFIG, app)) // or, if you prefer the default config, app.use(session(app))

app.use(ctx => {
  if (ctx.path === '/favicon.ico') return
  let n = ctx.session.views || 0
  ctx.session.views = ++n
  ctx.body = n + ' views'
})

app.listen(3000)
console.log('listening on port 3000')

Options

The cookie name is controlled by the key option, which defaults to ‘koa:sess’. All other options are passed to ctx.cookies.get() and ctx.cookies.set(), allowing you to control security, domain, path and signing among other settings.
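As a sketch, a config that forwards a few common cookie options through koa-session (the domain value here is just a placeholder):

app.use(session({
  key: 'koa:sess',
  maxAge: 86400000,
  // the options below are simply forwarded to ctx.cookies.set()
  path: '/',
  domain: 'example.com',
  secure: true,
}, app))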

Git Merge and Rebase

Conceptual Overview

The first thing to understand about git rebase is that it solves the same problem as git merge. Both of these commands are designed to integrate changes from one branch into another branch – they just do it in very different ways.

Consider what happens when you start working on a new feature in a dedicated branch, then another team member updates the master branch with new commits. This results in a forked history, which should be familiar to anyone who has used Git as a collaboration tool.

Now, let’s say that the new commits in master are relevant to the feature that you’re working on. To incorporate the new commits into your feature branch, you have two options: merging or rebasing.

The Merge Option

The easiest option is to merge the master branch into the feature branch using something like the following:

git checkout feature
git merge master

Or you can condense this to a one-liner:

git merge master feature

This creates a new merge commit in the feature branch that ties together the histories of both branches, giving you a branch structure that looks like this:

Merging is nice because it’s a non-destructive operation. The existing branches are not changed in any way. This avoids all of the potential pitfalls of rebasing.

On the other hand, this also means that the feature branch will have an extraneous merge commit every time you need to incorporate upstream changes. If master is very active, this can pollute your feature branch history quite a bit. While it’s possible to mitigate this issue with advanced git log options, it can make it hard for other developers to understand the history of the project.

The Rebase Option

As an alternative to merging, you can rebase the feature branch onto the master branch using the following commands:

git checkout feature
git rebase master

This moves the entire feature branch to begin on the tip of the master branch, effectively incorporating all of the new commits in master. But, instead of using a merge commit, rebasing re-writes the project history by creating brand new commits for each commit in the original branch.

The major benefit of rebasing is that you get a much cleaner project history. First, it eliminates the unnecessary merge commits required by git merge. Second, as you can see in the above diagram, rebasing also results in a perfectly linear project history – you can follow the top of feature all the way to the beginning of the project without any forks. This makes it easier to navigate your project with commands like git log.

Interactive Rebasing

Interactive rebasing gives you the opportunity to alter commits as they are moved to the new branch. This is even more powerful than an automated rebase, since it offers complete control over the branch’s commit history. Typically, this is used to clean up a messy history before merging a feature branch into master.

To begin an interactive rebasing session, pass the -i option to the git rebase command.

git checkout feature
git rebase -i master

This will open a text editor listing all of the commits that are about to be moved:

pick 33d5b7a Message for commit #1
pick 9480b3d Message for commit #2
pick 5c67e61 Message for commit #3

This listing defines exactly what the branch will look like after the rebase is performed. By changing the pick command and/or reordering the entries, you can make the branch’s history look like whatever you want.

For example, if the 2nd commit fixes a small problem in the 1st commit, you can condense them into a single commit with the fixup command:

pick 33d5b7a Message for commit #1
fixup 9480b3d Message for commit #2
pick 5c67e61 Message for commit #3

When you save and close the file, Git will perform the rebase according to your instructions, resulting in project history that looks like the following:

Eliminating insignificant commits like this makes your feature’s history much easier to understand. This is something that git merge simply cannot do.

The Golden Rule of Rebasing

The Golden Rule of git rebase is to never use it on public branches.

For example, think about what would happen if you rebased master onto your feature branch.

The rebase moves all of the commits in master onto the tip of feature. The problem is that this only happened in your repository. All of the other developers are still working with the original master. Since rebasing results in brand new commits, Git will think that your master branch’s history has diverged from everybody else’s.

The only way to synchronize the two master branches is to merge them back together, resulting in an extra merge commit and two sets of commits that contain the same changes.

Type Compatibility in TypeScript

Type compatibility in TypeScript is based on structural subtyping. Structural typing is a way of relating types based solely on their members. This is in contrast with nominal typing. Consider the following code:

interface Named {
  name: string
}

class Person {
  name: string
}

let p: Named
// OK, because of structural typing
p = new Person()

In a nominally-typed language like C# or Java, the equivalent code would be an error because the Person class does not explicitly describe itself as being an implementor of the Named interface.

TypeScript’s structural type system was designed based on how JavaScript code is typically written. Because JavaScript widely uses anonymous objects like function expressions and object literals, it’s much more natural to represent the kinds of relationships found in JavaScript libraries with a structural type system instead of a nominal one.

The basic rule for TypeScript’s structural type system is that x is compatible with y if y has at least the same members as x.

interface Named {
  name: string
}

let x: Named
// y's inferred type is { name: string, location: string }
let y = { name: 'Alice', location: 'Seattle' }
x = y

To check whether y can be assigned to x, the compiler checks each property of x to find a corresponding compatible property in y. In this case, y must have a member called name that is a string, so the assignment is allowed.

The same rule for assignment is used when checking function call arguments

In other words, type compatibility is checked the same way for variable assignment and for function arguments.
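As a small sketch of the function-argument case, reusing the Named interface and y from above (the greet function is just for illustration):

function greet(n: Named) {
  console.log('Hello, ' + n.name)
}

// y has at least the members Named requires, so the call is allowed
greet(y)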

Comparing Two Functions

let x = (a: number) => 0
let y = (b: number, s: string) => 0

y = x // OK
x = y // Error

To check if x is assignable to y, we first look at the parameter list. Each parameter in x must have a corresponding parameter in y with a compatible type.

So x is assignable to y, but y is not assignable to x.

Now let’s look at how return types are treated, using two functions that differ only by their return type:

let x = () => ({ name: 'Alice' })
let y = () => ({ name: 'Alice', location: 'Seattle' })

x = y // OK
y = x // Error

The type system enforces that the source function’s return type be a subtype of the target type’s return type.

Manual of Ownership in Rust

Ownership is Rust’s most unique feature, and it enables Rust to make memory safety guarantees without needing a garbage collector. Therefore, it’s important to understand how ownership works in Rust. In this chapter we’ll talk about ownership as well as several related features: borrowing, slices, and how Rust lays data out in memory.

What is Ownership

Rust’s central feature is ownership. Although the feature is straightforward to explain, it has deep implications for the rest of the language.

All programs have to manage the way they use a computer’s memory while running. Some languages have garbage collection that constantly looks for no-longer-used memory as the program runs. In other languages, the programmer must explicitly allocate and free the memory. Rust uses a third approach: memory is managed through a system of ownership with a set of rules that the compiler checks at compile time. No run-time costs are incurred for any of the ownership features.

Because ownership is a new concept for many programmers, it does take some time to get used to. The good news is that the more experienced you become with Rust and the rules of the ownership system, the more you’ll be able to naturally develop code that is safe and efficient.

Ownership Rules

First, let’s take a look at the ownership rules.

  1. Each value in Rust has a variable that’s called its owner.
  2. There can only be one owner at a time.
  3. When the owner goes out of scope, the value will be dropped.

Variable Scope

A scope is the range within a program for which an item is valid.

fn main() {
    let s = "hello";
}

The variable s refers to a string literal, where the value of the string is hardcoded into the text of our program. The variable is valid from the point at which it’s declared until the end of the current scope.

{                       // s is not valid here
    let s = "hello";    // s is valid from this point forward
}                       // the scope is over, and s is no longer valid
  • When s comes into scope it is valid

  • It remains so until it goes out of scope

At this point, the relationship between scopes and when variables are valid is similar to other programming languages. Now we’ll build on top of this understanding by introducing the String type.

The String Type

To illustrate the rules of ownership, we need a data type that is complex.

We’ll use String as the example here and concentrate on the parts of String that relate to ownership. These aspects also apply to other complex data types provided by the standard library and that you create.

We’ve already seen string literals, where a string value is hardcoded into our program. String literals are convenient, but they aren’t always suitable for every situation in which you want to use text. One reason is that they’re immutable. Another is that not every string value can be known when we write our code. For these situations, Rust has a second string type, String. This type is allocated on the heap and as such is able to store an amount of text that is unknown to us at compile time. You can create a String from a string literal using the from function, like so:

let s = String::from("hello");

The double colon (::) is an operator that allows us to namespace this particular from function under the String type rather than using some sort of name like string_from.

This kind of string can be mutated:

let mut s = String::from("Hello");
s.push_str(", world!");
println!("{}", s);

So why can String be mutated but literals cannot?

Memory and Allocation

In the case of a string literal, we know the contents at compile time, so the text is hardcoded directly into the final executable, making string literals fast and efficient. But these properties only come from its immutability. Unfortunately, we can’t put a blob of memory into the binary for each piece of text whose size is unknown at compile time and whose size might change while running the program.

With the String type, in order to support a mutable, growable piece of text, we need to allocate an amount of memory on the heap, unknown at compile time, to hold the contents. This means:

  • The memory must be requested from the operating system at runtime.

  • We need a way of returning this memory to the operating system when we’re done with our String

That first part is done by us: when we call String::from, its implementation requests the memory it needs. This is pretty much universal in programming languages.

However, the second part is different. In languages with a garbage collector (GC), the GC keeps track of and cleans up memory that isn’t being used anymore, and we, as the programmer, don’t need to think about it. Without a GC, it’s the programmer’s responsibility to identify when memory is no longer being used and call code to explicitly return it, just as we did to request it. Doing this correctly has historically been a difficult programming problem. If we forget, we’ll waste memory. If we do it too early, we’ll have an invalid variable. If we do it twice, that’s a bug too. We need to pair exactly one allocate with exactly one free.

Rust takes a different path: the memory is automatically returned once the variable that owns it goes out of scope.

{
    let s = String::from("Hello"); // s is valid from this point forward

} // the scope is over, and s is no longer valid

There is a natural point at which we can return the memory our String needs to the operating system: when s goes out of scope. When a variable goes out of scope, Rust calls a special function for us. This function is called drop, and it’s where the author of String can put the code to return the memory. Rust calls drop automatically at the closing }.

Note: In C++, this pattern of deallocating resources at the end of an item’s lifetime is sometimes called Resource Acquisition Is Initialization (RAII). The drop function in Rust will be familiar to you if you’ve used RAII patterns.

This pattern has a profound impact on the way Rust code is written. It may seem simple right now, but the behavior of code can be unexpected in more complicated situations when we want to have multiple variables use the data we’ve allocated on the heap.

Ways Variables and Data Interact: Move

Multiple variables can interact with the same data in different ways in Rust. Let’s look at an example using an integer.

let x = 5;
let y = x;

Here we get two independent variables, x and y. This is because integers are simple values with a known, fixed size, and the two 5 values are pushed onto the stack.

Now let’s look at the String version.

let s1 = String::from("Hello");
let s2 = s1;

This looks very similar to the previous code, so we might assume that the way it works would be the same: that is, the second line would make a copy of the value in s1 and bind it to s2. But this isn’t quite what happens.

To explain this more thoroughly, let’s look at what String looks like under the covers in the Figure.

A String is made up of three parts, shown on the left: a pointer to the memory that holds the contents of the string, a length, and a capacity. This group of data is stored on the stack. On the right is the memory on the heap that holds the contents.

The length is how much memory, in bytes, the contents of the String are currently using. The capacity is the total amount of memory, in bytes, that the String has received from the operating system. The difference between length and capacity matters, but not in this context, so for now it’s fine to ignore the capacity.

When we assign s1 to s2, the String data is copied, meaning we copy the pointer, the length and the capacity that are on the stack. We do not copy the data on the heap that the pointer refers to. In other words, the data representation in memory looks like the figure following.

Earlier, we said that when a variable goes out of scope, Rust automatically calls the drop function and cleans up the heap memory for the variable. But now both data pointers point to the same location. This is a problem: when s2 and s1 go out of scope, they will both try to free the same memory. This is known as a double free error. Freeing memory twice can lead to memory corruption, which can potentially lead to security vulnerabilities.

To ensure memory safety, there’s one more detail to what happens in this situation in Rust. Instead of trying to copy the allocated memory, Rust considers s1 to no longer be valid and therefore, Rust doesn’t need to free anything when s1 goes out of scope. Check out what happens when you try to use s1 after s2 is created.

let s1 = String::from("hello");
let s2 = s1;

println!("{}", s1);

You’ll get an error like this because Rust prevents you from using the invalidated reference:

error[E0382]: use of moved value: `s1`
  --> src/main.rs:4:27
  |
3 |     let s2 = s1;
  |         -- value moved here
4 |     println!("{}", s1);
  |                    ^^ value used here after move
  |
  = note: move occurs because `s1` has type `std::string::String`,
          which does not implement the `Copy` trait

This operation in Rust is similar to a shallow copy: it copies the pointer, length, and capacity on the stack, but Rust also invalidates the first variable.

We call this a move: s1 was moved into s2, so s1 has been invalidated.

That solves our problem: with only s2 valid, when it goes out of scope it alone will free the memory, and we’re done.

In addition, there’s a design choice that’s implied by this: Rust will never automatically create ‘deep’ copies of your data. Therefore, any automatic copying can be assumed to be inexpensive in terms of runtime performance.

Ways Variables and Data Interact: Clone

If we do want to deeply copy the heap data of the String, not just the stack data, we can use a common method called clone.

let s1 = String::from("Hello");
let s2 = s1.clone();
println!("s1 = {}, s2 = {}", s1, s2);

This works just fine, the heap data does get copied.

When you see a call to clone, you know that some arbitrary code is being executed and that code may be expensive. It’s a visual indicator that something different is going on.

Stack-Only Data: Copy

There’s another wrinkle we haven’t talked about yet. Consider this code using integers:

let x = 5;
let y = x;

println!("x = {}, y = {}", x, y);

This code runs correctly because types like integers that have a known size at compile time are stored entirely on the stack, so copies of the actual values are quick to make. That means there’s no reason we would want to prevent x from being valid after we created the variable y. In other words, there’s no difference between deep and shallow copying here, so calling clone wouldn’t do anything differently from the usual shallow copying and we can leave it out.

Rust has a special annotation called the Copy trait that we can place on types like integers that are stored on the stack.

If a type has the Copy trait, an older variable is still usable after the assignment. Rust won’t let us annotate a type with the Copy trait if the type, or any of its parts, has implemented the Drop trait. If the type needs something special to happen when the value goes out of scope and we add the Copy annotation to that type, we’ll get a compile time error.

As a general rule, any group of simple scalar values can be Copy, and nothing that requires allocation or is some form of resource is Copy. Here are some of the types that are Copy:

  • All the integer types

  • The boolean type

  • All the floating point types

  • Tuples, but only if they contain types that are also Copy.

Ownership and Functions

The semantics for passing a value to a function are similar to assigning a value to a variable. Passing a variable to a function will move or copy, just like assignment.

fn main() {
    let s = String::from("Hello"); // s comes into scope

    take_ownership(s); // s's value moves into the function
                       // and s is no longer valid here

    let x = 5; // x comes into scope

    make_copy(x); // x would move into the function, but i32 is Copy,
                  // so it's okay to still use x afterward
}

fn take_ownership(some_string: String) { // some_string comes into scope
    println!("{}", some_string);
} // here, some_string goes out of scope and `drop` is called; the backing memory is freed

fn make_copy(some_integer: i32) { // some_integer comes into scope
    println!("{}", some_integer);
} // here, some_integer goes out of scope

If we tried to use s after the call to take_ownership, Rust would throw a compile time error. These static checks protect us from mistakes.

Return Values and Scope

Returning values can also transfer ownership.

fn main() {
    let s1 = give_ownership(); // give_ownership moves its return value into s1

    let s2 = String::from("Hello"); // s2 comes into scope

    let s2 = take_and_give_back(s2); // s2 is moved into take_and_give_back,
                                     // which also moves its return value back into s2
}

fn give_ownership() -> String {
    let some_string = String::from("Hello"); // some_string comes into scope
    some_string // some_string is returned and moves out to the calling function
}

fn take_and_give_back(a_string: String) -> String { // a_string comes into scope
    a_string // a_string is returned and moves out to the calling function
}

It’s possible to return multiple values using a tuple.

References and Borrowing

Here is how you would define and use a calculate_length function that has a reference to an object as a parameter instead of taking ownership of the value.

fn main() {
    let s1 = String::from("Hello");

    let len = calculate_length(&s1);

    println!("The length of '{}' is {}.", s1, len);
}

fn calculate_length(s: &String) -> usize {
    s.len()
}

First notice that all the tuple code in the variable declaration and the function return value is gone. Second, note that we pass &s1 into calculate_length, and in its definition, we take &String rather than String.

These ampersands (&) are references, and they allow you to refer to some value without taking ownership of it.

In this figure, &String s is pointing to String s1.

Let’s take a closer look at the function call here:

let s1 = String::from("Hello");
let len = calculate_length(&s1);

The &s1 syntax lets us create a reference that refers to the value of s1 but does not own it.

Because it does not own it, the value it points to will not be dropped when the reference goes out of scope.

Likewise, the signature of the function uses & to indicate that the type of the parameter s is a reference.

fn calculate_length(s: &String) -> usize { // s is a reference to a String
    s.len()
} // here, s goes out of scope, but because it does not have ownership of what it refers to, nothing is dropped

The scope in which the variable s is valid is the same as any function parameter’s scope, but we don’t drop what the reference points to when it goes out of scope because we don’t have ownership.

Functions that have references as parameters instead of the actual values mean we won’t need to return the values in order to give back ownership, since we never had ownership.

We call having references as function parameters borrowing. As in real life, if a person owns something, you can borrow it from them. When you’re done, you have to give it back.

If we try to modify something we’re borrowing, it won’t work.

Mutable References

fn main() {
    let mut s = String::from("Hello");
    change(&mut s);
}

fn change(some_string: &mut String) {
    some_string.push_str(", world");
}

First, we had to change s to be mut. Then we had to create a mutable reference with &mut s and accept a mutable reference with some_string: &mut String.

But mutable references have one big restriction: you can have only one mutable reference to a particular piece of data in a particular scope.

This restriction allows for mutation but in a very controlled fashion. It’s something that new Rustaceans struggle with, because most languages let you mutate whenever you’d like. The benefit of having this restriction is that Rust can prevent data races at compile time.

A data race is a particular type of race condition in which these three behaviors occur:

  • Two or more pointers access the same data at the same time.

  • At least one of the pointers is being used to write to the data.

  • There’s no mechanism being used to synchronize access to the data.

Data races cause undefined behavior and can be difficult to diagnose and fix when you’re trying to track them down at runtime. Rust prevents this problem from happening because it won’t even compile code with data races.

As always, we can use curly brackets to create a new scope, allowing for multiple mutable references, just not simultaneous ones:

let mut s = String::from("Hello");
{
    let r1 = &mut s;
} // r1 goes out of scope here, so we can make a new mutable reference with no problems
let r2 = &mut s;

A similar rule exists for combining mutable and immutable references. This code results in an error:

let mut s = String::from("Hello");
let r1 = &s; // no problem
let r2 = &s; // no problem
let r3 = &mut s; //BIG PROBLEM, mutable borrow occurs

We also cannot have a mutable reference while we have an immutable one.

Users of an immutable reference don’t expect the values to suddenly change out from under them.

Dangling References

In languages with pointers, it’s easy to erroneously create a dangling pointer, a pointer that references a location in memory that may have been given to someone else, by freeing some memory while preserving a pointer to that memory. In Rust, by contrast, the compiler guarantees that references will never be dangling references: if we have a reference to some data, the compiler will ensure that the data will not go out of scope before the reference to the data does.

The Rules of References

  • At any given time, you can have either (but not both) of:

    • One mutable reference

    • Any number of immutable references

  • References must always be valid.

Slices

Another data type that doesn’t have ownership is the slice. Slices let you reference a contiguous sequence of elements in a collection rather than the whole collection.

String Slice

A string slice is a reference to part of a String, and looks like this:

let s = String::from("Hello world");

let hello = &s[0..5];
let world = &s[6..11];

This is similar to taking a reference to the whole String, but with the extra [0..5] bit. Rather than a reference to the entire String, it’s a reference to a portion of the String.

We create slices with a range of [start_index..end_index], but the slice data structure actually stores the starting position and the length of the slice.

So in the case of let world = &s[6..11];, world would be a slice that contains a pointer to the byte at index 6 of s and a length value of 5.

With Rust’s .. range syntax, if you want to start at the first index (0), you can drop the value before the two periods; in other words, &s[0..2] and &s[..2] are equal.

By the same token, if your slice includes the last byte of the String, you can drop the trailing number.

You can also drop both values to take a slice of the entire string. With this in mind, here is a first_word function that returns the first word of a String as a slice:

fn first_word(s: &String) -> &str {
    let bytes = s.as_bytes();

    for (i, &item) in bytes.iter().enumerate() {
        if item == b' ' {
            return &s[0..i];
        }
    }

    &s[..]
}

String Literals Are Slices

let s = "Hello, World";

The type of s here is &str: it’s a slice pointing to that specific point of the binary. This is also why string literals are immutable; &str is an immutable reference.

String Slices as Parameters

Knowing that you can take slices of literals and Strings leads us to one more improvement on first_word, and that’s its signature:

fn first_word(s: &str) -> &str {

If we have a string slice, we can pass that directly. If we have a String, we can pass a slice of the entire String. Defining a function to take a string slice instead of a reference to a String makes our API more general and useful without losing any functionality.

Other Slices

String slices, as you might imagine, are specific to strings. But there’s a more general slice type, too.

let a = [1, 2, 3, 4, 5];
let slice = &a[1..3];

This slice has the type &[i32]. It works the same way as string slices do, by storing a reference to the first element and the length.

Basic Info of Web Worker

// index.js
var worker = new Worker('worker.js')
worker.postMessage() // start the worker

The postMessage method accepts either a string or a JSON-serializable object.

// index.js
worker.addEventListener('message', (e) => {
  console.log('worker said: ', e.data)
}, false)

worker.postMessage('hello world')

// worker.js
self.addEventListener('message', (e) => {
  self.postMessage(e.data)
}, false)

When index.js calls postMessage(), worker.js handles the message via the message event; the payload sent from index.js is available on e.data.

Messages passed between index.js and worker.js are copied rather than shared (serialized and then deserialized).

Example

<button onclick="sayHi()">SayHi</button>
<button onclick="unknownCmd()">unknownCmd</button>
<button onclick="stop()">stop</button>
<output id="result"></output>
// index.js
function sayHi () {
  worker.postMessage({
    cmd: 'start',
    msg: 'Hi',
  })
}

function stop () {
  worker.postMessage({
    cmd: 'stop',
    msg: 'Bye',
  })
}

function unknownCmd () {
  worker.postMessage({
    cmd: 'foobar',
    msg: '???',
  })
}

var worker = new Worker('worker.js')

worker.addEventListener('message', function (e) {
  document.getElementById('result').textContent = e.data
}, false)
// worker.js
self.addEventListener('message', function (e) {
  var data = e.data
  switch (data.cmd) {
    case 'start': {
      self.postMessage('WORKER STARTED: ' + data.msg)
      break
    }
    case 'stop': {
      self.postMessage('WORKER STOPPED: ' + data.msg + '. (buttons will no longer work)')
      self.close()
      break
    }
    default: {
      self.postMessage('Unknown command: ' + data.msg)
    }
  }
}, false)

There are two ways to stop a worker:

// index.js
worker.terminate()

// worker.js
self.close()

The Worker Environment

Worker scope

In worker.js, self and this both refer to the worker’s global scope.

Features available to workers

Because web workers run in separate threads, they only have access to a subset of JavaScript features:

  • The navigator object

  • The location object

  • XMLHttpRequest

  • setTimeout() / clearTimeout() / setInterval() / clearInterval()

  • The Application Cache

  • Importing external scripts with the importScripts() method

  • Spawning other workers

Workers do not have access to:

  • The DOM

  • The window object

  • The document object

  • The parent object

Loading external scripts

A worker can load external script files or libraries with the importScripts() function, which takes zero or more strings naming the resources to import.

// worker.js
importScripts('script1.js')
importScripts('script2.js')

importScripts('script3.js', 'script4.js')

Creating subworkers

  • Subworkers must be hosted on the same origin as the parent page.

  • URIs within subworkers are resolved relative to the parent worker’s location, not the owning page.

Handling errors

When an error occurs in a running worker, an ErrorEvent is fired. It contains three useful properties: filename, lineno, and message.
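As a minimal sketch (assuming the same worker object as above), the main page can listen for these errors like this:

// index.js
worker.addEventListener('error', (e) => {
  console.log('ERROR: line ' + e.lineno + ' in ' + e.filename + ': ' + e.message)
}, false)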

Deep in Viewport

vw (viewport width) and vh (viewport height) are length units that represent exactly 1% of the size of any given viewport, regardless of its measurements. rem (short for root em) is similar in its functionality, although it deals specifically with font sizes, and derives its name from the value of the font size of the root element – which should default to 16 pixels on most browsers.

There are also a few more viewport units available for use, such as vmin and vmax which refer respectively to 1% of the viewport’s smaller dimension, and 1% of its larger dimension.

Interestingly enough, browsers actually calculate the entire browser window when it comes to width, meaning they factor the scrollbar into this dimension. Should you attempt to set the width of an element to a value of 100vw, it would force a horizontal scrollbar to appear, since you’d be slightly stretching your viewport.

Device Pixels and CSS Pixels

Device Pixels are the kind of pixels we intuitively assume to be ‘right’. These pixels give the formal resolution of whichever device you’re working on, and can be read out from screen.width/height.

If you give a certain element a width: 128px, your monitor is 1024px wide, and you maximize your browser window, the element would fit on your monitor eight times.

If the user zooms, however, this calculation is going to change. If the user zooms to 200%, your element with width: 128px will fit only four times on this 1024px wide monitor.

Zooming as implemented in modern browsers consists of nothing more than ‘stretching up’ pixels. That is, the width of the element is not changed from 128 to 256px; instead, the actual pixels are doubled in size. Formally, the element still has a width of 128 CSS pixels, even though it happens to take the space of 256 device pixels.

In other words, zooming to 200% makes one CSS pixel grow to four times the size of one device pixel (two times the width, two times the height).

A few images will clarify the concept. Here are four pixels on 100% zoom level. Here CSS px fully overlap with device px.

Let’s zoom out. The CSS pixels start to shrink, meaning that one device px now overlaps several CSS px.

If you zoom in, the opposite happens. The CSS px start to grow, and now one CSS px overlaps several device px.

The point is that you are only interested in CSS px. It’s those px that dictate how your style sheet is rendered.

Device px are almost entirely useless to you.

At zoom level 100%, one CSS px is exactly equal to one device px.

Screen Size

screen.width and screen.height contain the total width and height of the user’s screen. These dimensions are measured in device px because they never change: they’re a feature of the monitor and not of the browser.

Window Size

window.innerWidth and window.innerHeight

Window Size is measured in CSS px.

Scrolling Offset

window.pageXOffset and window.pageYOffset contain the horizontal and vertical scrolling offsets of the document. Thus you can find out how much the user has scrolled.

These properties are measured in CSS px.

Viewport

The function of the viewport is to constrain the <html> element, which is the uppermost containing block of your site.

Suppose you have a liquid layout and one of your sidebars has width: 10%. Now the sidebar neatly grows and shrinks as you resize the browser window. How does that work?

Technically, what happens is that the sidebar gets 10% of the width of its parent, which ultimately traces back to the <html> element, since block-level elements normally take 100% of the width of their parent.

So your sidebar gets a width of 10% of the browser window width.

In theory, the width of the <html> element is restricted by the width of the viewport. The <html> element takes 100% of the width of that viewport.

The viewport, in turn, is exactly equal to the browser window: it’s been defined as such. The viewport is not an HTML construct, so you cannot influence it by CSS. It just has the width and height of the browser window – on desktop. On mobile it’s quite a bit more complicated.

While width: 100% works fine at 100% zoom, if we zoom in, the viewport becomes smaller than the total width of the site and the content spills out of the <html> element. But that element has overflow: visible, which means that the spilled-out content will be shown in any case.

Measuring the viewport

The Viewport size can be found in document.documentElement.clientHeight and document.documentElement.clientWidth

If you know your DOM, you know that document.documentElement is in fact the <html> element: the root element of any HTML document. However, the viewport is one level higher, so to speak, it’s the element that contains the <html> element. That matters if you give the <html> element a width.

So document.documentElement.clientWidth and document.documentElement.clientHeight always give the viewport dimensions, regardless of the dimensions of the <html> element.
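As a quick sketch, you can compare the two pairs of properties in the browser console:

// viewport dimensions, in CSS px
console.log(document.documentElement.clientWidth, document.documentElement.clientHeight)

// window dimensions (including scrollbars), in CSS px
console.log(window.innerWidth, window.innerHeight)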

Measuring the <html> element

document.documentElement.offsetWidth and document.documentElement.offsetHeight give the size of the <html> element itself.

Event coordinates

Then there are the event coordinates. When a mouse event occurs, no less than five property pairs are exposed to give you information about the exact place of the event. For our discussion three of them are important:

  • pageX/pageY gives the coordinates relative to the <html> element in CSS px.

  • clientX/Y gives the coordinates relative to the viewport in CSS px.

  • screenX/Y gives the coordinates relative to the screen in device px.

You’ll use pageX/Y 90% of the time. Usually you want to know the event position relative to the document.

The other 10% of the time you’ll use clientX/Y

You never ever need to know the event coordinates relative to the screen.
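A minimal sketch for comparing the three pairs on a click:

document.addEventListener('click', (e) => {
  console.log('page:   ', e.pageX, e.pageY)     // relative to the document, in CSS px
  console.log('client: ', e.clientX, e.clientY) // relative to the viewport, in CSS px
  console.log('screen: ', e.screenX, e.screenY) // relative to the screen, in device px
})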

Media queries

There are two relevant media queries: width/height and device-width/device-height.

  • width/height uses the same values as document.documentElement.clientWidth/clientHeight, namely the viewport; it works with CSS px.

  • device-width/device-height uses the same values as screen.width/height, in device px.

The problem of mobile browser

Let’s go back to our sidebar with width: 10%. If mobile browsers did exactly the same as desktop browsers, they’d make the element about 40px wide (if the device-width is 400px), and that’s too narrow. Your liquid layout would look horribly squashed.

Two viewports

The viewport is too narrow to serve as a basis for your CSS layout. The obvious solution is to make the viewport wider. That however requires it to be split into two: the visual viewport and the layout viewport.

A simple explanation from Stack Overflow:

Imagine the layout viewport as being a large image which does not change size or shape. Now imagine you have a smaller frame through which you look at the large image. The small frame is surrounded by opaque material which obscures your view of all but a portion of the large image. The portion of the large image that you can see through the frame is the visual viewport. You can back away from the large image while holding your frame (zoom out) to see the entire image at once, or you can move closer (zoom in) to see only a portion. You can also change the orientation of the frame, but the size and shape of the large image (layout viewport) never changes.

The visual viewport is the part of the page that’s currently shown on-screen.

The user may scroll to change the part of the page he sees, or zoom to change the size of the visual viewport.

However, the CSS layout, especially percentual widths, is calculated relative to the layout viewport, which is considerably wider than the visual viewport.

Thus the <html> element takes the width of the layout viewport initially, and your CSS is interpreted as if the screen were significantly wider than the phone screen. This makes sure that your site’s layout behaves as it does on a desktop browser.

How wide is the layout viewport? That differs per browser.

  • Safari uses 980px

  • Opera uses 850px

  • Android 800px

  • IE 974px

Understanding the layout viewport

In order to understand the size of the layout viewport, we have to take a look at what happens when the page is fully zoomed out. Many mobile browsers initially show any page in fully zoomed-out mode.

The point is: browsers have chosen the dimensions of the layout viewport such that it completely covers the screen in fully zoomed-out mode (and is thus equal to the visual viewport).

Thus the width and the height of the layout viewport are equal to whatever can be shown on the screen in the maximally zoomed-out mode. When the user zooms in these dimensions stay the same.

The layout viewport width is always the same. If you rotate your phone, the visual viewport changes, but the browser adapts to this new orientation by zooming in slightly so that the layout viewport is again as wide as the visual viewport.

This has consequences for the layout viewport’s height, which is now substantially less than in portrait mode. But web developers don’t care about the height, only about the width.

Measuring the layout viewport

document.documentElement.clientWidth and document.documentElement.clientHeight contain the layout viewport’s dimensions.

The orientation matters for the height, but not for the width.

Measuring the visual viewport

As to the visual viewport, it is measured by window.innerWidth/innerHeight. Obviously the measurements change when the user zooms out or in, since more or fewer CSS px fit into the screen.
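A sketch for comparing the two viewports and deriving an approximate zoom factor:

// layout viewport, in CSS px
var layoutWidth = document.documentElement.clientWidth

// visual viewport, in CSS px (shrinks as the user zooms in)
var visualWidth = window.innerWidth

console.log('approximate zoom factor: ' + layoutWidth / visualWidth)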

The Screen

As on desktop, screen.width/height gives the screen size, in device pixels. As on the desktop, you never need this information as a web developer.

Visual viewport position relative to layout viewport

window.pageXOffset and window.pageYOffset give the position of the visual viewport relative to the layout viewport – in other words, the scrolling offsets – in CSS px.

html element

Just as on desktop, document.documentElement.offsetWidth and document.documentElement.offsetHeight give the total size of the <html> element in CSS px.

Meta Viewport

<meta name="viewport" content="width=320">

It is meant to resize the layout viewport.

Suppose you build a simple page and give your elements no width. Now they stretch up to take 100% of the width of the layout viewport. Most browsers zoom out to show the entire layout viewport on the screen, giving an effect like this.

All users will immediately zoom in, which works, but most browsers keep the width of the elements intact, which makes the text hard to read.

Now what you could try is setting html { width: 320px }.

When you set

<meta name="viewport" content="width=320">

You set the width of the layout viewport to 320px.

Of course now we use

<meta name="viewport" content="width=device-width">

to adapt to various browsers.

Utils in Axios

// global toString
var toString = Object.prototype.toString

// isArray
function isArray (val) {
  return toString.call(val) === '[object Array]'
}

// isArrayBuffer
function isArrayBuffer (val) {
  return toString.call(val) === '[object ArrayBuffer]'
}

// isFormData
function isFormData (val) {
  return (typeof FormData !== 'undefined' && (val instanceof FormData))
}

// isString
function isString (val) {
  return typeof val === 'string'
}

// isNumber
function isNumber (val) {
  return typeof val === 'number'
}

// isUndefined
function isUndefined (val) {
  return typeof val === 'undefined'
}

// isObject
function isObject (val) {
  return val !== null && typeof val === 'object'
}

// isDate
function isDate (val) {
  return toString.call(val) === '[object Date]'
}

// isFile
function isFile (val) {
  return toString.call(val) === '[object File]'
}

// isBlob
function isBlob (val) {
  return toString.call(val) === '[object Blob]'
}

// isFunction
function isFunction (val) {
  return toString.call(val) === '[object Function]'
}

// isStream
function isStream (val) {
  return isObject(val) && isFunction(val.pipe)
}

// isURLSearchParams
function isURLSearchParams (val) {
  return typeof URLSearchParams !== 'undefined' && val instanceof URLSearchParams
}

// trim
function trim (str) {
  return str.replace(/^\s*/, '').replace(/\s*$/, '')
}

// isStandardBrowserEnv
function isStandardBrowserEnv () {
  if (typeof navigator !== 'undefined' && navigator.product === 'ReactNative') {
    return false
  }
  return (
    typeof window !== 'undefined' &&
    typeof document !== 'undefined'
  )
}

/**
 * Iterate over an Array or an Object invoking a function for each item
 */
function forEach (obj, fn) {
  if (obj === null || typeof obj === 'undefined') {
    return
  }
  // wrap non-object, non-array values so they can be iterated
  if (typeof obj !== 'object' && !isArray(obj)) {
    obj = [obj]
  }
  if (isArray(obj)) {
    for (let i = 0, l = obj.length; i < l; i++) {
      fn.call(null, obj[i], i, obj)
    }
  } else {
    for (let key in obj) {
      if (Object.prototype.hasOwnProperty.call(obj, key)) {
        fn.call(null, obj[key], key, obj)
      }
    }
  }
}
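As a quick sketch of how the forEach helper behaves with the three kinds of input it handles (a plain value, an array, and an object):

forEach('single', (v) => console.log(v))              // 'single' – wrapped in an array
forEach([1, 2, 3], (v, i) => console.log(i, v))       // 0 1, 1 2, 2 3
forEach({ a: 1, b: 2 }, (v, k) => console.log(k, v))  // a 1, b 2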

Concept of PWA

Advantages of Progressive Web Apps:

  • Reliable - Load instantly and never show the dinosaur.

  • Fast - Respond quickly to user interactions with silky smooth animations.

  • Engaging - Feel like a natural app on the device, with an immersive user experience.

What is a Progressive Web App

  • Progressive - Works for every user, regardless of browser choice because it’s built with progressive enhancement as a core tenet.

  • Responsive - Fits any form factor: desktop, mobile, tablet, or whatever is next.

  • Connectivity independent - Enhanced with service workers to work offline or on low-quality networks.

  • App-like - Feels like an app, because the app shell model separates the application functionality from application content.

  • Fresh - Always up-to-date thanks to the service worker update process.

  • Safe - Served via HTTPS to prevent snooping and to ensure content hasn’t been tampered with.

  • Discoverable - Is identifiable as an ‘application’ thanks to W3C manifest and service worker registration scope, allowing search engines to find it.

  • Re-engageable - Makes re-engagement easy through features like push notifications.

  • Installable - Allows users to add apps they find most useful to their home screen without the hassle of an app store.

  • Linkable - Easily share the application via URL, doesn’t require complex installation.

What is App Shell

The app’s shell is the minimal HTML, CSS, JavaScript that is required to power the user interface of a progressive web app and is one of the components that ensures reliably good performance. Its first load should be extremely quick and immediately cached.

‘Cached’ means that the shell files are loaded once over the network and then saved to the local device. Every subsequent time that the user opens the app, the shell files are loaded from the local device’s cache, which results in blazing-fast startup times.

App shell architecture separates the core application infrastructure and UI from the data. All of the UI and infrastructure is cached locally using a service worker so that on subsequent loads, the PWA only needs to retrieve the necessary data, instead of having to load everything.

A service worker is a script that your browser runs in the background, separate from a web page, opening the door to features that don’t need a web page or user interaction.

The app shell is similar to the bundle of code that you’d publish to an app store when building a native app. It is the core components necessary to get your app off the ground, but likely doesn’t contain the data.

Using the app shell architecture allows you to focus on speed, giving the PWA similar properties to native apps:

  • instant loading

  • regular updates

Implement App Shell

Create the HTML for the App Shell

The Components consist of:

  • Header with a title, and add/refresh buttons

  • Container for forecast cards

  • A forecast card template

  • A dialog for adding new cities

  • A loading indicator

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Weather PWA</title>
  <link rel="stylesheet" type="text/css" href="styles/inline.css">
</head>
<body>

  <header class="header">
    <h1 class="header__title">Weather PWA</h1>
    <button id="butRefresh" class="headerButton"></button>
    <button id="butAdd" class="headerButton"></button>
  </header>

  <main class="main">
    <div class="card cardTemplate weather-forecast" hidden>
      ...
    </div>
  </main>

  <div class="dialog-container">
    ...
  </div>

  <div class="loader">
    <svg viewBox="0 0 32 32" width="32" height="32">
      <circle id="spinner" cx="16" cy="16" r="14" fill="none"></circle>
    </svg>
  </div>

  <!-- insert link to app.js here -->
</body>
</html>

Notice the loader is visible by default. This ensures that the user sees the loader immediately as the page loads, giving them a clear indication that the content is loading.

Start with a fast load

Differentiating the first run

User preferences, like the list of cities a user has subscribed to, should be stored locally using IndexedDB or another fast storage mechanism. To simplify this code, here we use localStorage, which is not ideal for production apps because it is a blocking, synchronous storage mechanism that is potentially very slow on some devices.

// Save list of cities to localStorage
app.saveSelectedCities = function () {
  var selectedCities = JSON.stringify(app.selectedCities)
  localStorage.selectedCities = selectedCities
}

Next, let’s add the startup code to check if the user has any saved cities and render those

app.selectedCities = localStorage.selectedCities
if (app.selectedCities) {
  app.selectedCities = JSON.parse(app.selectedCities)
  app.selectedCities.forEach(function (city) {
    app.getForecast(city.key, city.label)
  })
} else {
  app.updateForecastCard(initialWeatherForecast)
  app.selectedCities = [
    {
      key: initialWeatherForecast.key,
      label: initialWeatherForecast.label,
    },
  ]
  app.saveSelectedCities()
}

Use service workers to pre-cache the App Shell

PWAs have to be fast and installable, which means they work online, offline, and on intermittent, slow connections.

To achieve this, we need to cache our app shell using a service worker, so that it’s always available quickly and reliably.

Features provided via service workers should be considered a progressive enhancement, and added only if supported by the browser.

Register the service worker if it’s available

The first step to making the app work offline is to register a service worker, a script that allows background functionality without the need of an open web page or user interaction.

This takes two simple steps:

  • Tell the browser to register the JavaScript file as the service worker

  • Create a JavaScript file containing the service worker

First, we need to check if the browser supports service worker, and if it does, register the service worker. Add the following code to app.js

if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('./service-worker.js')
    .then(function () {
      console.log('Service Worker Registered')
    })
}

Cache the site assets

When the service worker is registered, an install event is triggered the first time the user visits the page.

In this event handler, we will cache all the assets that are needed for the application.

In the install handler, the service worker should open the caches object and populate it with the assets necessary to load the App Shell. Create a file called service-worker.js in your application root folder. This file must live in the application root because the scope of a service worker is defined by the directory in which the file resides. Add this code to your new service-worker.js file:

var cacheName = 'weatherPWA'
var filesToCache = []

self.addEventListener('install', function (e) {
  console.log('[ServiceWorker] Install')
  e.waitUntil(
    caches.open(cacheName).then(function (cache) {
      console.log('[ServiceWorker] Caching app shell')
      return cache.addAll(filesToCache)
    })
  )
})

First, we need to open the cache with caches.open() and provide a cache name. Providing a cache name allows us to version files, or separate data from the app shell, so that we can easily update one without affecting the other.

Once the cache is open, we can then call cache.addAll(), which takes a list of URLs, fetches them from the server, and adds the responses to the cache. Unfortunately, cache.addAll() is atomic: if any of the files fail, the entire cache step fails.

DevTools can debug service workers. Before reloading your page, open up DevTools and go to the Service Worker pane on the Application panel.

If the pane is blank, the currently open page doesn't have any registered service workers.

Now reload your page. The Service Worker pane should list your newly registered service worker, which means the page has a service worker running.

Let’s add some logic on the activate event listener to update the cache.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
self.addEventListener('activate', function (e) {
console.log('[ServiceWorker] Activate')
e.waitUntil(
caches.keys().then(function (keyList) {
return Promise.all(keyList.map(function (key) {
if (key !== cacheName) {
console.log('[ServiceWorker] Removing old cache', key)
return caches.delete(key)
}
}))
})
)
return self.clients.claim()
})

This code ensures that your service worker updates its cache whenever any of the app shell files change. In order for this to work, you’d need to increment the cacheName variable at the top of your service worker file.

When the app is complete, self.clients.claim() fixes a corner case in which the app wasn't returning the latest data. You can reproduce the corner case by commenting out the self.clients.claim() line above and then doing the following steps:

  • load app for first time so that the initial City data is shown

  • press the refresh button on the app

  • go offline

  • reload the app

You expect to see the newer data, but you actually see the initial data. This happens because the service worker is not yet activated. self.clients.claim() essentially lets you activate the service worker faster.
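A related technique, not part of the codelab code above and shown here only as an illustrative sketch, is to also call self.skipWaiting() in the install handler so that an updated service worker activates as soon as installation finishes instead of waiting for all open tabs to close:

self.addEventListener('install', function (e) {
  // Ask the browser to activate this service worker immediately after install,
  // instead of waiting for existing pages using the old worker to close.
  self.skipWaiting()
  e.waitUntil(
    caches.open(cacheName).then(function (cache) {
      return cache.addAll(filesToCache)
    })
  )
})

Combined with self.clients.claim() in the activate handler, the new worker both activates early and takes control of already-open pages.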

Finally, let’s update the list of files required for the app shell. In the array, we need to include all of the files our app needs, including images, js, css, etc.

1
2
3
4
5
6
7
8
var filesToCache = [
'/',
'/index.html',
'/scripts/app.js',
'/styles/inline.css',
'/images/clear.png',
// ...
]

Serve the app shell from the cache

Service workers provide the ability to intercept requests made from our PWA and handle them within the service worker. That means we can determine how we want to handle the request and potentially serve our own cached response.

1
2
3
4
5
6
7
8
self.addEventListener('fetch', function (e) {
console.log('[ServiceWorker] Fetch', e.request.url)
e.respondWith(
caches.match(e.request).then(function (response) {
return response || fetch(e.request)
})
)
})

Working from the inside out, caches.match() evaluates the web request that triggered the fetch event and checks whether it's available in the cache. It then either responds with the cached version, or uses fetch to get a copy from the network. The response is passed back to the web page with e.respondWith().

Beware of the edge cases

This code must not be used in production because of the many unhandled edge cases:

  • Cache depends on updating the cache key for every change

  • Requires everything to be redownloaded for every change

  • Browser cache may prevent the service worker cache from updating

  • Beware of cache-first strategies in production

Simple Guide of Jest

Install

1
yarn add --dev jest

Simple Case

1
2
3
4
5
6
// sum.js
function sum (a, b) {
return a + b
}

module.exports = sum
1
2
3
4
5
6
7
8
// sum.test.js
// `test` and `expect` are globals provided by Jest,
// so there is no need to require them here.
const sum = require('./sum')

test('adds 1 + 2 to equal 3', () => {
expect(sum(1, 2)).toBe(3)
})

With Babel

1
yarn add --dev babel-jest
1
2
3
4
5
6
7
8
9
10
// .babelrc
{
"presets": [
[ "env", {
"targets": {
"node": "current"
}
}]
]
}

Jest automatically defines NODE_ENV as test, so it will not fall back to the development section of your Babel configuration the way Babel does when NODE_ENV is unset.

babel-jest is automatically installed when installing Jest and will automatically transform files if a babel configuration exists in your project. To avoid this, you can explicitly reset the transform configuration option:

In package.json

1
2
3
4
5
{
"jest": {
"transform": {}
}
}

Globals

In your test files, Jest puts each of its methods and objects into the global environment. You don’t need to require or import anything to use them.

Methods

  • afterAll(fn): Runs a function after all the tests in this file have completed. If the function returns a promise, Jest waits for that promise to resolve before continuing. This is often useful if you want to clean up some global setup state that is shared across tests.

If afterAll is inside a describe block, it runs at the end of the describe block.

  • afterEach(fn): Runs a function after each test in this file. If the function returns a promise, Jest waits for the promise to resolve before continuing. This is often useful if you want to clean up some temporary state that is created by each test.

If afterEach is inside a describe block, it only runs after the tests that are inside this describe block.

  • beforeAll(fn): similar to afterAll()

  • beforeEach(fn): similar to afterEach()
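A small sketch of how beforeEach and afterEach might be used to reset shared state between tests (not from the Jest docs, just an illustration):

let cities = []

beforeEach(() => {
  // runs before every test in this file
  cities = ['Oslo', 'Tokyo']
})

afterEach(() => {
  // runs after every test in this file
  cities = []
})

test('starts with the seeded cities', () => {
  expect(cities).toContain('Oslo')
})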

  • describe(name, fn): create a block that groups together several related tests in one ‘test suite’. For example, if you have a myBeverage object that is supposed to be delicious but not sour, you could test it with:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
const myBeverage = {
delicious: true,
sour: false,
}

describe('my beverage', () => {
test('is delicious', () => {
expect(myBeverage.delicious).toBeTruthy()
})

test('is not sour', () => {
expect(myBeverage.sour).toBeFalsy()
})
})

This isn’t required - you can just write the test block directly at the top level.

  • describe.only(name, fn): You can use describe.only if you want to run only one describe block
1
2
3
4
5
6
7
8
9
describe.only('my beverage', () => {
test('is delicious', () => {
expect(myBeverage.delicious).toBeTruthy()
})
})

describe('my other beverage', () => {
// ...will be skipped
})
  • describe.skip(name, fn): you can use describe.skip if you do not want to run a particular describe block:
1
2
3
4
5
6
7
8
9
describe('my beverage', () => {
test('is delicious', () => {
expect(myBeverage.delicious).toBeTruthy()
})
})

describe.skip('my other beverage', () => {
// ... will be skipped
})
  • require.requireActual(moduleName): returns the actual module instead of a mock, bypassing all checks on whether the module should receive a mock implementation or not.

  • require.requireMock(moduleName): returns a mock module instead of the actual module, bypassing all checks on whether the module should be required normally or not.

  • test(name, fn): Also under the alias it(name, fn), all you need in a test file is the test method, which runs a test. For example, let’s say there’s a function inchesOfRain() that should be zero. Your whole test could be:

1
2
3
test('did not rain', () => {
expect(inchesOfRain()).toBe(0)
})

The first argument is the test name; the second argument is a function that contains the expectations to test

If a promise is returned from test, Jest will wait for the promise to resolve before letting the test complete.

1
2
3
4
5
test('has lemon in it', () => {
return fetchBeverageList().then(list => {
expect(list).toContain('lemon')
})
})

Even though the call to test will return right away, the test doesn’t complete until the promise resolves as well.

  • test.only(name, fn): similar to describe.only()

  • test.skip(name, fn): similar to describe.skip()

Webpack Code Splitting - Async

Currently a ‘function-like’ import() module loading syntax proposal is on the way into ECMAScript.

The ES module loader spec defines import() as a method to load ES modules dynamically at runtime.

Webpack treats import() as a split-point and puts the requested module in a separate chunk. import() takes the module name as argument and returns a Promise: import(name) => Promise

1
2
3
4
5
6
7
function determineDate () {
import('moment').then((moment) => {
console.log(moment().format())
}).catch(e => console.log('failed'))
}

determineDate()

Note that a fully dynamic statement, such as import(foo), will fail because webpack requires at least some file location information. foo could potentially be any path to any file in your system or project, so the import() must contain at least some information about where the module is located, allowing bundling to be limited to a specific directory or set of files.
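An illustrative sketch of the difference (the ./locales directory and file names here are hypothetical):

// Works: webpack sees the static prefix './locales/' and the '.json' suffix,
// so it can create a chunk for every file that matches the pattern.
function loadLocale (language) {
  return import(`./locales/${language}.json`)
}

// Does not work: `path` could be anything, so webpack has no way of knowing
// which files to include in the bundle.
function loadAnything (path) {
  return import(path)
}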

Chunk Name

Since webpack 2.4.0, chunk names for dynamic imports can be specified using a “magic comment”

1
import(/* webpackChunkName: 'my-chunk-name' */ 'module')

Since webpack 2.6.0, the placeholders [request] and [index] are supported:

1
2
3
4
5
// will generate files like `i18n/namespace-i18n-bundle-en.json`
import(/* webpackChunkName: 'i18n/[request]' */ `i18n/${namespace}-i18n-bundle-${language}.json`)

// will generate files `i18n-0`, `i18n-1`
import(/* webpackChunkName: 'i18n-[index]' */ `i18n/${namespace}-i18n-bundle-${language}.json`)

import mode

Since webpack 2.6.0, different modes for resolving dynamic imports can be specified:

1
import(/* webpackMode: 'lazy' */ `i18n/${namespace}-i18n-${language}.json`)
  • lazy: The default behavior. Lazy generates a chunk per request. So everything is lazy loaded.

  • lazy-once: Only available for imports with expression. Generate a single chunk for all possible requests. So the first request causes a network request for all modules, all following requests are already fulfilled.

  • eager: Eager generates no chunk. All files are included in the current chunk. No network request is required to load the files. It still returns a Promise, but it’s already resolved.

You can combine both options (webpackChunkName and webpackMode); they are parsed as a JSON5 object without the curly brackets:

1
import(/* webpackMode: 'lazy-once', webpackChunkName: 'all-i18n-data' */ `i18n/${namespace}-i18n-${language}.json`)

Usage with Babel

If you want to use import with Babel, you’ll need to install the syntax-dynamic-import plugin while it’s still Stage 3 to get around the parser error.

1
yarn add --dev babel-core babel-loader babel-plugin-syntax-dynamic-import babel-preset-es2015
1
2
3
4
function determineDate () {
import('moment').then(moment => moment().format()).then(str => console.log(str)).catch(e => console.log(e))
}
determineDate()
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
module.exports = {
entry: './index',
output: {
filename: 'app.js'
},
module: {
rules: [
{
test: /\.js$/,
exclude: /node_modules/,
loader: 'babel-loader'
}
]
}
}
1
2
3
4
{
"presets": [["es2015", { "modules": false }]],
"plugins": ["syntax-dynamic-import"]
}

Setting "modules": false disables Babel’s ES module transform, leaving import/export for webpack to handle.

Not using the syntax-dynamic-import plugin will fail the build with

1
Module build failed: SyntaxError: 'import' and 'export' may only appear at the top level

or

1
Module build failed: SyntaxError: Unexpected token, expected \{

Usage with Babel and async / await

To use es7 async/await with import()

1
yarn add --dev babel-plugin-transform-async-to-generator babel-plugin-transform-regenerator babel-plugin-transform-runtime babel-plugin-syntax-async-functions
1
2
3
4
5
6
async function determineDate () {
const moment = await import('moment')
return moment().format()
}

determineDate().then(str => console.log(str))
1
2
3
4
5
6
7
8
9
10
{
"presets": [["es2015", { "modules": false }]],
"plugins": [
"syntax-async-functions",
"syntax-dynamic-import",
"transform-async-to-generator",
"trasnform-regenerator",
"transform-runtime"
]
}

import() imports the entire module namespace

Note that the promise is resolved with the module namespace. Consider the following two examples:

1
2
3
4
5
// Example 1: top-level import
import * as Component from './component'

// Example 2: Code Splitting With Import()
import('./component').then(Component => /* ... */)

Component in both of the cases resolves to the same thing, meaning that when using import() with ES2015 modules you have to explicitly access default and named exports:

1
2
3
4
5
6
async function main () {
// Destructuring example
const { default: Component } = await import('./component')
// Inline example
render((await import('./component')).default)
}

Mongodb Simple Guide

  • yarn add mongoose

  • import mongoose from 'mongoose'

  • const db = mongoose.connect(MONGODB_URI)

1
2
3
4
5
6
7
8
9
10
11
import mongoose from 'mongoose'

const db = mongoose.connect(MONGODB_URI)

db.connection.on('open', () => {
console.log('connected')
})
db.connection.on('error', (err) => {
console.log(`db error: ${err}`)
process.exit()
})

Schema

A Schema defines the shape of the documents in a collection

1
2
3
4
5
6
7
import mongoose from 'mongoose'
const TestSchema = new mongoose.Schema({
name: { type: String },
age: { type: Number, default: 0 },
time: { type: Date, default: Date.now },
email: { type: String, default: '' },
})

Primitive Types: String, Number, Boolean, null, Array, Document, Date

Model

A Model is compiled from a Schema and has the ability to operate on the database

1
2
const db = mongoose.connect(MONGODB_URI)
const TestModel = db.model('test1', TestSchema)
  • test1: Name of Collection in the DB

Entity

An Entity is an instance of a Model; it also has the ability to operate on the database

1
2
3
4
5
6
7
8
const TestEntity = new TestModel({
name: 'Lenka',
age: 35,
email: 'lenka@gmail.com',
})

console.log(TestEntity.name) // Lenka
console.log(TestEntity.age) // 35

Collection

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
import mongoose from 'mongoose'
const db = mongoose.connect(MONGODB_URI)
const TestSchema = new mongoose.Schema({
name: { type: String },
age: { type: Number, default: 0 },
email: { type: String },
time: { type: Date, default: Date.now },
})

const TestModel = db.model('test1', TestSchema)
const TestEntity = new TestModel({
name: 'hello world',
age: 12,
email: 'helloworld@gmail.com',
})
TestEntity.save((err, doc) => {
if (err) {
console.log('error: ' + err)
} else {
console.log(doc)
}
})

Find

1
Model.find(conditions, fields, callback)

If fields is omitted or null, the returned docs will include all attributes

1
2
3
4
5
6
7
8
Model.find({ 'age': 25 }, (err, docs) => {
if (err) return console.log(err)
console.log(docs)
})

Model.find({}, { name: 1, age: 1, _id: 0 }, (err, docs) => {
// the specific attributes will be returned if corresponding fields are set to positive(here they are name and age), and _id is default to be displayed, if you want to omit it, you should set it to be 0.
})

findOne

Same as find, except that it returns only the first matching document

1
2
3
TestModel.findOne(condition, fields, (err, doc) => {
// ...
})

findById

Same as findOne, but it finds a document by its _id only

1
2
3
TestModel.findById(obj._id, (err, doc) => {
// ...
})

Create

1
Model.create({}, callback)
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
Model.create({
name: 'test',
age: 12,
}, (err, doc) => {
if (err) return console.log(err)
console.log(doc)
})

TestModel.create([
{ name: 'test1', age: 20 },
{ name: 'test2', age: 20 },
{ name: 'test3', age: 20 },
{ name: 'test4', age: 20 },
{ name: 'test5', age: 20 },
{ name: 'test6', age: 20 },
{ name: 'test7', age: 20 },
{ name: 'test8', age: 20 },
{ name: 'test9', age: 20 },
])

Save

1
entity.save(callback)
1
2
3
4
5
6
7
8
const Entity = new Model({
name: 'entity_save'
})

Entity.save((err, doc) => {
if (err) return console.log(err)
console.log(doc)
})

Model.create operates on the Model directly, while Entity.save persists a document instance that you have already constructed.

Update

1
Model.update(conditions, update, callback)
1
2
3
4
5
6
7
8
const conditions = { name: 'test_update' }

const update = { $set: { age: 16 } }

TestModel.update(conditions, update, (err) => {
if (err) return console.log(err)
console.log('success')
})

Remove

1
Model.remove(conditions, callback)
1
2
3
4
5
6
const conditions = { name: 'tim' }

TestModel.remove(conditions, (err) => {
if (err) return console.log(err)
console.log('success')
})

Advanced Search

  • $lt: less than

  • $lte: less than or equal to

  • $gt: greater than

  • $gte: greater than or equal to

  • $ne: not equal to

  • $in: matches any value in the given array

  • $or: or

  • $exists: the field exists

  • $all: the array field contains all of the given elements

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
Model.find({ "age": { "$gt": 18, "$lt": 50 } }, (err, docs) => {
// ...
})

Model.find({ "age": { "$in": [20, 30] } }, (err, docs) => {
// ...
})

Model.find({
"$or": [
{ "name": "yaya" },
{ "age": 28 },
]
}, (err, docs) => {
// ...
})

Model.find({
name: "$exists"
}, (err, docs) => {
// ...
})

Limit

1
2
3
find(conditions, null, { limit: 20 }, (err, docs) => {
// ...
})
1
2
3
Model.find({}, null, { limit: 20 }, (err, docs) => {
// ...
})

Skip

Skip the first n docs

1
2
3
find({}, null, { skip: 4 }, (err, docs) => {
// ...
})

If there are fewer than 4 docs in total, nothing will be returned
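Combining skip with limit and sort (described next) is a common way to paginate results. A minimal sketch, assuming the TestModel and schema from above and a page size of 10:

const pageSize = 10
const page = 2 // 1-based page number

TestModel.find({}, null, {
  sort: { time: -1 },          // newest first
  skip: (page - 1) * pageSize, // skip the earlier pages
  limit: pageSize,             // return at most one page of docs
}, (err, docs) => {
  if (err) return console.log(err)
  console.log(docs)
})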

Sort

-1: descending, 1: ascending

1
2
3
Model.find({}, null, { sort: { age: -1 } }, (err, docs) => {
// ...
})

ObjectId

In MongoDB the default id field, _id, can be of any type, and defaults to ObjectId

An ObjectId is a 12-byte BSON value:

  • 4 bytes: Unix timestamp

  • 3 bytes: machine identifier

  • 2 bytes: process id

  • 3 bytes: counter, starting from a random value

Schema add Attribute

1
2
3
4
5
6
7
import mongoose from 'mongoose'
const TestSchema = new mongoose.Schema()
TestSchema.add({
name: 'String',
email: 'String',
age: 'Number',
})

Schema add instance method

1
2
3
4
5
6
7
8
9
10
11
import mongoose from 'mongoose'
const TestSchema = new mongoose.Schema({
name: String,
})

TestSchema.method('test', function () {
console.log('hah')
})
const Say = mongoose.model('say', TestSchema)
const lenka = new Say()
lenka.test() // 'hah'

Schema add static method

import mongoose from 'mongoose'
const db = mongoose.connect(MONGODB_URI)
const TestSchema = new mongoose.Schema({
name: { type: String },
age: { type: Number },
})

TestSchema.static('findByName', function (name, cb) {
return this.find({ name: name }, cb)
})

const TestModel = db.model('test', TestSchema)
TestModel.findByName('tim', (err, docs) => {
// ...
})

TypeScript With Node

Configuring TypeScript Compilation

TypeScript uses the file tsconfig.json to adjust project compile options

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
"compilerOptions": {
"module": "commonjs",
"target": "es6",
"noImplicityAny": true,
"moduleResolution": "node",
"sourceMap": true,
"outDir": "dist",
"baseUrl": ".",
"paths": {
"*": [
"node_modules/*",
"src/types/*"
]
}
}
compilerOptions Description
"module": "commonjs" The output module type (in your .js files). Node uses commonjs
"target": "es6" The output language level. Node supports ES6
"noImplicitAny": true Enables a stricter setting which throws errors when something has a default any type
"moduleResolution": "node" TypeScript attempts to mimic Node’s module resolution strategy
"sourceMap": true Generates .js.map files so the compiled output can be mapped back to the .ts source
"outDir": "dist" Location to output .js files after compilation
"baseUrl": "." Part of configuring module resolution
"paths": {…} Part of configuring module resolution

The rest of the file defines the TypeScript project context. The project context is basically a set of options that determine which files are compiled when the compiler is invoked with a specific tsconfig.json.

1
2
3
"include": [
"src/**/*"
]

include takes an array of glob pattern of files to include in the compilation.

Type Definition (.d.ts) Files

TypeScript uses .d.ts files to provide types for JavaScript libraries that were not written in TypeScript. This is great because once you have a .d.ts file, TypeScript can type check that library and provide you better help in your editor. The TypeScript community actively shares all the most up-to-date .d.ts files for popular libraries on a Github repository called DefinitelyTyped.

Because the "noImplicityAny": true, we are required to have a .d.ts file for every library used. You could set noImplicityAny to false to silence errors about missing .d.ts files. It’s a best practice to have a .d.ts file for every library(Even the .d.ts file is basically empty)

Installing .d.ts files from DefinitelyTyped

For the most part, you’ll find d.ts files for the libraries you are using on DefinitelyTyped. These .d.ts files can be easily installed into your project by using npm scope @types. For example, if we want the .d.ts file for jQuery, we can do so with npm install --save-dev @types/jquery.

Once .d.ts files have been installed using npm, you should see them in your node_modules/@types folder. The compiler will always look in this folder for .d.ts files when resolving JavaScript libraries.

What if a library isn’t on DefinitelyTyped?

Setting up TypeScript to look for .d.ts files in another folder

The Compiler knows to look in node_modules/@types by default, but to help the compiler find our own .d.ts files we have to configure path mapping in our tsconfig.json. Path mapping can get pretty confusing, but the basic idea is that the TypeScript compiler will look in specific places, in a specific order when resolving modules, and we have the ability to tell the compiler exactly how to do it. In the tsconfig.json for this project you’ll see the following:

1
2
3
4
5
6
"baseUrl": ".",
"paths": {
"*": [
"src/types/*"
]
}

This tells the TypeScript compiler that, in addition to looking in node_modules/@types for every import (*), it should also look in our own .d.ts file location <baseUrl> + src/types/*

First the compiler will look for a .d.ts file in node_modules/@types and then src/types

Summary of .d.ts management

In general if you stick to the following steps you should have minimal .d.ts issues:

  • After installing any npm package as a dependency or dev dependency, immediately try to install the .d.ts file via @types

  • If the library has a .d.ts file on DefinitelyTyped, the install will succeed and you are done; if the install fails because the package doesn’t exist, write a .d.ts file yourself (a minimal example is sketched below)
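A hand-written declaration file can be as small as a single declare module statement. A minimal sketch, using a hypothetical untyped package name and the src/types path mapping shown earlier:

// src/types/some-untyped-lib/index.d.ts
// 'some-untyped-lib' is a hypothetical package name used only for illustration.
declare module 'some-untyped-lib' {
  export function doSomething(input: string): Promise<string>
}

With this file in place, import statements for the package type-check, and you can flesh out the declarations incrementally as you learn the library's real API.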

Source Map

In the tsconfig.json

1
2
3
"compilerOptiosn": {
"sourceMaps": true
}

With this option enabled, next to every .js file that the TypeScript compiler outputs there will be a .js.map file as well. This .js.map file provides the information necessary to map back to the source .ts file while debugging.

Using Debugger in VS Code

When debugging in VS Code, it looks for a top level .vscode folder with a launch.json file. In this file, you can tell VS Code exactly what you want to do:

1
2
3
4
5
6
7
8
9
10
11
{
"type": "node",
"request": "launch",
"name": "Debug",
"program": "${workspaceRoot}/dist/server.js",
"smartStep": true,
"outFiles": [
"../dist/**/*.js"
],
"protocal": "inspector"
}

This is mostly identical to the “Node.js: Launch Program” template with a couple minor changes:

launch.json Options Description
"program": "${workspaceRoot}/dist/server.js" Modified to point to our entry point in dist
"smartStep": true Won’t step into code that doesn’t have a source map
"outFiles": […] Specifies where output files are dropped. Used with source maps
"protocol": "inspector" Use the new Node debug protocol because we’re on the latest Node

Drag, Upload and Progress

Dragging and dropping files from your desktop to a browser is one of the ultimate goals for web application integration, which consists of:

  • enable file dragging and dropping onto a web page element

  • analyze dropped files in JavaScript

  • load and parse files on the client

  • asynchronously upload files to the server using XMLHttpRequest2

  • show a graphical progress bar while the upload occurs

  • use progressive enhancement to ensure your file upload form works in any browser

The File API

  • FileList: represents an array of selected files

  • File: represents an individual file

  • FileReader: an interface which allows us to read file data on the client and use it within JavaScript
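As a quick illustration of FileReader, a sketch that reads a File object (obtained from a drop or a file input) as text:

function readFileAsText (file) {
  const reader = new FileReader()
  reader.onload = function (e) {
    // e.target.result contains the file contents as a string
    console.log(e.target.result)
  }
  reader.onerror = function () {
    console.log('could not read', file.name)
  }
  reader.readAsText(file) // readAsDataURL / readAsArrayBuffer also exist
}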

JavaScript Events

Dragged Object

  • dragstart

  • drag

  • dragend

Target Object

  • dragenter

  • dragover

  • dragleave

  • drop

dataTransfer

  • dropEffect: copy | move | link | none

  • effectAllowed: copy | move | link | copyLink | copyMove | linkMove | none | all (default)

  • files

  • types

  • setDragImage(imgElement, x, y): set custom icon along dragging

  • setData(format, data)

  • getData(format)

  • clearData()

Notice

By default, the browser refuses all drag actions (and a file dragged from the desktop into the browser will simply be opened by it), so e.preventDefault() should be called in the dragover and drop event handlers

Dragging Text will automatically set e.dataTransfer.setData('text/plain', node.innerText)

Dragging File will add files to e.dataTransfer.files
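Putting the pieces together, a minimal drop target might look like the sketch below (the 'dropzone' element id is hypothetical, and readFileAsText is the FileReader sketch above):

const dropzone = document.getElementById('dropzone') // hypothetical element id

dropzone.addEventListener('dragover', function (e) {
  e.preventDefault() // allow dropping
  e.dataTransfer.dropEffect = 'copy'
})

dropzone.addEventListener('drop', function (e) {
  e.preventDefault() // stop the browser from opening the file
  const files = e.dataTransfer.files
  for (let i = 0; i < files.length; i++) {
    console.log(files[i].name, files[i].size, files[i].type)
    readFileAsText(files[i])
  }
})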

Simple Usage of flatMap

Original

Both map() and flatMap() take a function f as a parameter that controls how an input Array is translated to an output Array:

  • With map(), each input Array element is translated to exactly one output element, aka, f returns a single value

  • With flatMap(), each input Array element is translated to zero or more output elements, aka, f returns an Array of values.

A simple implementation of flatMap:

1
2
3
4
5
6
7
8
9
10
11
12
function flatMap (arr, mapFunc) {
const result = []
for (const [index, value] of arr.entries()) {
const x = mapFunc(value, index, arr)
if (Array.isArray(x)) {
result.push(...x)
} else {
result.push(x)
}
}
return result
}

flatMap is simpler if mapFunc is only allowed to return Arrays, but we don’t impose this restriction here, because non-Array values are occasionally useful.
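A quick usage example of this flatMap:

// one input element can map to zero, one, or several output elements
flatMap([1, 2, 3], x => [x, x * 10])      // [1, 10, 2, 20, 3, 30]
flatMap([1, 2, 3], x => x % 2 ? [x] : []) // [1, 3] (acts like a filter)
flatMap([1, 2, 3], x => x)                // [1, 2, 3] (non-Array results are kept as-is)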

Filtering and mapping at the same time

1
2
3
4
5
6
7
8
9
10
11
12
13
14
function processArray (arr, processFunc) {
return arr.map(x => {
try {
return { value: processFunc(x) }
} catch (e) {
return { error: e }
}
})
}

const results = processArray(myArray, myFunc)

const values = flatMap(results, result => result.value ? [result.value] : []) // we wrap result.value in an array so that flatMap doesn't flatten it if it happens to be an array itself
const errors = flatMap(results, result => result.error ? [result.error] : [])

Mapping to multiple values

The Array method map() maps each input Array element to one output element. But what if we want to map it to multiple output elements?

That becomes necessary in the following example: The React component TagList is invoked with two attributes

1
<TagList tags={['foo', 'bar', 'baz']} handleClick={x => console.log(x)} />

The attributes are:

  • An Array of tags, each tag being a string

  • A callback for handling clicks on tags

TagList is rendered as a series of links separated by commas:

1
2
3
4
5
6
7
8
9
10
11
class TagList extends React.Component {
render () {
const { tags, handleClick } = this.props
return flatMap(tags, (tag, index) => [
...(index > 0 ? [', '] : []),
<a key={index} href="" onClick={e => handleClick(tag, e)}>
{tag}
</a>
])
}
}

Here each tag (except the first) provides two elements in the rendered Array

Arbitrary Iterables

flatMap can be generalized to work with arbitrary iterables

1
2
3
4
5
6
7
function* flatMapIter(iterable, mapFunc) {
let index = 0
for (const x of iterable) {
yield* mapFunc(x, index)
index++
}
}

flatMapIter function works with Arrays:

1
2
3
4
function fillArray (x) {
return new Array(x).fill(x)
}
console.log([...flatMapIter([1,2,3], fillArray)])

Implementing flatMap via reduce

You can use the Array method reduce to implement a simple version of flatMap

1
2
3
4
5
6
function flatMap (arr, mapFunc) {
return arr.reduce(
(prev, x) => prev.concat(mapFunc(x)),
[],
)
}

Related to flatMap: flatten

flatten is an operation that concatenates all the elements of an Array

1
2
> flatten(['a', ['b', 'c'], ['d']])
['a', 'b', 'c', 'd']

It can be implemented as follows:

1
const flatten = (arr) => [].concat(...arr)

So the following expressions are equivalent

1
2
flatten(arr.map(func))
flatMap(arr, x => x)

New Babel Preset - Env

babel-preset-env is a new preset which let you specify an environment and automatically enables the necessary plugins.

At the moment, several presets let you determine what features Babel should support:

  • babel-preset-es2015, babel-preset-es2016, etc: incrementally support various versions of ECMAScript. babel-preset-es2015 transpiles what’s new in ES6 to ES5, babel-preset-es2016 transpiles what’s new in ES7 to ES6.

  • babel-preset-latest: supports all features that are either part of an ECMAScript version or at stage 4.

The problem with these presets is that they often do too much. For example, most modern browsers support ES6 generators. Yet if you use babel-preset-es2015, generator functions will always be transpiled to complex ES5 code.

babel-preset-env works like babel-preset-latest, but it lets you specify an environment and only transpiles features that are missing in that environment.

Note that you need to install and enable plugins and/or presets for experimental features(that are not part of babel-preset-latest)

On the plus side, you don’t need es2015 presets anymore.

Browsers

For browsers you have the option to specify either:

  • Browsers via browserslist query syntax

    • Support the last two versions of browsers and IE 7+

      1
      2
      3
      4
      5
      6
      7
      8
      9
      10
      11
      12
      "babel": {
      "presets": [
      [
      "env",
      {
      "targets": {
      "browsers": ["last 2 versions", "ie >= 7"]
      }
      }
      ]
      ]
      }
    • Support browsers that have more than 5% market share

      1
      2
      3
      "targets": {
      "browsers": "> 5%"
      }
    • Fixed versions of browsers:

      1
      2
      3
      "targets": {
      "chrome": 56
      }

Node.js

If you compile your code for Node.js on the fly via Babel, babel-preset-env is especially useful, because it reacts to the currently running version of Node.js if you set the target node to current

1
2
3
4
5
6
7
8
9
10
11
12
13
14
"babel": {
"presets": [
[
"env",
{
"target": {
"targets": {
"node": "current"
}
}
}
]
]
}

Additional Options for babel-preset-env

modules(string, default: ‘commonjs’)

This options lets you configure to which module format ES6 modules are transpiled:

  • Transpile to popular module formats: ‘amd’, ‘commonjs’, ‘systemjs’, ‘umd’

  • Don’t transpile: false

include, exclude (Array of strings, default [])

  • include: always enables certain plugins

  • exclude: prevents certain plugins from being enabled

useBuiltIns (boolean, default: false)

Babel comes with a polyfill for new functionality in the standard library. babel-preset-env can optionally import only those parts of the polyfill that are needed on the specified platforms.

There are two ways of using the polyfill:

  • core-js polyfills ES5, ES6+ as needed

    • install polyfill: yarn add core-js

    • activate polyfill: import 'core-js'

  • babel-polyfill: polyfills core-js and regenerator runtime(to emulate generators on ES5)

    • install polyfill: yarn add babel-polyfill

    • activate polyfill: import 'babel-polyfill'

Either of the two import statements is transpiled to an environment-specific sequence of more fine-grained imports:

1
2
3
4
5
import "core-js/modules/es7.string.pad-start";
import "core-js/modules/es7.string.pad-end";
import "core-js/modules/web.timers";
import "core-js/modules/web.immediate";
import "core-js/modules/web.dom.iterable";

debug (boolean, default: false)

Logs the following information via console.log()

  • Targeted environments

  • Enabled transformers

  • Enabled plugins

  • Enabled polyfills

Example

1
2
3
4
5
6
7
8
9
10
11
12
13
14
{
"presets": [
["env",
{
"targets": {
"safari": 10
},
"modules": false,
"useBuiltIns": true,
"debug": true
}
]
]
}

Babel-Polyfill or Babel-Runtime

The babel-polyfill and babel-runtime modules are used to serve the same function in two different ways. Both modules ultimately serve to emulate an ES6 environment.

Both babel-polyfill and babel-runtime emulate an ES6 environment with two things:

  • a slew of polyfills as provided by core-js

  • complete generator runtime

babel-polyfill accomplishes this task by assigning methods on the global object or on native type prototypes, which means that once it is required, as far as the JavaScript runtime you’re using is concerned, ES6 methods and objects simply exist. If you were to require babel-polyfill in a script run under an old version of Node such as v0.10 – a runtime which does not natively support the Promise API – your script would then have access to the Promise object. As far as you are concerned, you’re suddenly using an environment that supports the Promise object.

babel-runtime does something very similar, but in a way that does not pollute native object prototypes or the global namespace. Instead, babel-runtime is a module that you can list as a dependency of your application like any other module, which polyfills ES6 methods. In other words and continuing the example from above, while you may not have the Promise object available to you, you now have the same functionality available to you from require('babel-runtime/core-js/promise'). By itself, this is useful but inconvenient. Fortunately, babel-runtime is not intended to be used by itself. Rather, babel-runtime is intended to be paired with the transform – babel-plugin-transform-runtime – which will automatically rewrite your code such that you can write your code using the Promise API and it will be transformed to use the Promise-like object exported by babel-runtime

babel-polyfill offers you the convenience of globally defined objects without having to transform your code further. However, as with anything that mutates a global, this can introduce collisions between versions, etc.

babel-runtime, on the other hand, will not suffer from collisions, as everything is namespaced. Since the module will be defined in your package.json, it can be versioned like everything else. The tradeoff, however, is that a transform can only do so much. The runtime remaps methods according to a definitions map. Anecdotally, this has covered each of my use cases, but there may be an obscure method or two which is not remapped. There are also certain cases where your intent is ambiguous. In such cases, the transform won’t know exactly what to do.

Conclusion

To summarize, with the general case for Babel 6, there are two main steps you’ll need to perform:

  • Provide your code with an emulated ES6 environment by either requiring babel-polyfill or requiring the babel-runtime module plus the babel-plugin-transform-runtime transform:
1
2
3
4
// for babel-polyfill, either add:
require('babel-polyfill')

// for babel-runtime, install the module, then use the babel-plugin-transform-runtime transform by including it in your .babelrc file.

Deep in Runtime-Transform

This plugin is recommended in a library/tool

Note: Instance methods such as 'foobar'.includes('foo') will not work since that would require modification of existing built-ins(Use babel-polyfill for that)

Babel uses very small helpers for common functions such as _extends. By default these are added to every file that requires them. This duplication is sometimes unnecessary, especially when your application is spread out over multiple files.

This is where the transform-runtime plugin comes in: all of the helpers will reference the module babel-runtime to avoid duplication across your compiled output. The runtime will be compiled into your build.

Another purpose of this transformer is to create a sandboxed environment for your code. If you use babel-polyfill and the built-ins it provides, such as Promise, Set and Map, those will pollute the global scope. While this might be OK for an app or a command line tool, it becomes a problem if your code is a library which you intend to publish for others to use, or if you can’t exactly control the environment in which your code will run.

The transformer will alias these built-ins to core-js so you can use them seamlessly without having to require the polyfill.

Prod and Dev

In most cases, you should install babel-plugin-transform-runtime as a development dependency, and babel-runtime as a production dependency.

Usage

I prefer to use .babelrc

1
2
3
4
5
6
7
8
9
10
{
"plugins": [
["transform-runtime", {
"helpers": false,
"polyfill": false,
"regenerator": true,
"moduleName": "babel-runtime"
}]
]
}

The options are described below.

  • helpers: boolean, defaults to true

Toggles whether or not inlined Babel helpers (classCallCheck, extends, etc.) are replaced with calls to moduleName.

  • polyfill: boolean, defaults to true

Toggles whether or not new built-ins (Promise, Set, Map, etc.) are transformed to use a non-global-polluting polyfill.

  • regenerator: boolean, defaults to true

Toggles whether or not generator functions are transformed to use a regenerator runtime that does not pollute the global scope.

  • moduleName: string, defaults to babel-runtime

Sets the name/path of the module used when importing helpers.

Example:

1
2
3
{
"moduleName": "flavortown/runtime"
}
1
import _extends from 'flavortown/runtime/helpers/extends'

Technical Details

The runtime transformer plugin does three things:

  • Automatically requires babel-runtime/regenerator when you use generator/async functions;

  • Automatically requires babel-runtime/core-js and maps ES6 static methods and built-ins;

  • Removes the inline Babel helpers and uses the module babel-runtime/helpers instead.

You can use built-ins such as Promise, Set, Symbol, etc., as well as all the Babel features that require a polyfill, seamlessly and without global pollution, making it extremely suitable for libraries.

Regenerator aliasing

1
2
3
function * foo () {

}

the following is generated

1
2
3
4
5
6
7
8
9
10
11
12
13
'use strict'

var _marked = [foo].map(regeneratorRuntime.mark)

function foo () {
return regeneratorRuntime.wrap(function foo$(_context) {
while (1) switch (_context.prev = _context.next) {
case 0:
case 'end':
return _context.stop()
}
}, _marked[0], this)
}

This isn’t ideal as then you have to include the regenerator runtime which pollutes the global scope.

Instead, what the runtime transformer does is compile that to:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
'use strict'

var _regenerator = require('babel-runtime/regenerator')
var _regenerator2 = _interopRequireDefault(_regenerator)

function _interopRequireDefault (obj) {
return obj && obj.__esModule ? obj : { default: obj }
}

var _marked = [foo].map(_regenerator2.default.mark)

function foo () {
return _regenerator2.default.wrap(function foo$(_context) {
while (1) switch (_context.prev = _context.next) {
case 0:
case 'end':
return _context.stop()
}
}, _marked[0], this)
}

This means that you can use the regenerator runtime without polluting your current environment.

The same approach is used for core-js aliasing and helper aliasing.
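For core-js aliasing, the effect is similar. Roughly speaking (the exact generated code depends on the Babel version, so this is only an illustrative sketch), a global built-in such as Promise is rewritten to use the namespaced babel-runtime module:

// source
const p = new Promise(resolve => resolve(42))

// roughly what the runtime transform produces (simplified);
// _interopRequireDefault is the same helper shown in the regenerator example above
var _promise = require('babel-runtime/core-js/promise')
var _promise2 = _interopRequireDefault(_promise)

var p = new _promise2.default(function (resolve) {
  return resolve(42)
})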

Faster React Functional Components

Original

A basic Avatar component:

1
2
3
4
5
class Avatar extends React.Component {
render () {
return <img src={this.props.url} />
}
}

And its functional component style is:

1
const Avatar = ({ url }) => <img src={url} />

As you can see, it’s just a simple js function returning an element.

React still does a lot of work for functional components that, by their nature, will never be needed.

But we can skip React's internals for these functional components.

They are just plain JavaScript functions, which means we can call them directly in the render function.

1
2
3
4
5
6
7
ReactDOM.render(
<div>
{Avatar({ url: avatarUrl })}
<div>{commentBody}</div>
</div>,
mountNode,
)

As we know, the traditional usage:

1
<Avatar url={avatarUrl} />

will be compiled into

1
React.createElement(Avatar, { url: avatarUrl })

It will go through the full lifecycle of a React component.

But by calling the plain JavaScript function directly, all of this overhead can be eliminated.

By the way, transform-react-inline-elements does the same thing as a Babel transform, so there’s no need to change the source code.

CSS Shorthand Collection

1
2
3
4
5
6
7
8
9
10
body {
background:
url(...) /* image */
top center / 200px 200px /* position / size */
no-repeat /* repeat */
fixed /* attachment */
padding-box /* origin */
content-box /* clip */
red; /* color */
}