# Express Response Times Example

In this example we’ll create a server with an index page that prints out ‘hello world’, and a page http://localhost:3000/times that prints out the last ten response times recorded in InfluxDB.

Get started by installing and importing everything we need. This example requires Node 6.
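The original code block is missing here; a minimal sketch of the setup, assuming the express and influx packages from npm:

```javascript
// npm install --save express influx
const Influx = require('influx')
const express = require('express')
const http = require('http')
const os = require('os')

const app = express()
```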

Now create a new file, app.js, and start writing.

Create a new Influx client. We tell it to use the express_response_db database by default, and give it some information about the schema we’re writing. It can use this to be smarter about the data formats it writes, and to do some basic validation for us.
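The code for this step is missing; it presumably resembled the following, using node-influx’s schema option (the measurement and field names are illustrative):

```javascript
const Influx = require('influx')

const influx = new Influx.InfluxDB({
  host: 'localhost',
  database: 'express_response_db',
  schema: [
    {
      measurement: 'response_times',
      fields: {
        path: Influx.FieldType.STRING,
        duration: Influx.FieldType.INTEGER
      },
      tags: ['host']
    }
  ]
})
```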

Now we have a working influx client!

We’ll make sure the database exists and boot the app.
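A sketch of that boot step, assuming the influx client and the app from the previous steps are in scope:

```javascript
influx.getDatabaseNames()
  .then(names => {
    // create the database if it doesn't exist yet
    if (!names.includes('express_response_db')) {
      return influx.createDatabase('express_response_db')
    }
  })
  .then(() => {
    http.createServer(app).listen(3000, () => {
      console.log('Listening on port 3000')
    })
  })
  .catch(err => {
    console.error(`Error creating Influx database! ${err.stack}`)
  })
```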

Finally we’ll define the middleware and routes we’ll use. We have a generic middleware that records the time between when a request comes in and when we respond to it. We also have a route, /times, which prints out the last ten timings we recorded.
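Those pieces might look like the following (the measurement and tag names continue the earlier assumptions):

```javascript
app.use((req, res, next) => {
  const start = Date.now()

  res.on('finish', () => {
    const duration = Date.now() - start
    // record how long this response took
    influx.writePoints([
      {
        measurement: 'response_times',
        tags: { host: os.hostname() },
        fields: { duration, path: req.path }
      }
    ]).catch(err => {
      console.error(`Error saving data to InfluxDB! ${err.stack}`)
    })
  })

  return next()
})

app.get('/', (req, res) => {
  res.send('hello world')
})

app.get('/times', (req, res) => {
  influx.query(`
    select * from response_times
    where host = ${Influx.escape.stringLit(os.hostname())}
    order by time desc
    limit 10
  `).then(result => {
    res.json(result)
  }).catch(err => {
    res.status(500).send(err.stack)
  })
})
```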

# koa-compress

Compress middleware for Koa

## Options

The options are passed to zlib

• filter: An optional function that checks the response content type to decide whether to compress. By default, it uses compressible.

• threshold: Minimum response size in bytes to compress. Defaults to 1024 bytes (1 KB).
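A usage sketch (the filter and threshold values here are only examples, not defaults):

```javascript
const Koa = require('koa')
const compress = require('koa-compress')

const app = new Koa()

app.use(compress({
  // hypothetical filter: only compress text-like responses
  filter: contentType => /text/i.test(contentType),
  // only compress bodies of 2 KB or more
  threshold: 2048
}))
```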

### koa-morgan

HTTP Request logger middleware for node.js

#### morgan(format, options)

Create a new morgan logger middleware function using the given format and options.

The format argument may be the name of a predefined format, a format string, or a function that will produce a log entry.

The format function will be called with three arguments, tokens, req and res, where tokens is an object with all defined tokens, req is the HTTP request and res is the HTTP response. The function is expected to return a string that will be the log line, or undefined/null to skip logging.
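For example, a custom format function might be sketched like this, using a few of morgan’s built-in tokens:

```javascript
const Koa = require('koa')
const morgan = require('koa-morgan')

const app = new Koa()

app.use(morgan((tokens, req, res) => {
  // build the log line from individual tokens
  return [
    tokens.method(req, res),
    tokens.url(req, res),
    tokens.status(req, res),
    tokens['response-time'](req, res), 'ms'
  ].join(' ')
}))
```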

#### predefined format string

combined: Standard Apache combined log output.

common: Standard Apache common log output.

dev: Concise output colored by response status, intended for development use.

short: Shorter than the default format, also including response time.

tiny: The minimal output.

#### write logs to a file

Single file

A simple app that will log all requests in the Apache combined format to the file access.log:
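That app might be sketched as follows (the access.log path is relative to the app’s directory):

```javascript
const fs = require('fs')
const Koa = require('koa')
const morgan = require('koa-morgan')

// create a write stream in append mode
const accessLogStream = fs.createWriteStream(__dirname + '/access.log', { flags: 'a' })

const app = new Koa()

// log every request in the Apache combined format
app.use(morgan('combined', { stream: accessLogStream }))

app.listen(3000)
```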

### koa-session

Simple session middleware for Koa. The default is a cookie-based session, with support for external stores.

Requires Node 7.6 or greater for async/await support.

#### Options

The cookie name is controlled by the key option, which defaults to ‘koa:sess’. All other options are passed to ctx.cookies.get() and ctx.cookies.set(), allowing you to control security, domain, path and signing, among other settings.
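A sketch of typical usage, following the koa-session README (the secret and maxAge values are placeholders):

```javascript
const Koa = require('koa')
const session = require('koa-session')

const app = new Koa()
app.keys = ['some secret']   // required for signed session cookies

app.use(session({
  key: 'koa:sess',    // the cookie name; this is the default
  maxAge: 86400000,   // one day, passed through to ctx.cookies.set()
  httpOnly: true,
  signed: true
}, app))

app.use(ctx => {
  // a simple per-session view counter
  let n = ctx.session.views || 0
  ctx.session.views = ++n
  ctx.body = n + ' views'
})

app.listen(3000)
```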

# Conceptual Overview

The first thing to understand about git rebase is that it solves the same problem as git merge. Both of these commands are designed to integrate changes from one branch into another branch – they just do it in very different ways.

Consider what happens when you start working on a new feature in a dedicated branch, then another team member updates the master branch with new commits. This results in a forked history, which should be familiar to anyone who has used Git as a collaboration tool.

Now, let’s say that the new commits in master are relevant to the feature that you’re working on. To incorporate the new commits into your feature branch, you have two options: merging or rebasing.

# The Merge Option

The easiest option is to merge the master branch into the feature branch using something like the following:

Or you can condense this to a one-liner:
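The commands themselves are missing; here is a runnable sketch in a throwaway repository (the commit contents are invented for illustration):

```shell
# set up a toy repo where master gains a commit while feature is in progress
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
git checkout -q -b master
echo base > base.txt && git add . && git commit -q -m "base"
git checkout -q -b feature
echo feature > feature.txt && git add . && git commit -q -m "feature work"
git checkout -q master
echo upstream > upstream.txt && git add . && git commit -q -m "upstream work"

# the merge option: merge master into the feature branch
git checkout -q feature
git merge -q --no-edit master

git log --oneline --graph   # the new merge commit ties both histories together
```

The checkout-and-merge pair is often condensed to a single line such as `git checkout feature && git merge master`.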

This creates a new merge commit in the feature branch that ties together the histories of both branches, giving you a branch structure that looks like this:

Merging is nice because it’s a non-destructive operation. The existing branches are not changed in any way. This avoids all of the potential pitfalls of rebasing.

On the other hand, this also means that the feature branch will have an extraneous merge commit every time you need to incorporate upstream changes. If master is very active, this can pollute your feature branch history quite a bit. While it’s possible to mitigate this issue with advanced git log options, it can make it hard for other developers to understand the history of the project.

# The Rebase Option

As an alternative to merging, you can rebase the feature branch onto the master branch using the following commands:
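A runnable sketch of the same situation, rebasing instead (again with invented commits):

```shell
# toy repo: master gains a commit while feature is in progress
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
git checkout -q -b master
echo base > base.txt && git add . && git commit -q -m "base"
git checkout -q -b feature
echo feature > feature.txt && git add . && git commit -q -m "feature work"
git checkout -q master
echo upstream > upstream.txt && git add . && git commit -q -m "upstream work"

# the rebase option: replay feature's commits on top of master
git checkout -q feature
git rebase -q master

git log --oneline --graph   # a linear history, with no merge commit
```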

This moves the entire feature branch to begin at the tip of the master branch, effectively incorporating all of the new commits in master. But, instead of using a merge commit, rebasing rewrites the project history by creating brand new commits for each commit in the original branch.

The major benefit of rebasing is that you get a much cleaner project history. First, it eliminates the unnecessary merge commits required by git merge. Second, as you can see in the above diagram, rebasing also results in a perfectly linear project history – you can follow the top of feature all the way to the beginning of the project without any forks. This makes it easier to navigate your project with commands like git log.

# Interactive Rebasing

Interactive rebasing gives you the opportunity to alter commits as they are moved to the new branch. This is even more powerful than an automated rebase, since it offers complete control over the branch’s commit history. Typically, this is used to clean up a messy history before merging a feature branch into master.

To begin an interactive rebasing session, pass the -i option to the git rebase command.

This will open a text editor listing all of the commits that are about to be moved:

This listing defines exactly what the branch will look like after the rebase is performed. By changing the pick command and/or reordering the entries, you can make the branch’s history look like whatever you want.

For example, if the 2nd commit fixes a small problem in the 1st commit, you can condense them into a single commit with the fixup command:
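The listing itself is not shown; after running `git rebase -i master` it would look something like the following, where the hashes and messages are hypothetical and the second line’s pick has been changed to fixup:

```
pick 33d5b7a Message for the 1st commit
fixup 9480b3d Fixes a small problem in the 1st commit
pick 5c67e61 Message for the 3rd commit
```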

When you save and close the file, Git will perform the rebase according to your instructions, resulting in project history that looks like the following:

Eliminating insignificant commits like this makes your feature’s history much easier to understand. This is something that git merge simply cannot do.

# The Golden Rule of Rebasing

The Golden Rule of git rebase is to never use it on public branches.

For example, think about what would happen if you rebased master onto your feature branch.

The rebase moves all of the commits in master onto the tip of feature. The problem is that this only happened in your repository. All of the other developers are still working with the original master. Since rebasing results in brand new commits, Git will think that your master branch’s history has diverged from everybody else’s.

The only way to synchronize the two master branches is to merge them back together, resulting in an extra merge commit and two sets of commits that contain the same changes.

# Type Compatibility in TypeScript

Type compatibility in TypeScript is based on structural subtyping. Structural typing is a way of relating types based solely on their members. This is in contrast with nominal typing. Consider the following code:
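The code block is missing here; the usual handbook-style example looks like this (the Named and Person names are assumed from the following paragraph):

```typescript
interface Named {
  name: string;
}

class Person {
  name: string = "Alice";
}

// OK, because of structural typing: Person has a compatible `name` member,
// even though it never declares `implements Named`.
let p: Named = new Person();
```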

In a nominally-typed language like C# or Java, the equivalent code would be an error, because the Person class does not explicitly describe itself as being an implementor of the Named interface.

TypeScript’s structural type system was designed based on how JavaScript code is typically written. Because JavaScript widely uses anonymous objects like function expressions and object literals, it’s much more natural to represent the kinds of relationships found in JavaScript libraries with a structural type system instead of a nominal one.

The basic rule for TypeScript’s structural type system is that x is compatible with y if y has at least the same members as x.

To check whether y can be assigned to x, the compiler checks each property of x to find a corresponding compatible property in y. In this case, y must have a member called name that is a string, so the assignment is allowed.

The same rule for assignment is used when checking function call arguments: type compatibility governs both variable assignment and function arguments.
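A sketch of both cases under that rule (the names are illustrative):

```typescript
interface Named {
  name: string;
}

// y's inferred type is { name: string; location: string }
let y = { name: "Alice", location: "Seattle" };

// assignment: y has at least the members of Named, so this is allowed
let x: Named = y;

// function arguments: the same compatibility check applies
function greet(n: Named): string {
  return "Hello, " + n.name;
}
const greeting = greet(y);
```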

# Comparing Two Functions

To check if x is assignable to y, we first look at the parameter list. Each parameter in x must have a corresponding parameter in y with a compatible type.

So x is assignable to y, but y is not assignable to x.
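For instance, a minimal sketch, with x and y following the text’s naming:

```typescript
let x = (a: number) => 0;
let y = (b: number, s: string) => 0;

// OK: every parameter of x has a corresponding compatible parameter in y
y = x;

// Error if uncommented: y requires a second parameter that x lacks
// x = y;
```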

Now let’s look at how return types are treated, using two functions that differ only by their return type:

The type system enforces that the source function’s return type be a subtype of the target type’s return type.
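A sketch of the return-type rule with the two functions the text alludes to:

```typescript
let x = () => ({ name: "Alice" });
let y = () => ({ name: "Alice", location: "Seattle" });

// OK: y's return type has at least the members of x's return type
x = y;

// Error if uncommented: x's return type lacks the location property
// y = x;
```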

# Manual of Ownership in Rust

Ownership is Rust’s most unique feature, and it enables Rust to make memory safety guarantees without needing a garbage collector. Therefore, it’s important to understand how ownership works in Rust. In this chapter we’ll talk about ownership as well as several related features: borrowing, slices, and how Rust lays data out in memory.

# What is Ownership

Rust’s central feature is ownership. Although the feature is straightforward to explain, it has deep implications for the rest of the language.

All programs have to manage the way they use a computer’s memory while running. Some languages have garbage collection that constantly looks for memory that is no longer used as the program runs. In other languages, the programmer must explicitly allocate and free the memory. Rust uses a third approach: memory is managed through a system of ownership, with a set of rules that the compiler checks at compile time. No run-time costs are incurred for any of the ownership features.

Because ownership is a new concept for many programmers, it does take some time to get used to. The good news is that the more experienced you become with Rust and the rules of the ownership system, the more you’ll be able to naturally develop code that is safe and efficient.

# Ownership Rules

First, let’s take a look at the ownership rules.

1. Each value in Rust has a variable that’s called its owner.
1. There can only be one owner at a time.
1. When the owner goes out of scope, the value will be dropped.

# Variable Scope

A scope is the range within a program for which an item is valid.

The variable s refers to a string literal, where the value of the string is hardcoded into the text of our program. The variable is valid from the point at which it’s declared until the end of the current scope.

• When s comes into scope it is valid

• It remains so until it goes out of scope
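A sketch of such a scope, with comments marking where s is and isn’t valid:

```rust
fn scope_demo() -> usize {
    // s is not valid here: it's not yet declared
    let len = {
        let s = "hello"; // s is valid from this point forward
        s.len()          // do stuff with s
    };                   // this scope is over; s is no longer valid
    len
}

fn main() {
    println!("{}", scope_demo());
}
```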

At this point, the relationship between scopes and when variables are valid is similar to that in other programming languages. Now we’ll build on top of this understanding by introducing the String type.

# The String Type

To illustrate the rules of ownership, we need a data type that is more complex than the simple values we’ve seen so far.

We’ll use String as the example here and concentrate on the parts of String that relate to ownership. These aspects also apply to other complex data types provided by the standard library and that you create.

We’ve already seen string literals, where a string value is hardcoded into our program. String literals are convenient, but they aren’t always suitable for every situation in which you want to use text. One reason is that they’re immutable. Another is that not every string value can be known when we write our code. For these situations, Rust has a second string type, String. This type is allocated on the heap and as such is able to store an amount of text that is unknown to us at compile time. You can create a String from a string literal using the from function, like so:

The double colon (::) is an operator that allows us to namespace this particular from function under the String type, rather than using some sort of name like string_from.

This kind of string can be mutated:
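Putting those two steps together, a minimal sketch:

```rust
fn build_greeting() -> String {
    // create a heap-allocated String from a literal with the `from` function
    let mut s = String::from("hello");
    // unlike a literal, a String can be mutated
    s.push_str(", world!");
    s
}

fn main() {
    println!("{}", build_greeting());
}
```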

So why can String be mutated, but literals cannot?

# Memory and Allocation

In the case of a string literal, we know the contents at compile time, so the text is hardcoded directly into the final executable, making string literals fast and efficient. But these properties only come from the literal’s immutability. Unfortunately, we can’t put a blob of memory into the binary for each piece of text whose size is unknown at compile time and whose size might change while running the program.

With the String type, in order to support a mutable, growable piece of text, we need to allocate an amount of memory on the heap, unknown at compile time, to hold the contents. This means:

• The memory must be requested from the operating system at runtime.

• We need a way of returning this memory to the operating system when we’re done with our String

That first part is done by us: when we call String::from, its implementation requests the memory it needs. This is pretty much universal in programming languages.

However, the second part is different. In languages with a garbage collector (GC), the GC keeps track of and cleans up memory that isn’t being used anymore, and we, as the programmer, don’t need to think about it. Without a GC, it’s the programmer’s responsibility to identify when memory is no longer being used and call code to explicitly return it, just as we did to request it. Doing this correctly has historically been a difficult programming problem. If we forget, we’ll waste memory. If we do it too early, we’ll have an invalid variable. If we do it twice, that’s a bug too. We need to pair exactly one allocate with exactly one free.

Rust takes a different path: the memory is automatically returned once the variable that owns it goes out of scope.

There is a natural point at which we can return the memory our String needs to the operating system: when s goes out of scope. When a variable goes out of scope, Rust calls a special function for us. This function is called drop, and it’s where the author of String can put the code to return the memory. Rust calls drop automatically at the closing }.

Note: In C++, this pattern of deallocating resources at the end of an item’s lifetime is sometimes called Resource Acquisition Is Initialization (RAII). The drop function in Rust will be familiar to you if you’ve used RAII patterns.

This pattern has a profound impact on the way Rust code is written. It may seem simple right now, but the behavior of code can be unexpected in more complicated situations, when we want to have multiple variables use the data we’ve allocated on the heap.

# Ways Variables and Data Interact: Move

Multiple variables can interact with the same data in different ways in Rust. Let’s look at an example using an integer.

Here we get two independent variables, x and y. This is because integers are simple values with a known, fixed size, and the two 5 values are pushed onto the stack.

Now let’s look at the String version.

This looks very similar to the previous code, so we might assume that the way it works would be the same: that is, the second line would make a copy of the value in s1 and bind it to s2. But this isn’t quite what happens.

To explain this more thoroughly, let’s look at what String looks like under the covers in the Figure.

A String is made up of three parts, shown on the left: a pointer to the memory that holds the contents of the string, a length, and a capacity. This group of data is stored on the stack. On the right is the memory on the heap that holds the contents.

The length is how much memory, in bytes, the contents of the String are currently using. The capacity is the total amount of memory, in bytes, that the String has received from the operating system. The difference between length and capacity matters, but not in this context, so for now, it’s fine to ignore the capacity.

When we assign s1 to s2, the String data is copied, meaning we copy the pointer, the length and the capacity that are on the stack. We do not copy the data on the heap that the pointer refers to. In other words, the data representation in memory looks like the following figure.

Earlier, we said that when a variable goes out of scope, Rust automatically calls the drop function and cleans up the heap memory for the variable. But now both data pointers point to the same location. This is a problem: when s2 and s1 go out of scope, they will both try to free the same memory. This is known as a double free error. Freeing memory twice can lead to memory corruption, which can potentially lead to security vulnerabilities.

To ensure memory safety, there’s one more detail to what happens in this situation in Rust. Instead of trying to copy the allocated memory, Rust considers s1 to no longer be valid and therefore, Rust doesn’t need to free anything when s1 goes out of scope. Check out what happens when you try to use s1 after s2 is created.
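A sketch of the move; the commented-out line is the one the compiler rejects:

```rust
fn move_demo() -> String {
    let s1 = String::from("hello");
    let s2 = s1; // pointer, length and capacity are copied; s1 is invalidated

    // Uncommenting the next line fails to compile:
    // println!("{}", s1); // error[E0382]: use of moved value: `s1`
    s2
}

fn main() {
    println!("{}", move_demo());
}
```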

You’ll get an error like this because Rust prevents you from using the invalidated reference:

This behavior in Rust is similar to a shallow copy: the pointer, length and capacity on the stack are copied, but Rust also invalidates the first variable.

We call this a move.

s1 has been invalidated.

That solves our problem: with only s2 valid, when it goes out of scope it alone will free the memory, and we’re done.

In addition, there’s a design choice that’s implied by this: Rust will never automatically create ‘deep’ copies of your data. Therefore, any automatic copying can be assumed to be inexpensive in terms of runtime performance.

# Ways Variables and Data Interact: Clone

If we do want to deeply copy the heap data of the String, not just the stack data, we can use a common method called clone.
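A minimal sketch:

```rust
fn clone_demo() -> (String, String) {
    let s1 = String::from("hello");
    let s2 = s1.clone(); // the heap data is copied too, so s1 stays valid
    (s1, s2)
}

fn main() {
    let (s1, s2) = clone_demo();
    println!("s1 = {}, s2 = {}", s1, s2);
}
```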

This works just fine; the heap data does get copied.

When you see a call to clone, you know that some arbitrary code is being executed and that code may be expensive. It’s a visual indicator that something different is going on.

# Stack-Only Data: Copy

There’s another wrinkle we haven’t talked about yet, involving code that uses integers.

This code runs correctly because types like integers that have a known size at compile time are stored entirely on the stack, so copies of the actual values are quick to make. That means there’s no reason we would want to prevent x from being valid after we create the variable y. In other words, there’s no difference between deep and shallow copying here, so calling clone wouldn’t do anything differently from the usual shallow copying, and we can leave it out.
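The integer version the text refers to, as a sketch:

```rust
fn copy_demo() -> (i32, i32) {
    let x = 5;
    let y = x; // i32 is Copy, so x remains valid after the assignment
    (x, y)
}

fn main() {
    let (x, y) = copy_demo();
    println!("x = {}, y = {}", x, y);
}
```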

Rust has a special annotation called the Copy trait that we can place on types like integers that are stored on the stack.

If a type has the Copy trait, an older variable is still usable after the assignment. Rust won’t let us annotate a type with the Copy trait if the type, or any of its parts, has implemented the Drop trait. If the type needs something special to happen when the value goes out of scope and we add the Copy annotation to that type, we’ll get a compile time error.

As a general rule, any group of simple scalar values can be Copy, and nothing that requires allocation or is some form of resource is Copy. Here are some of the types that are Copy:

• All the integer types

• The boolean type

• All the floating point types

• Tuples, but only if they contain types that are also Copy.

# Ownership and Functions

The semantics for passing a value to a function are similar to assigning a value to a variable. Passing a variable to a function will move or copy, just like assignment.

If we tried to use s after the call to take_ownership, Rust would throw a compile-time error. These static checks protect us from mistakes.
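A sketch of both cases, using the take_ownership name from the text:

```rust
fn take_ownership(some_string: String) -> usize {
    some_string.len()
} // some_string goes out of scope here and `drop` frees its memory

fn makes_copy(some_integer: i32) -> i32 {
    some_integer
} // i32 is Copy, so nothing special happens

fn main() {
    let s = String::from("hello");
    take_ownership(s);  // s is moved into the function...
    // println!("{}", s); // ...so using s here would be a compile-time error

    let x = 5;
    makes_copy(x);      // x is copied into the function...
    println!("{}", x);  // ...so x is still valid here
}
```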

# Return Values and Scope

Returning values can also transfer ownership.

It’s possible to return multiple values using a tuple.
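A sketch of returning ownership, and of returning several values at once with a tuple:

```rust
fn gives_ownership() -> String {
    String::from("yours") // the return value is moved out to the caller
}

// hand back both the String and its computed length in a tuple
fn calculate_length(s: String) -> (String, usize) {
    let length = s.len();
    (s, length)
}

fn main() {
    let s1 = gives_ownership();
    let (s2, len) = calculate_length(s1);
    println!("The length of '{}' is {}.", s2, len);
}
```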

## References and Borrowing

Here is how you would define and use a calculate_length function that has a reference to an object as a parameter instead of taking ownership of the value.
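A sketch of that definition and call:

```rust
fn calculate_length(s: &String) -> usize {
    s.len()
} // s goes out of scope, but it doesn't own what it refers to, so nothing is dropped

fn main() {
    let s1 = String::from("hello");
    let len = calculate_length(&s1); // pass a reference; s1 keeps ownership
    println!("The length of '{}' is {}.", s1, len);
}
```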

First notice that all the tuple code in the variable declaration and the function return value is gone. Second, note that we pass &s1 into calculate_length, and in its definition, we take &String rather than String.

These ampersands (&) are references, and they allow you to refer to some value without taking ownership of it.

In this figure, the &String s is pointing to the String s1.

Let’s take a closer look at the function call here:

The &s1 syntax lets us create a reference that refers to the value of s1 but does not own it.

Because it does not own it, the value it points to will not be dropped when the reference goes out of scope.

Likewise, the signature of the function uses & to indicate that the type of the parameter s is a reference.

The scope in which the variable s is valid is the same as any function parameter’s scope, but we don’t drop what the reference points to when it goes out of scope because we don’t have ownership.

Functions that have references as parameters instead of the actual values mean we won’t need to return the values in order to give back ownership, since we never had ownership.

We call having references as function parameters borrowing. As in real life, if a person owns something, you can borrow it from them. When you’re done, you have to give it back.

If we try to modify something we’re borrowing, it won’t work.

## Mutable References

First, we had to change s to be mut. Then we had to create a mutable reference with &mut s, and accept a mutable reference with some_string: &mut String.
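Those three changes, sketched together:

```rust
fn change(some_string: &mut String) {
    some_string.push_str(", world");
}

fn main() {
    let mut s = String::from("hello"); // s itself must be mut
    change(&mut s);                    // and we pass a mutable reference
    println!("{}", s);
}
```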

But mutable references have one big restriction: you can have only one mutable reference to a particular piece of data in a particular scope.

This restriction allows for mutation but in a very controlled fashion. It’s something that new Rustaceans struggle with, because most languages let you mutate whenever you’d like. The benefit of having this restriction is that Rust can prevent data races at compile time.

A data race is a particular type of race condition in which these three behaviors occur:

• Two or more pointers access the same data at the same time.

• At least one of the pointers is being used to write to the data.

• There’s no mechanism being used to synchronize access to the data.

Data races cause undefined behavior and can be difficult to diagnose and fix when you’re trying to track them down at runtime. Rust prevents this problem from happening because it won’t even compile code with data races.

As always, we can use curly brackets to create a new scope, allowing for multiple mutable references, just not simultaneous ones:
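A sketch:

```rust
fn scoped_muts() -> String {
    let mut s = String::from("hello");

    {
        let r1 = &mut s;
        r1.push_str(" there");
    } // r1 goes out of scope here, so a new mutable reference is allowed

    let r2 = &mut s;
    r2.push('!');
    s
}

fn main() {
    println!("{}", scoped_muts());
}
```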

A similar rule exists for combining mutable and immutable references. This code results in an error:

We also cannot have a mutable reference while we have an immutable one.

Users of an immutable reference don’t expect the values to suddenly change out from under them.

## Dangling References

In languages with pointers, it’s easy to erroneously create a dangling pointer, a pointer that references a location in memory that may have been given to someone else, by freeing some memory while preserving a pointer to that memory. In Rust, by contrast, the compiler guarantees that references will never be dangling: if we have a reference to some data, the compiler will ensure that the data will not go out of scope before the reference to the data does.

## The Rules of References

• At any given time, you can have either (but not both) of the following:

• One mutable reference

• Any number of immutable references

• References must always be valid.

## Slices

Another data type that doesn’t have ownership is the slice. Slices let you reference a contiguous sequence of elements in a collection rather than the whole collection.

## String Slice

A string slice is a reference to part of a String, and looks like this:

This is similar to taking a reference to the whole String, but with the extra [0..5] part. Rather than a reference to the entire String, it’s a reference to an internal position in the String and the number of elements that it refers to.

We create slices with a range of [start_index..end_index], but the slice data structure actually stores the starting position and the length of the slice.

So in the case of let world = &s[6..11];, world would be a slice that contains a pointer to the byte at index 6 of s and a length value of 5.

With Rust’s .. range syntax, if you want to start at the first index (0), you can drop the value before the two periods. In other words:

By the same token, if your slice includes the last byte of the String, you can drop the trailing number.

You can also drop both values to take a slice of the entire string.
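The three shorthand forms, each sketched next to its explicit equivalent:

```rust
fn slice_forms(s: &str) -> (&str, &str, &str) {
    let first = &s[..5];  // same as &s[0..5]
    let rest = &s[6..];   // same as &s[6..s.len()]
    let whole = &s[..];   // same as &s[0..s.len()]
    (first, rest, whole)
}

fn main() {
    let s = String::from("hello world");
    let (hello, world, all) = slice_forms(&s);
    println!("{} / {} / {}", hello, world, all);
}
```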

## String Literals Are Slices

The type of s here is &str: it’s a slice pointing to a specific point in the binary. This is also why string literals are immutable; &str is an immutable reference.

## String Slices as Parameters

Knowing that you can take slices of literals and Strings leads us to one more improvement on first_word, and that’s its signature:

If we have a string slice, we can pass that directly. If we have a String, we can pass a slice of the entire String. Defining a function to take a string slice instead of a reference to a String makes our API more general and useful without losing any functionality.
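A sketch of first_word with the more general signature, and both call styles:

```rust
// taking &str instead of &String lets callers pass slices and literals too
fn first_word(s: &str) -> &str {
    let bytes = s.as_bytes();
    for (i, &item) in bytes.iter().enumerate() {
        if item == b' ' {
            return &s[0..i];
        }
    }
    &s[..]
}

fn main() {
    let my_string = String::from("hello world");
    let word = first_word(&my_string[..]);   // a slice of a String
    let literal = first_word("hello world"); // a string literal works directly
    println!("{} {}", word, literal);
}
```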

## Other Slices

String slices, as you might imagine, are specific to strings. But there’s a more general slice type, too.

This slice has the type &[i32]. It works the same way as string slices do, by storing a reference to the first element and the length.

# Basic Info of Web Workers

The postMessage method accepts either a string or a JSON object.

When index.js calls postMessage(), worker.js handles the message via the message event; the payload from index.js is available on e.data.

Messages passed between index.js and worker.js are copied rather than shared (serialized, then deserialized).

Example
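A minimal sketch of the two files (browser-only; the file name worker.js is assumed):

```javascript
// index.js (main thread)
const worker = new Worker('worker.js');

worker.addEventListener('message', e => {
  // whatever the worker sent back arrives on e.data
  console.log('main thread received:', e.data);
});

worker.postMessage({ cmd: 'start', msg: 'hi worker' });

// worker.js
self.addEventListener('message', e => {
  // e.data is a copy of the object the main thread sent
  self.postMessage('echo: ' + e.data.msg);
});
```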

# Features Available to Workers

• The navigator object

• The location object

• XMLHttpRequest

• setTimeout() / clearTimeout() / setInterval() / clearInterval()

• The application cache

• Importing external scripts with the importScripts() method

• Spawning other workers

Workers cannot use:

• The DOM

• The window object

• The document object

• The parent object

# Loading External Scripts

A worker can load external script files or libraries with the importScripts() function, which takes zero or more strings naming the resources to import.

# Spawning Subworkers

• Subworkers must be hosted within the same origin as the parent page.

• URIs inside a subworker are resolved relative to the parent worker’s location.

# Deep in Viewport

vw (viewport width) and vh (viewport height) are length units that represent exactly 1% of the size of any given viewport, regardless of its measurements. rem (short for root em) is similar in its functionality, although it deals specifically with font sizes, and derives its name from the value of the font size of the root element, which should default to 16 pixels in most browsers.

There are also a few more viewport units available for use, such as vmin and vmax which refer respectively to 1% of the viewport’s smaller dimension, and 1% of its larger dimension.

Interestingly enough, browsers actually count the entire browser window when it comes to width, meaning they factor the scrollbar into this dimension. Should you attempt to set the width of an element to a value of 100vw, it would force a horizontal scrollbar to appear, since you’d be slightly overstretching your viewport.

# Device Pixels and CSS Pixels

Device Pixels are the kind of pixels we intuitively assume to be ‘right’. These pixels give the formal resolution of whichever device you’re working on, and can be read out from screen.width/height.

If you give a certain element a width: 128px, your monitor is 1024px wide, and you maximize your browser window, the element would fit on your monitor eight times.

If the user zooms, however, this calculation is going to change. If the user zooms to 200%, your element with width: 128px will fit only four times on this 1024px wide monitor.

Zooming as implemented in modern browsers consists of nothing more than ‘stretching up’ pixels. That is, the width of the element is not changed from 128 to 256px; instead the actual pixels are doubled in size. Formally, the element still has a width of 128 CSS pixels, even though it happens to take up the space of 256 device pixels.

In other words, zooming to 200% makes one CSS pixel grow to four times the size of one device pixel (two times the width, two times the height).

A few images will clarify the concept. Here are four pixels on 100% zoom level. Here CSS px fully overlap with device px.

Let’s zoom out. The CSS pixels start to shrink, meaning that one device px now overlaps several CSS px.

If you zoom in, the opposite happens. The CSS px start to grow, and now one CSS px overlaps several device px.

The point is that you are only interested in CSS px. It’s those px that dictate how your style sheet is rendered.

Device px are almost entirely useless to you.

At zoom level 100%, one CSS px is exactly equal to one device px.

# Screen Size

screen.width and screen.height give the total width and height of the user’s screen. These dimensions are measured in device px because they never change: they’re a feature of the monitor, not of the browser.

# Window Size

window.innerWidth and window.innerHeight

Window Size is measured in CSS px.

# Scrolling Offset

window.pageXOffset and window.pageYOffset contain the horizontal and vertical scrolling offsets of the document. Thus you can find out how much the user has scrolled.

These properties are measured in CSS px.

# Viewport

The function of the viewport is to constrain the <html> element, which is the uppermost containing block of your site.

Suppose you have a liquid layout and one of your sidebars has width: 10%. Now the sidebar neatly grows and shrinks as you resize the browser window. How does that work?

Technically, what happens is that the sidebar gets 10% of the width of its parent, the <body> element. Normally all block-level elements take 100% of the width of their parent.

So your sidebar gets a width of 10% of the browser window.

In theory, the width of the <html> element is restricted by the width of the viewport. The <html> element takes 100% of the width of that viewport.

The viewport, in turn, is exactly equal to the browser window: it’s been defined as such. The viewport is not an HTML construct, so you cannot influence it by CSS. It just has the width and height of the browser window – on desktop. On mobile it’s quite a bit more complicated.

While width: 100% works fine at 100% zoom, if we zoom in, the viewport becomes smaller than the total width of the site and the content spills out of the <html> element. But that element has overflow: visible, which means that the spilled-out content will be shown in any case.

# Measuring the viewport

The viewport size can be found in document.documentElement.clientWidth and document.documentElement.clientHeight.

If you know your DOM, you know that document.documentElement is in fact the <html> element: the root element of any HTML document. However, the viewport is one level higher, so to speak: it’s the element that contains the <html> element. That matters if you give the <html> element a width.

So document.documentElement.clientWidth and document.documentElement.clientHeight always give the viewport dimensions, regardless of the dimensions of the <html> element.
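As a snippet (browser-only):

```javascript
// viewport dimensions in CSS px, regardless of the <html> element's own width
const viewportWidth = document.documentElement.clientWidth;
const viewportHeight = document.documentElement.clientHeight;
console.log(viewportWidth, viewportHeight);
```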

# Measuring the element

document.documentElement.offsetWidth and document.documentElement.offsetHeight give the dimensions of the <html> element.

## Event coordinates

Then there are the event coordinates. When a mouse event occurs, no less than five property pairs are exposed to give you information about the exact place of the event. For our discussion, three of them are important:

• pageX/pageY gives the coordinates relative to the <html> element in CSS px.

• clientX/Y gives the coordinates relative to the viewport in CSS px.

• screenX/Y gives the coordinates relative to the screen in DEVICE px.

You’ll use pageX/Y 90% of the time. Usually you want to know the event position relative to the document.

The other 10% of the time you’ll use clientX/Y.

You never, ever need to know the event coordinates relative to the screen.

### Media queries

There are two relevant media queries: width/height and device-width/device-height.

• width/height uses the same values as document.documentElement.clientWidth/clientHeight, namely the viewport, and works in CSS px.

• device-width/device-height uses the same values as screen.width/height with device px.

# The problem of mobile browser

Let’s go back to our sidebar with width: 10%. If mobile browsers did exactly the same as desktop browsers, they’d make the element about 40px wide (if the device width is 400px), and that’s too narrow. Your liquid layout would look horribly squashed.

# Two viewports

The viewport is too narrow to serve as a basis for your CSS layout. The obvious solution is to make the viewport wider. That however requires it to be split into two: the visual viewport and the layout viewport.

A simple explanation at StackOverFlow:

Imagine the layout viewport as being a large image which does not change size or shape. Now imagine you have a smaller frame through which you look at the large image. The small frame is surrounded by opaque material which obscures your view of all but a portion of the large image. The portion of the large image that you can see through the frame is the visual viewport. You can back away from the large image while holding your frame (zoom out) to see the entire image at once, or you can move closer (zoom in) to see only a portion. You can also change the orientation of the frame, but the size and shape of the large image (layout viewport) never changes.

The visual viewport is the part of the page that’s currently shown on-screen.

The user may scroll to change the part of the page he sees, or zoom to change the size of the visual viewport.

However, the CSS layout, especially percentage widths, is calculated relative to the layout viewport, which is considerably wider than the visual viewport.

Thus the <html> element takes the width of the layout viewport initially, and your CSS is interpreted as if the screen were significantly wider than the phone screen. This makes sure that your site’s layout behaves as it does in a desktop browser.

How wide is the layout viewport? That differs per browser.

• Safari uses 980px

• Opera uses 850px

• Android 800px

• IE 974px

# Understanding the layout viewport

In order to understand the size of the layout viewport we have to take a look at what happens when the page is fully zoomed out. Many mobile browsers initially show any page in fully zoomed-out mode.

The point is: browsers have chosen the dimensions of their layout viewport such that it completely covers the screen in fully zoomed-out mode (when it is equal to the visual viewport).

Thus the width and the height of the layout viewport are equal to whatever can be shown on the screen in the maximally zoomed-out mode. When the user zooms in these dimensions stay the same.

The layout viewport width is always the same. If you rotate your phone, the visual viewport changes, but the browser adapts to the new orientation by zooming in slightly, so that the layout viewport is again as wide as the visual viewport.

This has consequences for the layout viewport’s height, which is now substantially less than in portrait mode. But web developers don’t care about the height, only about the width.

# Measuring the layout viewport

document.documentElement.clientWidth and document.documentElement.clientHeight contain the layout viewport’s dimensions.

The orientation matters for the height, but not for the width.

# Measuring the visual viewport

As to the visual viewport, it is measured by window.innerWidth/innerHeight. Obviously the measurements change when the user zooms out or in, since more or fewer CSS px fit into the screen.

# The Screen

As on desktop, screen.width/height gives the screen size, in device pixels. As on the desktop, you never need this information as a web developer.

# html element

Just as on desktop, document.documentElement.offsetWidth/offsetHeight gives the total size of the <html> element in CSS px.
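The measurements above can be gathered in one place. This helper is not from the article; it simply collects the four property pairs discussed so far (pass it the browser globals, e.g. `viewportMetrics(document, window)`):

```javascript
// Collects the viewport measurements discussed above
function viewportMetrics(doc, win) {
  return {
    // layout viewport, in CSS px
    layout: {
      width: doc.documentElement.clientWidth,
      height: doc.documentElement.clientHeight,
    },
    // visual viewport, in CSS px
    visual: { width: win.innerWidth, height: win.innerHeight },
    // screen, in device px
    screen: { width: win.screen.width, height: win.screen.height },
    // the <html> element itself, in CSS px
    page: {
      width: doc.documentElement.offsetWidth,
      height: doc.documentElement.offsetHeight,
    },
  };
}
```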

# Meta Viewport

It is meant to resize the layout viewport.

Suppose you build a simple page and give your elements no width. Now they stretch to take up 100% of the width of the layout viewport. Most browsers zoom out to show the entire layout viewport on the screen, giving an effect like this:

All users will immediately zoom in, which works, but most browsers keep the width of the elements intact, which makes the text hard to read.

Now what you can try is setting html {width: 320px}. That shrinks the <html> element, but the layout viewport itself stays just as wide, so it doesn’t solve the problem. What the meta viewport tag does instead is resize the layout viewport itself.

When you set <meta name="viewport" content="width=320">, you set the width of the layout viewport to 320px.

Of course, in practice we use width=device-width, which sets the layout viewport to the width that is ideal for the device.

# Advantages of Progressive Web Apps:

• Reliable - Loads instantly and never shows the dinosaur.

• Fast - Responds quickly to user interactions, with silky smooth animations.

• Engaging - Feels like a natural app on the device, with an immersive user experience.

# What is a Progressive Web App

• Progressive - Works for every user, regardless of browser choice because it’s built with progressive enhancement as a core tenet.

• Responsive - Fits any form factor: desktop, mobile, tablet, or whatever is next.

• Connectivity independent - Enhanced with service workers to work offline or on low-quality networks.

• App-like - Feels like an app, because the app shell model separates the application functionality from the application content.

• Fresh - Always up-to-date thanks to the service worker update process.

• Safe - Served via HTTPS to prevent snooping and to ensure content hasn’t been tampered with.

• Discoverable - Is identifiable as an ‘application’ thanks to W3C manifest and service worker registration scope, allowing search engines to find it.

• Re-engageable - Makes re-engagement easy through features like push notifications.

• Installable - Allows users to add apps they find most useful to their home screen without the hassle of an app store.

• Linkable - Easily share the application via URL, doesn’t require complex installation.

## What is App Shell

The app’s shell is the minimal HTML, CSS, JavaScript that is required to power the user interface of a progressive web app and is one of the components that ensures reliably good performance. Its first load should be extremely quick and immediately cached.

‘Cached’ means that the shell files are loaded once over the network and then saved to the local device. Every subsequent time the user opens the app, the shell files are loaded from the local device’s cache, which results in blazing-fast startup times.

The app shell architecture separates the core application infrastructure and UI from the data. All of the UI and infrastructure is cached locally using a service worker so that, on subsequent loads, the PWA only needs to retrieve the necessary data instead of having to load everything.

A service worker is a script that your browser runs in the background, separate from a web page, opening the door to features that don’t need a web page or user interaction.

The app shell is similar to the bundle of code that you’d publish to an app store when building a native app. It is the core components necessary to get your app off the ground, but likely doesn’t contain the data.

Using the app shell architecture allows you to focus on speed, giving the PWA similar properties to native apps:

## Implement App Shell

### Create the HTML for the App Shell

The Components consist of:

• Header with a title, and an add/refresh button

• Container for forecast cards

• A forecast card template

• A dialog for adding new cities

Notice the loader is visible by default. This ensures that the user sees the loader immediately as the page loads, giving them a clear indication that the content is loading.

#### Differentiating the first run

User preferences, like the list of cities a user has subscribed to, should be stored locally using IndexedDB or another fast storage mechanism. To simplify this code, here we use localStorage, which is not ideal for production apps because it is a blocking, synchronous storage mechanism that is potentially very slow on some devices.

Next, let’s add the startup code to check if the user has any saved cities and render those

### Use service workers to pre-cache the App Shell

PWAs have to be fast and installable, which means that they work online, offline, and on intermittent, slow connections.

To achieve this, we need to cache our app shell using a service worker, so that it’s always available quickly and reliably.

Features provided via service workers should be considered a progressive enhancement, and added only if supported by the browser.

#### Register the service worker if it’s available

The first step to making the app work offline is to register a service worker, a script that allows background functionality without the need of an open web page or user interaction.

This takes two simple steps:

• Tell the browser to register the JavaScript file as the service worker

• Create a JavaScript file containing the service worker

First, we need to check if the browser supports service worker, and if it does, register the service worker. Add the following code to app.js
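A minimal sketch of that feature-detected registration follows; `registerServiceWorker` is a hypothetical wrapper around the real `navigator.serviceWorker.register()` call, and the file path assumes the `service-worker.js` created later in this guide:

```javascript
// Progressive enhancement: only register when the browser supports it
function registerServiceWorker(nav) {
  if (!nav || !('serviceWorker' in nav)) {
    return null; // no service worker support - do nothing
  }
  // the registration scope defaults to the directory the file lives in
  return nav.serviceWorker.register('/service-worker.js');
}

if (typeof navigator !== 'undefined') {
  registerServiceWorker(navigator);
}
```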

#### Cache the site assets

When the service worker is registered, an install event is triggered the first time the user visits the page.

In this event handler, we will cache all the assets that are needed for the application.

When the service worker is fired, it should open the caches object and populate it with the assets necessary to load the App Shell. Create a file called service-worker.js in your application root folder. This file must live in the application root because the scope for service worker is defined by the directory in which the file resides. Add this code to your new service-worker.js file.

First, we need to open the cache with caches.open() and provide a cache name. Providing a cache name allows us to version files, or separate data from the app shell, so that we can easily update one without affecting the other.

Once the cache is open, we can then call cache.addAll(), which takes a list of URLs, then fetches them from the server and adds the responses to the cache. Unfortunately, cache.addAll() is atomic: if any of the files fail, the entire cache step fails.
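A sketch of that install handler. `precacheAppShell` is a hypothetical helper, and the cache name and file list are assumptions for illustration:

```javascript
const cacheName = 'app-shell-v1';
const filesToCache = ['/', '/index.html', '/scripts/app.js', '/styles/inline.css'];

// Open (or create) the named cache, then fetch and store every URL.
// cache.addAll() is atomic - if one URL fails, the whole step fails.
function precacheAppShell(cacheStorage, name, urls) {
  return cacheStorage.open(name).then((cache) => cache.addAll(urls));
}

// wiring inside service-worker.js (self/caches only exist in a worker):
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', (event) => {
    event.waitUntil(precacheAppShell(caches, cacheName, filesToCache));
  });
}
```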

DevTools can debug service workers. Before reloading your page, open DevTools and go to the Service Worker pane on the Application panel.

When you see a blank page like this, it means that the currently open page doesn’t have any registered service workers.

Now reload your page. The Service Worker pane should look like this.

When you see information like this, it means the page has a service worker running.

Let’s add some logic on the activate event listener to update the cache.

This code ensures that your service worker updates its cache whenever any of the app shell files change. In order for this to work, you’d need to increment the cacheName variable at the top of your service worker file.
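A sketch of that cache-updating logic. `removeOldCaches` is a hypothetical helper that deletes every cache whose name differs from the current `cacheName`, so that bumping the name retires the old shell:

```javascript
// Delete every cache except the one we want to keep
function removeOldCaches(cacheStorage, keepName) {
  return cacheStorage.keys().then((names) =>
    Promise.all(
      names.filter((name) => name !== keepName)
           .map((name) => cacheStorage.delete(name))
    )
  );
}

// wiring inside service-worker.js (sketch; cacheName as before):
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('activate', (event) => {
    event.waitUntil(removeOldCaches(caches, 'app-shell-v1'));
  });
  self.clients.claim(); // activate the updated service worker faster
}
```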

When the app is complete, self.clients.claim() fixes a corner case in which the app wasn’t returning the latest data. You can reproduce the corner case by commenting out the line below and then doing the following steps:

• load app for first time so that the initial City data is shown

• press the refresh button on the app

• go offline

You expect to see the newer data, but you actually see the initial data. This happens because the service worker is not yet activated. self.clients.claim() essentially lets you activate the service worker faster.

Finally, let’s update the list of files required for the app shell. In the array, we need to include all of the files our app needs, including images, js, css, etc.

#### Serve the app shell from the cache

Service workers provide the ability to intercept requests made from our PWA and handle them within the service worker. That means we can determine how we want to handle the request and potentially serve our own cached response.

Stepping from the inside out, caches.match() evaluates the web request that triggered the fetch event and checks to see if it’s available in the cache. It then either responds with the cached version, or uses fetch to get a copy from the network. The response is passed back to the web page with e.respondWith().
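The cache-first lookup can be sketched like this; `cacheFirst` is a hypothetical helper separating the decision (cached or network) from the event wiring:

```javascript
// Answer from the cache when possible, otherwise fall back to the network
function cacheFirst(cacheStorage, fetchFn, request) {
  return cacheStorage.match(request).then((cached) => cached || fetchFn(request));
}

// wiring inside service-worker.js (self/caches only exist in a worker):
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('fetch', (event) => {
    event.respondWith(cacheFirst(caches, fetch, event.request));
  });
}
```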

### Beware of the edge cases

This code must not be used in production because of the many unhandled edge cases:

• Cache depends on updating the cache key for every change

• Browser cache may prevent the service worker cache from updating

• Beware of cache-first strategies in production

# With Babel

Jest will automatically define NODE_ENV as test. It will not use the development section like Babel does by default when no NODE_ENV is set.

babel-jest is automatically installed when installing Jest and will automatically transform files if a babel configuration exists in your project. To avoid this, you can explicitly reset the transform configuration option:

In package.json
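The reset is an empty transform object (a minimal sketch):

```json
{
  "jest": {
    "transform": {}
  }
}
```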

## Globals

In your test files, Jest puts each of its methods and objects into the global environment. You don’t need to require or import anything to use them.

### Methods

• afterAll(fn): Runs a function after all the tests in this file have completed. If the function returns a promise, Jest waits for that promise to resolve before continuing. This is often useful if you want to clean up some global setup state that is shared across tests.

If afterAll is inside a describe block, it runs at the end of the describe block.

• afterEach(fn): Runs a function after each test in this file. If the function returns a promise, Jest waits for the promise to resolve before continuing. This is often useful if you want to clean up some temporary state that is created by each test.

If afterEach is inside a describe block, it only runs after the tests that are inside this describe block.

• beforeAll(fn): similar to afterAll()

• beforeEach(fn): similar to afterEach()

• describe(name, fn): creates a block that groups together several related tests in one ‘test suite’. For example, if you have a myBeverage object that is supposed to be delicious but not sour, you could test it with:

This isn’t required - you can just write the test block directly at the top level.

• describe.only(name, fn): You can use describe.only if you want to run only one describe block
• describe.skip(name, fn): you can use describe.skip if you do not want to run a particular describe block:
• require.requireActual(moduleName): returns the actual module instead of a mock, bypassing all checks on whether the module should receive a mock implementation or not.

• require.requireMock(moduleName): returns a mock module instead of the actual module, bypassing all checks on whether the module should be required normally or not.

• test(name, fn): Also available under the alias it(name, fn). The test method, which runs a test, is all you need in a test file. For example, let’s say there’s a function inchesOfRain() that should return zero. Your whole test could be:

The first argument is the test name; the second argument is a function that contains the expectations to test

If a promise is returned from test, Jest will wait for the promise to resolve before letting the test complete.

Even though the call to test will return right away, the test doesn’t complete until the promise resolves as well.

• test.only(name, fn): similar to describe.only()

• test.skip(name, fn): similar to describe.skip()

# Webpack Code Splitting - Async

Currently a ‘function-like’ import() module loading syntax proposal is on the way into ECMAScript.

The proposal defines import() as a method to load ES modules dynamically at runtime.

Webpack treats import() as a split point and puts the requested module in a separate chunk. import() takes the module name as an argument and returns a Promise: import(name) => Promise

Note that a fully dynamic statement, such as import(foo), will fail because webpack requires at least some file location information: foo could potentially be any path to any file in your system or project. The import() must contain at least some information about where the module is located, so bundling can be limited to a specific directory or set of files.
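The Promise shape can be seen in plain Node as well. To keep this sketch runnable outside webpack, it loads Node's built-in 'path' module; in an app this would be one of your own files, and webpack would emit it as a separate chunk:

```javascript
// import() takes a module specifier and returns a Promise for the
// module namespace object
import('path').then((ns) => {
  console.log(typeof ns.join); // 'function'
});
```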

# Chunk Name

Since webpack 2.4.0, chunk names for dynamic imports can be specified using a “magic comment”

Since webpack 2.6.0, the placeholders [index] and [request] are supported:

# import mode

Since webpack 2.6.0, different modes for resolving dynamic imports can be specified:

• lazy: The default behavior. Lazy generates a chunk per request. So everything is lazy loaded.

• lazy-once: Only available for imports with an expression. Generates a single chunk for all possible requests, so the first request causes a network request for all modules, and all following requests are already fulfilled.

• eager: Eager generates no chunk. All files are included in the current chunk. No network request is required to load the files. It still returns a Promise, but it’s already resolved.

You can combine both options (webpackChunkName and webpackMode); the comment is parsed as a JSON5 object without curly brackets:
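A sketch of the combined comment. Node ignores the comment entirely, so this also runs outside webpack ('path' again stands in for an application module):

```javascript
// both magic comments in one place, parsed as JSON5 without curly brackets
const chunk = import(
  /* webpackChunkName: "my-chunk", webpackMode: "lazy" */ 'path'
);
chunk.then((ns) => console.log('loaded:', typeof ns.default));
```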

# Usage with Babel

If you want to use import() with Babel, you’ll need to install the syntax-dynamic-import plugin while the proposal is still Stage 3, to get around the parser error.

The dynamic import syntax is not enabled by default in Babel’s ES2015 preset.

Not using the syntax-dynamic-import plugin will fail the build with

or

# Usage with Babel and async / await

To use ES7 async/await with import():

# import() imports the entire module namespace

Note that the promise is resolved with the module namespace. Consider the following two examples:

Component in both of the cases resolves to the same thing, meaning that when using import() with ES2015 modules you have to explicitly access default and named exports:
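A sketch of the explicit access with async/await; 'path' stands in for a module that has both a default and named exports:

```javascript
// the Promise resolves with the module namespace, so default and named
// exports must be accessed explicitly
async function loadModule() {
  const ns = await import('path');
  const viaDefault = ns.default; // the default export
  const named = ns.join;         // a named export
  return { viaDefault, named };
}
```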

# Mongodb Simple Guide

• yarn add mongoose

• import mongoose from 'mongoose'

• const db = mongoose.connect(MONGODB_URI)

# Schema

A Schema defines the skeleton of the minimal data unit.

Primitive Types: String, Number, Boolean, null, Array, Document, Date

# Model

A Model is an instance of a Schema; it has the ability to operate on the db.

• test1: Name of Collection in the DB

# Entity

An Entity is an instance of a Model; it has the ability to operate on the db.

# Find

If fields is omitted, or null, the returned docs will contain all attributes.

# findOne

Same as find, except it returns only the first matched document.

# findById

Same as findOne, but it only finds by _id.

# Create

• Model.create

• Entity.save

# Remove

• $lt: less than

• $lte: less than or equal to

• $gt: greater than

• $gte: greater than or equal to

• $ne: not equal to

• $in: belongs to (in the given array)

• $or: or

• $exists: exists

• $all: contains all

# Limit

# Skip

Skips the first n docs; if there are fewer than n docs in total, it outputs nothing.

# Sort

-1: descending, 1: ascending

# ObjectId

The default id _id could be of any type in mongodb, and defaults to ObjectId. An ObjectId is a 12-byte BSON value:

• 4 bytes: UNIX timestamp

• 3 bytes: identifies the machine where mongodb is running

• 2 bytes: identifies the process this _id was generated in

• 3 bytes: random number

# Schema add Attribute

# Schema add instance method

# Schema add static method

```javascript
import mongoose from 'mongoose'

const db = mongoose.connect(MONGODB_URI)

const TestSchema = new mongoose.Schema({
  name: { type: String },
  age: { type: Number },
})

// use a regular function (not an arrow) so that `this` refers to the Model
TestSchema.static('findByName', function (name, cb) {
  return this.find({ name: name }, cb)
})

const TestModel = db.model('test', TestSchema)

TestModel.findByName('tim', (err, docs) => {
  // …
})
```

# TypeScript With Node

# Configuring TypeScript Compilation

TypeScript uses the file tsconfig.json to adjust project compile options:

• "module": "commonjs" - The output module type (in your .js files). Node uses commonjs.

• "target": "es6" - The output language level. Node supports ES6.

• "noImplicitAny": true - Enables a stricter setting which throws errors when something has a default any value.

• "moduleResolution": "node" - TypeScript attempts to mimic Node’s module resolution strategy.

• "sourceMap": true - Outputs source maps next to the compiled .js files.

• "outDir": "dist" - Location to output .js files after compilation.

• "baseUrl": "." - Part of configuring module resolution.

• "paths": {…} - Part of configuring module resolution.

The rest of the file defines the TypeScript project context. The project context is basically a set of options that determine which files are compiled when the compiler is invoked with a specific tsconfig.json. include takes an array of glob patterns of files to include in the compilation.

# Type Definition (.d.ts) Files

TypeScript uses .d.ts files to provide types for JavaScript libraries that were not written in TypeScript. This is great because once you have a .d.ts file, TypeScript can type check that library and provide you better help in your editor. The TypeScript community actively shares all the most up-to-date .d.ts files for popular libraries on a GitHub repository called DefinitelyTyped.

Because "noImplicitAny": true is set, we are required to have a .d.ts file for every library used. You could set noImplicitAny to false to silence errors about missing .d.ts files, but it’s a best practice to have a .d.ts file for every library (even if the .d.ts file is basically empty).

# Installing .d.ts files from DefinitelyTyped

For the most part, you’ll find .d.ts files for the libraries you are using on DefinitelyTyped. These .d.ts files can be easily installed into your project by using the npm scope @types. For example, if we want the .d.ts file for jQuery, we can do so with npm install --save-dev @types/jquery.

Once .d.ts files have been installed using npm, you should see them in your node_modules/@types folder. The compiler will always look in this folder for .d.ts files when resolving JavaScript libraries.

# What if a library isn’t on DefinitelyTyped?

Setting up TypeScript to look for .d.ts files in another folder: the compiler knows to look in node_modules/@types by default, but to help the compiler find our own .d.ts files we have to configure path mapping in our tsconfig.json. Path mapping can get pretty confusing, but the basic idea is that the TypeScript compiler will look in specific places, in a specific order, when resolving modules, and we have the ability to tell the compiler exactly how to do it.

In the tsconfig.json for this project you’ll see path mapping that tells the TypeScript compiler that, in addition to looking in node_modules/@types for every import (*), it should also look in our own .d.ts file location, <baseUrl> + src/types/*. First the compiler will look for a .d.ts file in node_modules/@types, and then in src/types.

# Summary of .d.ts management

In general, if you stick to the following steps you should have minimal .d.ts issues:

• After installing any npm package as a dependency or dev dependency, immediately try to install the .d.ts file via @types.

• If the library has a .d.ts file on DefinitelyTyped, the install will succeed and you are done. If the install fails because the package doesn’t exist, generate the .d.ts file yourself.

# Source Map

With the sourceMap option enabled in tsconfig.json, next to every .js file that the TypeScript compiler outputs there will be a .js.map file as well. This .js.map file provides the information necessary to map back to the source .ts file while debugging.

# Using Debugger in VS Code

When debugging in VS Code, it looks for a top-level .vscode folder with a launch.json file. In this file, you can tell VS Code exactly what you want to do. This is mostly identical to the “Node.js: Launch Program” template, with a couple of minor changes:

• "program": "${workspaceRoot}/dist/server.js" - Modified to point to our entry point in dist.

• "smartStep": true - Won’t step into code that doesn’t have a source map.

• "outFiles": […] - Specifies where output files are dropped. Used with source maps.

• "protocol": "inspector" - Uses the new Node debug protocol because we’re on the latest Node.
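A launch.json pulling these options together might look like this (a sketch; the outFiles glob is an assumption):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch Program",
      "program": "${workspaceRoot}/dist/server.js",
      "smartStep": true,
      "outFiles": ["${workspaceRoot}/dist/**/*.js"],
      "protocol": "inspector"
    }
  ]
}
```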

# File Drag & Drop

Dragging and dropping files from your desktop to a browser is one of the ultimate goals for web application integration. It consists of:

• enable file dragging and dropping onto a web page element

• analyze dropped files in JavaScript

• load and parse files on the client

• asynchronously upload files to the server using XMLHttpRequest2

• show a graphical progress bar while the upload occurs

• use progressive enhancement to ensure your file upload form works in any browser

# The File API

• FileList: represents an array of selected files

• File: represents an individual file

• FileReader: an interface which allows us to read file data on the client and use it within JavaScript

# JavaScript Events

• dragstart

• drag

• dragend

• dragenter

• dragover

• dragleave

• drop

## dataTransfer

• dropEffect: copy | move | link | none

• effectAllowed: copy | move | link | copyLink | copyMove | linkMove | none | all (default)

• files

• types

• setDragImage(imgElement, x, y): set a custom icon shown while dragging

• setData(format, data)

• getData(format)

• clearData()

# Notice

By default, the browser will refuse all drag actions (and a file dragged from the desktop into the browser will simply be opened by it), so e.preventDefault() should be added in the dragover and drop event handlers.
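A sketch of a drop handler following that rule; `handleDrop` and the `dropzone` element id are hypothetical:

```javascript
// Call preventDefault() so the browser doesn't open the file,
// then read the dropped files from e.dataTransfer.files
function handleDrop(e) {
  e.preventDefault();
  return Array.from(e.dataTransfer.files).map((file) => file.name);
}

// wiring (browser only): dragover must also call preventDefault()
if (typeof document !== 'undefined') {
  const zone = document.getElementById('dropzone'); // hypothetical element
  if (zone) {
    zone.addEventListener('dragover', (e) => e.preventDefault());
    zone.addEventListener('drop', (e) => console.log(handleDrop(e)));
  }
}
```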

Dragging Text will automatically set e.dataTransfer.setData('text/plain', node.innerText)

Dragging File will add files to e.dataTransfer.files

# Simple Usage of flatMap

Original

Both map() and flatMap() take a function f as a parameter that controls how an input Array is translated to an output Array:

• With map(), each input Array element is translated to exactly one output element, aka, f returns a single value

• With flatMap(), each input Array element is translated to zero or more output elements, aka, f returns an Array of values.

A simple implementation of flatMap:
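One way such an implementation might look (a sketch, not the article's exact code):

```javascript
// mapFunc may return an Array of values or a single non-Array value,
// which is appended as-is
function flatMap(arr, mapFunc) {
  const result = [];
  for (const elem of arr) {
    const mapped = mapFunc(elem);
    if (Array.isArray(mapped)) {
      result.push(...mapped); // zero or more output elements
    } else {
      result.push(mapped);    // a single non-Array value
    }
  }
  return result;
}

console.log(flatMap([1, 2, 3], (x) => [x, x * 10])); // [1, 10, 2, 20, 3, 30]
```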

flatMap is simpler if mapFunc is only allowed to return Arrays, but we don’t impose this restriction here, because non-Array values are occasionally useful.

# Mapping to multiple values

The Array method map() maps each input Array element to one output element. But what if we want to map it to multiple output elements?

That becomes necessary in the following example: The React component TagList is invoked with two attributes

The attributes are:

• An Array of tags, each tag being a string

• A callback for handling clicks on tags

TagList is rendered as a series of links separated by commas:

Here each tag (except the first) provides two elements in the rendered Array.
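The idea can be sketched with plain strings instead of React elements; `renderLink` and `renderTagList` are hypothetical helpers:

```javascript
const renderLink = (tag) => `<a href="#">${tag}</a>`;

function renderTagList(tags) {
  // every tag except the first contributes two elements:
  // a ', ' separator and the link itself
  return tags.flatMap((tag, i) =>
    i === 0 ? [renderLink(tag)] : [', ', renderLink(tag)]
  );
}
```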

# Arbitrary Iterables

flatMap can be generalized to work with arbitrary iterables.

The flatMapIter function works with any iterable, not just Arrays:
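A sketch of a generator-based version (names chosen to match the text):

```javascript
// Accepts any iterable and yields lazily;
// mapFunc must return an iterable of output values
function* flatMapIter(iterable, mapFunc) {
  for (const x of iterable) {
    yield* mapFunc(x);
  }
}

console.log([...flatMapIter('ab', (c) => [c, c.toUpperCase()])]); // ['a', 'A', 'b', 'B']
```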

# Implementing flatMap via reduce

You can use the Array method reduce to implement a simple version of flatMap
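A sketch of the reduce-based version:

```javascript
// concat() flattens Array return values one level and appends
// non-Array values as-is
function flatMap(arr, mapFunc) {
  return arr.reduce((acc, elem) => acc.concat(mapFunc(elem)), []);
}
```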

# Related to flatMap: flatten

flatten is an operation that concatenates all the elements of an Array

It can be implemented as follows:
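One possible implementation (a sketch):

```javascript
// Spreading arr into concat() appends each element; nested Arrays
// are therefore flattened by exactly one level
function flatten(arr) {
  return [].concat(...arr);
}

console.log(flatten([[1], [2, 3], 4])); // [1, 2, 3, 4]
```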

So the following expressions are equivalent: flatten(arr.map(func)) and flatMap(arr, func).

# New Babel Preset - Env

babel-preset-env is a new preset which lets you specify an environment and automatically enables the necessary plugins.

At the moment, several presets let you determine what features Babel should support:

• babel-preset-es2015, babel-preset-es2016, etc: incrementally support various versions of ECMAScript. babel-preset-es2015 transpiles what’s new in ES6 to ES5, babel-preset-es2016 transpiles what’s new in ES7 to ES6.

• babel-preset-latest: supports all features that are either part of an ECMAScript version or at stage 4.

The problem with these presets is that they often do too much. For example, most modern browsers support ES6 generators. Yet if you use babel-preset-es2015, generator functions will always be transpiled to complex ES5 code.

babel-preset-env works like babel-preset-latest, but it lets you specify an environment and only transpiles features that are missing in that environment.

Note that you need to install and enable plugins and/or presets for experimental features (that are not part of babel-preset-latest) yourself.

On the plus side, you don’t need es2015 presets anymore.

# Browsers

For browsers you have the option to specify either:

• Browsers via browserslist query syntax

• Support the last two versions of browsers and IE 7+

• Support browsers that have more than 5% market share

• Fixed versions of browsers:
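For example, a .babelrc targeting the last two browser versions plus IE 7+ might look like this (a sketch; a fixed version would instead be written as e.g. "chrome": 56 under targets):

```json
{
  "presets": [
    ["env", {
      "targets": {
        "browsers": ["last 2 versions", "ie >= 7"]
      }
    }]
  ]
}
```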

# Node.js

If you compile your code for Node.js on the fly via Babel, babel-preset-env is especially useful, because it reacts to the currently running version of Node.js if you set the target node to current:
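A sketch of that configuration in .babelrc:

```json
{
  "presets": [
    ["env", {
      "targets": {
        "node": "current"
      }
    }]
  ]
}
```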

# Additional Options for babel-preset-env

## modules (string, default: ‘commonjs’)

This option lets you configure which module format ES6 modules are transpiled to:

• Transpile to popular module formats: ‘amd’, ‘commonjs’, ‘systemjs’, ‘umd’

• Don’t transpile: false

## include, exclude (Array of strings, default [])

• include: always enables certain plugins

• exclude: prevents certain plugins from being enabled

## useBuiltIns (boolean, default: false)

Babel comes with a polyfill for new functionality in the standard library. babel-preset-env can optionally import only those parts of the polyfill that are needed on the specified platforms:

There are two ways of using the polyfill:

• core-js: polyfills ES5, ES6+ as needed

  • install polyfill: yarn add core-js

  • activate polyfill: import 'core-js'

• babel-polyfill: polyfills core-js and the regenerator runtime (to emulate generators on ES5)

  • install polyfill: yarn add babel-polyfill

  • activate polyfill: import 'babel-polyfill'

Either of the two import statements is transpiled to an environment-specific sequence of more fine-grained imports:
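For illustration, a sketch of what the transpiled output might contain; the actual module list depends on the target environment, and these names are only examples:

```javascript
import "core-js/modules/es7.string.pad-start";
import "core-js/modules/es7.string.pad-end";
// … plus "regenerator-runtime/runtime" when generators need emulation
```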

## debug (boolean, default: false)

Logs the following information via console.log()

• Targeted environments

• Enabled transformers

• Enabled plugins

• Enabled polyfills

# Babel-Polyfill or Babel-Runtime

The babel-polyfill and babel-runtime modules are used to serve the same function in two different ways. Both modules ultimately serve to emulate an ES6 environment.

Both babel-polyfill and babel-runtime emulate an ES6 environment with two things:

• a slew of polyfills as provided by core-js

• complete generator runtime

babel-polyfill accomplishes this task by assigning methods on the global or on native type prototypes, which means that once it is required, as far as the JavaScript runtime you’re using is concerned, ES6 methods and objects simply exist. If you were to require babel-polyfill in a script run under node v0.10 – a runtime which does not natively support the Promise API – your script would then have access to the Promise object. As far as you are concerned, you’re suddenly using an environment that supports the Promise object.

babel-runtime does something very similar, but in a way that does not pollute native object prototypes or the global namespace. Instead, babel-runtime is a module that you can list as a dependency of your application like any other module, which polyfills ES6 methods. In other words and continuing the example from above, while you may not have the Promise object available to you, you now have the same functionality available to you from require('babel-runtime/core-js/promise'). By itself, this is useful but inconvenient. Fortunately, babel-runtime is not intended to be used by itself. Rather, babel-runtime is intended to be paired with the transform – babel-plugin-transform-runtime – which will automatically rewrite your code such that you can write your code using the Promise API and it will be transformed to use the Promise-like object exported by babel-runtime

babel-polyfill offers you the convenience of globally defined objects without having to transform your code further. However, as with anything that mutates a global, this can introduce collisions between versions, etc.

babel-runtime, on the other hand, will not suffer from collisions, as everything is namespaced. Since the module will be defined in your package.json, it can be versioned like everything else. The tradeoff, however, is that a transform can only do so much. The runtime remaps methods according to a definitions map. Anecdotally, this has covered each of my use cases, but there may be an obscure method or two which is not remapped. There are also certain cases where your intent is ambiguous; in such cases, the transform won’t know exactly what to do.

# Conclusion

To summarize, with the general case for Babel 6, there are two main steps you’ll need to perform:

• Provide your code with an emulated ES6 environment by either requiring babel-polyfill or requiring the babel-runtime module plus the babel-plugin-transform-runtime transform:

# Deep in Runtime-Transform

This plugin is recommended for libraries and tools.

Note: Instance methods such as 'foobar'.includes('foo') will not work, since that would require modification of existing built-ins (use babel-polyfill for that).

Babel uses very small helpers for common functions such as _extends. By default these will be added to every file that requires them. This duplication is often unnecessary, especially when your application is spread out over multiple files.
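For illustration, the `_extends` helper behaves roughly like this (a simplified sketch, not Babel’s exact generated source):

```js
// Simplified sketch of Babel's _extends helper: copy own enumerable
// properties from each source object onto the target, left to right.
function _extends(target) {
  for (var i = 1; i < arguments.length; i++) {
    var source = arguments[i];
    for (var key in source) {
      if (Object.prototype.hasOwnProperty.call(source, key)) {
        target[key] = source[key];
      }
    }
  }
  return target;
}

console.log(_extends({}, { a: 1 }, { b: 2 })); // { a: 1, b: 2 }
```

Without transform-runtime, a copy of this function is emitted into every compiled file that needs it.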

This is where the transform-runtime plugin comes in: all of the helpers will reference the module babel-runtime to avoid duplication across your compiled output. The runtime will be compiled into your build.

Another purpose of this transformer is to create a sandboxed environment for your code. If you use babel-polyfill and the built-ins it provides such as Promise, Set and Map, those will pollute the global scope. While this might be OK for an app or a command line tool, it becomes a problem if your code is a library which you intend to publish for others to use, or if you can’t exactly control the environment in which your code will run.

The transformer will alias these built-ins to core-js so you can use them seamlessly without having to require the polyfill.

# Prod and Dev

In most cases, you should install babel-plugin-transform-runtime as a development dependency, and babel-runtime as a production dependency.

# Usage

I prefer to use .babelrc

The available options are described below.

• helpers: boolean, defaults to true

Toggles whether or not inlined Babel helpers (classCallCheck, extends, etc.) are replaced with calls to moduleName.

• polyfill: boolean, defaults to true

Toggles whether or not new built-ins (Promise, Set, Map, etc.) are transformed to use a non-global-polluting polyfill.

• regenerator: boolean, defaults to true

Toggles whether or not generator functions are transformed to use a regenerator runtime that does not pollute the global scope.

• moduleName: string, defaults to babel-runtime

Set the name/path of the module used when importing helpers.

Example:
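A .babelrc using the options above might look like this (the values shown are the defaults, spelled out for illustration):

```json
{
  "plugins": [
    ["transform-runtime", {
      "helpers": true,
      "polyfill": true,
      "regenerator": true,
      "moduleName": "babel-runtime"
    }]
  ]
}
```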

# Technical Details

The runtime transformer plugin does three things:

• Automatically requires babel-runtime/regenerator when you use generator/async functions;

• Automatically requires babel-runtime/core-js and maps ES6 static methods and built-ins;

• Removes the inline Babel helpers and uses the module babel-runtime/helpers instead.

You can use built-ins such as Promise, Set, Symbol, etc., as well as all the Babel features that require a polyfill, seamlessly and without global pollution, making it extremely suitable for libraries.

## Regenerator aliasing

Whenever you use a generator or async function, the following is generated:
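For instance, given `function* foo() {}`, the output is roughly the following (a sketch of Babel 6’s compiled output; note the reference to a global regeneratorRuntime, which must be provided separately, e.g. by babel-polyfill):

```js
"use strict";

var _marked = [foo].map(regeneratorRuntime.mark);

function foo() {
  return regeneratorRuntime.wrap(function foo$(_context) {
    while (1) switch (_context.prev = _context.next) {
      case 0:
      case "end":
        return _context.stop();
    }
  }, _marked[0], this);
}
```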

This isn’t ideal, because it requires you to include the regenerator runtime, which pollutes the global scope.

Instead, the runtime transformer compiles that to:
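Roughly the following (a sketch; the wrapped state-machine body is unchanged):

```js
"use strict";

// The regenerator runtime is now an explicit, namespaced dependency
// rather than a global:
var _regenerator = require("babel-runtime/regenerator");

var _marked = [foo].map(_regenerator.mark);

function foo() {
  return _regenerator.wrap(function foo$(_context) {
    // ...same state machine as before...
  }, _marked[0], this);
}
```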

This means that you can use the regenerator runtime without polluting your current environment.

The same aliasing happens for core-js (built-ins) and the Babel helpers.

# Faster React Functional Components

Original

A basic Avatar component:
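It might look something like this (a hypothetical component; the `url` prop name is assumed):

```js
import React from 'react';

// A minimal class-based Avatar: receives a url prop, renders an image.
class Avatar extends React.Component {
  render() {
    return <img src={this.props.url} />;
  }
}
```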

And its functional component style is:
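Something like (same assumed `url` prop):

```js
// The same component as a plain function: props in, element out.
const Avatar = ({ url }) => <img src={url} />;
```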

As you can see, it’s just a simple JS function returning an element.

React still does a lot of work for functional components that, by their nature, they will never use.

But we can skip React’s internals for these functional components.

They are just plain JavaScript functions, which means we can call them directly in the render function.

As we know, the traditional usage:
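That is, something like this (Avatar and avatarUrl are assumed names):

```js
<Avatar url={avatarUrl} />
```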

will be compiled into
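JSX elements compile to React.createElement calls, so the line above becomes:

```js
React.createElement(Avatar, { url: avatarUrl });
```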

It incurs the full lifecycle cost of a React component.

But by calling the plain JavaScript function directly, all of that overhead can be eliminated.
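A sketch of the direct-call style inside a parent’s render (names assumed):

```js
// Instead of rendering <Avatar url={avatarUrl} />, call the function
// directly: React receives the <img> element Avatar returns, and no
// Avatar component instance or lifecycle ever exists.
render() {
  return <div>{Avatar({ url: avatarUrl })}</div>;
}
```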

By the way, `transform-react-inline-elements` does the same thing as a Babel transform, so there’s no need to change the source code.