Start in Drizzle

Drizzle is a collection of front-end libraries that make writing dapp front-ends easier and more predictable. The core of Drizzle is based on a Redux store, so you have access to the spectacular development tools around Redux. We take care of synchronizing your contract data, transaction data and more. Things stay fast because you declare what to keep in sync.

  • Fully reactive contract data, including state, events and transactions

  • Declarative, so you’re not wasting valuable cycles on unneeded data.

  • Maintains access to underlying functionality. Web3 and your contract’s methods are still there, untouched.

Installation

yarn add drizzle

If you’re using React, you can use drizzle-react and (optionally) its companion drizzle-react-components.
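For reference, here is a minimal sketch (not from the Drizzle docs) of wiring Drizzle into a React app with drizzle-react’s DrizzleProvider, assuming an options object like the one described below:

// Hypothetical wiring sketch using drizzle-react's DrizzleProvider
import React from 'react'
import ReactDOM from 'react-dom'
import { DrizzleProvider } from 'drizzle-react'
import App from './App'
import options from './drizzleOptions' // the options object described below

ReactDOM.render(
  <DrizzleProvider options={options}>
    <App />
  </DrizzleProvider>,
  document.getElementById('root')
)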

Drizzle uses web3 1.0 and WebSockets; be sure your development environment can support these.

  1. Import the provider
import { Drizzle, generateStore } from 'drizzle'
  2. Create an options object and pass in the desired contract artifacts for Drizzle to instantiate. Other options are available; see the Options section below.
// import contract artifacts
import SimpleStorage from './../build/contracts/SimpleStorage.json'
import TutorialToken from './../build/contracts/TutorialToken.json'

const options = {
  contracts: [
    SimpleStorage
  ]
}

const drizzleStore = generateStore(options)
const drizzle = new Drizzle(options, drizzleStore)

Contract Interaction

Drizzle provides helpful methods on top of the default web3 Contract methods to keep your calls and transactions in sync with the store.

cacheCall()

Gets contract data. Calling the cacheCall() function on a contract will execute the desired call and return a corresponding key so the data can be retrieved from the store.

When a new block is received, Drizzle will refresh the store automatically _if_ any transactions in the block touched our contract.

Note: We have to check that Drizzle is initialized before fetching data. A simple if statement such as the one below is fine for displaying a few pieces of data, but a better approach for larger dapps is to use a loading component. drizzle-react-components ships with one as well.

// Assuming we're observing the store for changes
var state = drizzle.store.getState()

// If Drizzle is initialized (and therefore web3, accounts and contracts), continue
if (state.drizzleStatus.initialized) {
  // Declare this call to be cached and synchronized. We'll receive the store key for recall
  const dataKey = drizzle.contracts.SimpleStorage.methods.storedData.cacheCall()

  // Use the dataKey to display data from the store
  return state.contracts.SimpleStorage.methods.storedData[dataKey].value
}

// If Drizzle isn't initialized, display some loading indication
return 'loading'

The Contract instance has all of its standard web3 properties and methods. For example, you could still call as normal if you don’t want something in the store:

drizzle.contracts.SimpleStorage.methods.storedData().call() // different from methods.storedData.cacheCall()

cacheSend()

Sends a contract transaction. Calling the cacheSend() function on a contract will send the desired transaction and return a corresponding hash so the status can be retrieved from the store. The last argument can optionally be an options object with the typical from, gas and gasPrice keys. Drizzle will update the transaction’s state in the store (pending, success, error) and store the transaction receipt.

Note: We have to check that Drizzle is initialized before fetching data. A simple if statement such as the one below is fine for displaying a few pieces of data, but a better approach for larger dapps is to use a loading component.

// Assuming we're observing the store for changes
var state = drizzle.store.getState()

// If Drizzle is initialized (and therefore web3, accounts and contracts), continue
if (state.drizzleStatus.initialized) {
  // Declare this transaction to be observed. We'll receive the stackId for reference.
  const stackId = drizzle.contracts.SimpleStorage.methods.set.cacheSend(2, { from: '0x3f...' })

  // Use the stackId to display the transaction status
  if (state.transactionStack[stackId]) {
    const txHash = state.transactionStack[stackId]

    return state.transactions[txHash].status
  }
}

// If Drizzle isn't initialized, display some loading indication.
return 'loading'

The contract instance has all of its standard web3 properties and methods.

drizzle.contracts.SimpleStorage.methods.set(2).send({from: '0x3f...'})

Options

{
  contracts,
  events: {
    contractName: [
      eventName
    ]
  },
  web3: {
    fallback: {
      type,
      url
    }
  }
}
  • contracts: Array, required. An array of contract artifact files.

  • events: Object. An object of contract names, each containing an array of strings naming the events we’d like to listen for and sync with the store.

  • web3: Object. Options regarding web3 instantiation.

  • fallback: Object. An object consisting of the type and url of a fallback web3 provider, used if no injected provider, such as MetaMask or Mist, is detected.

    • type: String. The type of the web3 fallback; currently ws is the only possibility.

    • url: String. The full WebSocket URL, for example ws://127.0.0.1:8546.
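Put together, a filled-in options object might look like the sketch below (the event name is illustrative):

// Illustrative options object; the StorageSet event name is hypothetical
const options = {
  contracts: [SimpleStorage, TutorialToken],
  events: {
    SimpleStorage: ['StorageSet'] // sync these events with the store
  },
  web3: {
    fallback: {
      type: 'ws',
      url: 'ws://127.0.0.1:8546'
    }
  }
}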

How data stays fresh

  1. Once initialized, Drizzle instantiates web3 and our desired contracts, then observes the chain by subscribing to new block headers.

  2. Drizzle keeps track of contract calls so it knows what to synchronize.

  3. When a new block header comes in, Drizzle checks that the block isn’t pending, then goes through its transactions looking to see if any of them touched our contracts.

  4. If they did, we replay the calls already in the store to refresh any potentially altered data. If they didn’t, we continue with the store data.

Constant, View, and Pure in Solidity

Summary

  • pure for functions: Disallows modification or access of state (not yet enforced).

  • view for functions: Disallows modification of state (not yet enforced).

  • payable for functions: Allows them to receive Ether together with a call.

  • constant for state variables: Disallows assignment (except initialization); does not occupy a storage slot.

  • anonymous for events: Does not store the event signature as a topic (indexable).

  • indexed for event parameters: Stores the parameter as a topic (indexable).

Question

Q: Solidity 0.4.16 introduced the view and constant function modifiers. The documentation says:

constant for functions: Same as view

Does this mean view is just an alias for constant?

Answer: This is discussed here

  1. The keyword view is introduced for functions (it replaces constant). Calling a view function cannot alter the behavior of future interactions with any contract. This means such functions cannot use SSTORE, cannot send or receive ether, and can only call other view or pure functions.

  2. The keyword pure is introduced for functions; they are view functions with the additional restriction that their value depends only on the function arguments (pure functions). This means they cannot use SSTORE or SLOAD, cannot send or receive ether, cannot use msg or block, and can only call other pure functions.

  3. The keyword constant is invalid on functions.

  4. The keyword constant on any variable means it cannot be modified (and could be placed into memory or bytecode by the optimiser)

Writing Upgradable Contracts in Solidity

Original

Ethereum contracts are immutable – once deployed to the blockchain they cannot be updated, yet the need to change their logic over time is unavoidable.

During a contract upgrade the following factors need to be considered:

  • Block gas limit (4,712,388 for Homestead)

    Upgrade transactions tend to be large due to the amount of processing they have to complete, e.g. deploying a contract, moving data, moving references.

  • Inter-contract dependencies

    When a contract is compiled, all of its imports are compiled into the contract, leading to a ripple effect when you want to swap out a contract that is referenced by other contracts.

These two are related, as having more dependencies affects the size of your deployed contracts and the overall transaction size of the upgrade. The implementation patterns below work to minimise upgrade gas costs and to loosen the coupling of contracts without breaking Solidity type safety.

Note that for the sake of simplifying the examples, we have omitted the implementation of security and permissioning.

Avoiding large data copy operations

Storing data is expensive (an SSTORE operation costs 5,000 or 20,000 gas), and upgrading contracts containing large storage variables runs the chance of hitting the transaction gas limit during the copying of their data.

You may therefore want to isolate your datastore from the rest of your code, and make it as flexible as possible, so that it is unlikely to need to be upgraded.

Depending on your circumstances, how large a datastore you need and whether you expect its structure to change often, you may choose a strict definition or a loosely typed flat store. Below is an example of the latter, which implements support for storing sha3 key and value pairs. It is the more flexible and extensible option, and it ensures data schema changes can be implemented without requiring upgrades to the storage contract.

contract EternalStorage {
    mapping(bytes32 => uint) UIntStorage;

    function getUIntValue(bytes32 record) constant returns (uint) {
        return UIntStorage[record];
    }

    function setUIntValue(bytes32 record, uint value) {
        UIntStorage[record] = value;
    }

    mapping(bytes32 => string) StringStorage;

    function getStringValue(bytes32 record) constant returns (string) {
        return StringStorage[record];
    }

    function setStringValue(bytes32 record, string value) {
        StringStorage[record] = value;
    }

    mapping(bytes32 => address) AddressStorage;

    function getAddressValue(bytes32 record) constant returns (address) {
        return AddressStorage[record];
    }

    function setAddressValue(bytes32 record, address value) {
        AddressStorage[record] = value;
    }

    mapping(bytes32 => bytes) BytesStorage;

    function getBytesValue(bytes32 record) constant returns (bytes) {
        return BytesStorage[record];
    }

    function setBytesValue(bytes32 record, bytes value) {
        BytesStorage[record] = value;
    }

    mapping(bytes32 => bool) BooleanStorage;

    function getBooleanValue(bytes32 record) constant returns (bool) {
        return BooleanStorage[record];
    }

    function setBooleanValue(bytes32 record, bool value) {
        BooleanStorage[record] = value;
    }

    mapping(bytes32 => int) IntStorage;

    function getIntValue(bytes32 record) constant returns (int) {
        return IntStorage[record];
    }

    function setIntValue(bytes32 record, int value) {
        IntStorage[record] = value;
    }
}

For upgrades you can then just point the upgraded contract at the existing EternalStorage contract instance without having to copy any of its data.

Use Libraries to Encapsulate Logic

Libraries are a special form of contract: they are singletons and are not allowed any storage variables.

The advantage of libraries in the context of upgrades is that they allow business logic or data-management logic to be encapsulated in a singleton instance, so an upgrade means redeploying one contract rather than many.

The example below shows a library used for adding a Proposal to storage.

import "EternalStorage.sol";

library ProposalsLibrary {
    function getProposalCount(address _storageContract) constant returns (uint256) {
        return EternalStorage(_storageContract).getUIntValue(sha3("proposalCount"));
    }

    function addProposal(address _storageContract, bytes32 _name) {
        var idx = getProposalCount(_storageContract);
        // assumes EternalStorage also exposes bytes32 getters/setters analogous to the others
        EternalStorage(_storageContract).setBytes32Value(sha3("proposal_name", idx), _name);
        EternalStorage(_storageContract).setUIntValue(sha3("proposal_eth", idx), 0);
        EternalStorage(_storageContract).setUIntValue(sha3("proposalCount"), idx + 1);
    }
}

Under the covers, library functions are called using delegatecall from the calling contract, which has the advantage of passing msg.sender and msg.value through seamlessly. You can therefore write your library code as if it were just part of your contract, without having to worry about the sender or value changing.

The example below shows a sample Organization contract using ProposalsLibrary to interact with data storage.

import "ProposalsLibrary.sol";

contract Organization {
    using ProposalsLibrary for address;
    address public eternalStorage;

    function Organization(address _eternalStorage) {
        eternalStorage = _eternalStorage;
    }

    function addProposal(bytes32 _name) {
        eternalStorage.addProposal(_name);
    }
}

With libraries there is a slight gas overhead on each call, but they make deploying a new contract version much cheaper.

Use ‘interface’ to decouple inter-contract communication

Abstract a contract’s implementation behind an interface that only defines its function signatures.

This is a well known pattern in object oriented programming.

import "ITokenLedger.sol";

contract Organization {
    ITokenLedger public tokenLedger;

    function Organization(address _tokenLedger) {
        tokenLedger = ITokenLedger(_tokenLedger);
    }

    function generateTokens(uint256 _amount) {
        tokenLedger.generateTokens(_amount);
    }
}

Here, instead of importing the entire TokenLedger.sol contract, we use an interface containing just its function signatures. Any upgrade to TokenLedger that doesn’t affect its interface can then be implemented without redeploying the Organization contract.

Dockerizing a React App

Original

Project Setup

Install create-react-app

npm install -g create-react-app@1.5.2

Creating a new app

create-react-app docker-app
cd docker-app

Docker

Add a Dockerfile to the project root

# base image
FROM node:9.6.1

# set working directory
RUN mkdir /usr/src/app
WORKDIR /usr/src/app

# add `/usr/src/app/node_modules/.bin` to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH

# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install
RUN npm install react-scripts@1.1.1 -g

# start app
CMD ["npm", "start"]

Add a .dockerignore to speed up the Docker build process, as our local dependencies will not be sent to the Docker daemon:

node_modules

Build and tag the Docker Image

docker build -t docker-app:1.0.0 .

Then spin up the container once the image is built.

docker run -it -v ${PWD}:/usr/src/app -v /usr/src/app/node_modules -p 3000:3000 --rm docker-app:1.0.0

Now you can visit your app at http://localhost:3000.

Docker Compose

Add a docker-compose.yml to the project root.

version: '3.3'

services:
  docker-app:
    container_name: docker-app
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - '3000:3000'
    environment:
      - NODE_ENV=development

Take note of the volumes. Without the anonymous volume /usr/src/app/node_modules, the node_modules directory installed when the image was built would be hidden by the host directory mounted at runtime.

Build the image and fire up the container:

docker-compose up -d --build

Ensure the app is running in the browser.

Bring down the container before moving on

docker-compose stop

Production

Let’s create a separate Dockerfile for use in production called Dockerfile-prod.

# build environment
FROM node:9.6.1 as builder
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
COPY package.json /usr/src/app/package.json
RUN npm install
RUN npm install react-scripts@1.1.1 -g
COPY . /usr/src/app
RUN npm run build

# production environment
FROM nginx:1.13.9-alpine
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Using the production Dockerfile, build and tag the Docker Image:

docker build -f Dockerfile-prod -t docker-app-prod .

Spin up the container

docker run -it -p 80:80 --rm docker-app-prod

Add a production Docker Compose file as docker-compose-prod.yml:

version: "3.3"

services:
  docker-app-prod:
    container_name: docker-app-prod
    build:
      context: .
      dockerfile: Dockerfile-prod
    ports:
      - "80:80"

Fire up the container

docker-compose -f docker-compose-prod.yml up -d --build

Deploy RoR With Mina

Mina Setup

Let’s take a look at setting up Mina with Puma. First, you’ll need to add mina and mina-puma to your Gemfile.

Then install the gems and run the initial Mina command to generate a config/deploy.rb:

bundle
mina init

A detailed explanation of the Mina deploy file

# Set the domain or IP address of the remote server.
set :domain, 'yourdomain'

# Set the folder on the remote server where Mina will deploy your app.
set :deploy_to, 'path/to/directory'

# Set a link to the repository. The git protocol is preferred.
set :repository, 'git@...'

# Set the name of the branch you plan to deploy; defaults to master.
set :branch, 'master'

# Fill in the names of the files and directories that will be symlinked to the shared directory.
# All folders will be created automatically when you run mina setup.
# Don't forget to add a path to the uploads folder if you are using Dragonfly or Carrierwave.
# Otherwise, you will lose your uploads on each deploy.
set :shared_dirs, fetch(:shared_dirs, []).push('log', 'tmp/pids', 'tmp/sockets', 'public/uploads')
set :shared_files, fetch(:shared_files, []).push('config/database.yml', 'config/secrets.yml', 'config/puma.rb')

# SSH username for access to the remote server.
set :user, 'root'

# This is not a required field; you can use it to set an app name for easy recognition.
set :application_name, 'MyApp'

# Set the Ruby version. If you have RVM installed globally, you'll also need to set an RVM path,
# like set :rvm_use_path, '/usr/local/rvm/scripts/rvm'.
# You can find the RVM location with the `rvm info` command.
task :environment do
  invoke :'rvm:use', 'ruby-2.5.1@default'
end

By default, Mina will create all folders mentioned in shared_dirs and shared_files.

Your deploy section in deploy.rb should look like this:

task :deploy do
  deploy do
    comment "Deploying #{fetch(:application_name)} to #{fetch(:domain)}:#{fetch(:deploy_to)}"
    invoke :'git:clone'
    invoke :'deploy:link_shared_paths'
    invoke :'rvm:load_env_vars'
    invoke :'bundle:install'
    invoke :'rails:db_migrate'
    command %{#{fetch(:rails)} db:seed}
    invoke :'rails:assets_precompile'
    invoke :'deploy:cleanup'

    on :launch do
      invoke :'puma:phased_restart'
    end
  end
end

Puma Setup

Create or fill in a puma.rb file in the config folder:

environment "production"

bind "unix:///{path_to_your_app}/shared/tmp/sockets/puma.sock"
pidfile "/{path_to_your_app}/shared/tmp/pids/puma.pid"
state_path "/{path_to_your_app}/shared/tmp/sockets/puma.state"
directory "/{path_to_your_app}/current"

workers 2
threads 1, 2

daemonize true

activate_control_app 'unix:///{path_to_your_app}/shared/tmp/sockets/pumactl.sock'

prune_bundler

Fill in database.yml and secrets.yml.

Set up nginx

Create a myapp.conf file in nginx’s /etc/nginx/conf.d folder with content similar to the following.

upstream mysite {
    server unix:///home/admin/mysite/shared/tmp/sockets/puma.sock fail_timeout=0;
}

server {
    listen 80;
    listen [::]:80;

    server_name mysite.com;
    root /home/admin/mysite/current/public;

    location ~ ^/assets/ {
        expires max;
        gzip_static on;
        gzip_vary on;
        add_header Cache-Control public;
        break;
    }

    location / {
        proxy_pass http://mysite;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~ ^/(500|404|422).html {
        root /home/admin/mysite/current/public;
    }

    error_page 500 502 503 504 /500.html;
    error_page 404 /404.html;
    error_page 422 /422.html;

    client_max_body_size 4G;
    keepalive_timeout 10;
}

What Blockchain Came With

Ethereum + IPFS

Gas Used by Public and External Function in Solidity

Original

A simple example demonstrating this effect looks like this:

pragma solidity ^0.4.19;

contract Test {
    function test(uint[20] a) public returns (uint) {
        return a[10] * 2;
    }

    function test2(uint[20] a) external returns (uint) {
        return a[10] * 2;
    }
}

Calling each function, the public function uses 496 gas, while the external function uses 261 gas.

The difference is that in public functions, Solidity immediately copies array arguments to memory, while external functions can read directly from calldata. Memory allocation is expensive, whereas reading from calldata is cheap.

The reason that public functions need to write all the arguments to memory is that public functions may be called internally, which is actually an entirely different process than external calls.

Internal calls are executed via jumps in the code, and array arguments are passed internally by pointers to memory. Thus, when the compiler generates the code for an internal function, that function expects its arguments to be located in memory.

For external functions, the compiler doesn’t need to allow internal calls, and so it allows arguments to be read directly from calldata, saving the copying step.

As for best practices, you should use external if you expect that the function will only ever be called externally, and use public if you need to call the function internally. It almost never makes sense to use the this.f() pattern, as this requires a real CALL to be executed, which is expensive. Also, passing arrays via this method would be far more expensive than passing them internally.

Notes on Writing Smart Contracts

Original

Overflow and Underflow

Solidity can handle 256-bit numbers, up to 2**256 - 1, so adding 1 to (2**256 - 1) wraps around to 0.

Likewise, subtracting 1 from an unsigned 0 yields (2**256 - 1).

The test code is as follows:

pragma solidity 0.4.18;

contract OverflowUnderflow {
    uint public zero = 0;
    uint public max = 2**256 - 1;

    // zero will end up at 2 ** 256 - 1
    function underflow() public {
        zero -= 1;
    }

    // max will end up at 0
    function overflow() public {
        max += 1;
    }
}

Although both are equally dangerous, underflow tends to cause more damage in smart contracts.

For example, if account A holds X tokens and initiates a transfer of X + 1 tokens, and the code does not validate the amount, account A’s balance may underflow, leaving it with more tokens than before.

This can be solved by introducing the SafeMath library:

pragma solidity 0.4.18;

library SafeMath {
    function mul(uint256 a, uint256 b) internal pure returns (uint256) {
        if (a == 0) {
            return 0;
        }
        uint c = a * b;
        assert(c / a == b);
        return c;
    }

    function div(uint256 a, uint256 b) internal pure returns (uint256) {
        uint256 c = a / b;
        return c;
    }

    function sub(uint256 a, uint256 b) internal pure returns (uint256) {
        assert(b <= a);
        return a - b;
    }

    function add(uint256 a, uint256 b) internal pure returns (uint256) {
        uint256 c = a + b;
        assert(c >= a);
        return c;
    }
}

contract OverflowUnderflow {
    using SafeMath for uint;
    uint public zero = 0;
    uint public max = 2 ** 256 - 1;

    function underflow() public {
        zero = zero.sub(1);
    }

    function overflow() public {
        max = max.add(1);
    }
}

Visibility and Delegatecall

  • Public functions can be called from any address

  • External functions can only be called from outside the contract

  • Private functions can only be called from within the contract

  • Internal functions can be called from the contract and contracts derived from it

External functions consume less gas than public ones because they read their arguments directly from calldata, while public functions must copy all arguments to memory.

Delegatecall

Quoted from the Solidity docs:

Delegatecall is identical to a message call apart from the fact that the code at the target address is executed in the context of the calling contract and msg.sender and msg.value do not change their values.

This means that a contract can dynamically load code from a different address at runtime. Storage, current address and balance still refer to the calling contract, only the code is taken from the called address.

This feature can be used to build libraries and modular code. At the same time, it can also allow others to manipulate your contract’s state.

In the example below, an attacker calls the pwn method to take ownership of the contract.

pragma solidity 0.4.18;

contract Delegate {
    address public owner;

    function Delegate(address _owner) public {
        owner = _owner;
    }

    function pwn() public {
        owner = msg.sender;
    }
}

contract Delegation {
    address public owner;
    Delegate delegate;

    function Delegation(address _delegateAddress) public {
        delegate = Delegate(_delegateAddress);
        owner = msg.sender;
    }

    // An attacker can call Delegate.pwn() in the context of Delegation. This means that pwn()
    // will modify the state of Delegation and not Delegate; the result is that the attacker
    // takes unauthorized ownership of the contract.
    function () public {
        if (delegate.delegatecall(msg.data)) {
            this;
        }
    }
}

Reentrancy (TheDAO hack)

In Solidity, when the call function is invoked with a value parameter, it forwards all of the gas it has received.

In the following snippet, call is invoked before the sender’s balance is actually decreased. This vulnerability is what led to the TheDAO attack.

function withdraw(uint _amount) public {
    if (balances[msg.sender] >= _amount) {
        // the recipient's fallback function can re-enter withdraw() here,
        // before the balance below is decreased
        if (msg.sender.call.value(_amount)()) {
            _amount;
        }
        balances[msg.sender] -= _amount;
    }
}

An explanation quoted from Reddit:

In simple words, it’s like the bank teller doesn’t change your balance until she has given you all the money you requested. “Can I withdraw $500? Wait, before that, can I withdraw $500?”

And so on. The smart contracts as designed only check that you have $500 once, at the beginning, and allow themselves to be interrupted.

Start in Protobuf

Protocol Buffers are a language-neutral, platform-neutral, extensible way of serializing structured data for use in communications protocols, data storage, and more, originally designed at Google.

protobuf.js is a pure JavaScript implementation with TypeScript support for node.js and browsers. It’s easy to use, blazingly fast and works out of the box with .proto files.

Installation

Node.js

npm install protobufjs
var protobuf = require('protobufjs')

Browsers

<script src="../protobuf.js"></script>

Distributions

Where bundle size is a factor, there are additional stripped-down versions of the full library (~19kb gzipped) available that exclude certain functionality:

  • When working with JSON descriptors and/or reflection only, see the light library (~16kb gzipped), which excludes the parser. Its CommonJS entry point is:
var protobuf = require('protobufjs/light')

  • When working with statically generated code only, see the minimal library (~6.5kb gzipped), which also excludes reflection. Its CommonJS entry point is:

var protobuf = require('protobufjs/minimal')

Usage

Because JS is a dynamically typed language, protobuf.js introduces the concept of a valid message in order to provide the best possible performance.

Valid Message

A valid message is an object not missing any required fields and exclusively composed of JS types understood by the wire format writer.

There are two possible types of valid messages, and the encoder is able to work with both of these for convenience:

  • Message Instance (explicit instance of message classes with default values on their prototype) always (have to) satisfy the requirements of a valid message by design.

  • Plain JavaScript Objects that just so happen to be composed in a way satisfying the requirements of a valid message as well.

In a nutshell, the wire format writer understands the following types; for each, the expected JS type (for create and encode) and the conversion applied by fromObject are listed:

  • s-/u-/int32, s-/fixed32 — expects number (32 bit integer); fromObject converts with value | 0 if signed, value >>> 0 if unsigned

  • s-/u-/int64, s-/fixed64 — expects a Long-like (optimal) or number (53 bit integer); fromObject converts with Long.fromValue(value) with long.js, parseInt(value, 10) otherwise

  • float, double — expects number; fromObject converts with Number(value)

  • bool — expects boolean; fromObject converts with Boolean(value)

  • string — expects string; fromObject converts with String(value)

  • bytes — expects Uint8Array (optimal), Buffer (optimal under node) or Array.<number> (8 bit integers); fromObject converts with base64.decode(value) if a string; an object with a non-zero .length is assumed to be buffer-like

  • enum — expects number (32 bit integer); fromObject looks up the numeric id if a string

  • message — expects a valid message; fromObject converts with Message.fromObject(value)
  • Explicit undefined and null are considered as not set if the field is optional

  • Repeated fields are Array.<T>

  • Map fields are Object.<string, T> with the key being the string representation of the respective value or an 8 character long binary hash string for Long-likes.

  • Types marked as optimal provide the best performance because no conversion step (i.e. number to low and high bits, or base64 string to buffer) is required.
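To make the distinction concrete, here is a small sketch (AwesomeMessage stands in for any generated or reflected message class with a single string field):

// A message instance is a valid message by design
var fromCreate = AwesomeMessage.create({ awesomeField: 'hi' })

// A plain object composed of understood types also qualifies as valid
var plain = { awesomeField: 'hi' }

// A non-conforming object must go through the conversion steps above
var converted = AwesomeMessage.fromObject({ awesomeField: 42 }) // 42 -> String(42)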

For performance reasons, each message class provides a distinct set of methods, with each method doing just one thing. This avoids unnecessary assertions and redundant operations where performance is a concern, but it also forces a user to perform verification explicitly where necessary.

Methods

  • Message.verify(message: Object): null | string

  • Message.encode(message: Message | Object[, writer: Writer]): Writer

  • Message.encodeDelimited(message: Message | Object[, writer: Writer]): Writer

  • Message.decode(reader: Reader | Uint8Array): Message

  • Message.toObject(message: Message[, options: ConversionOptions]): Object

  • Message.verify(message: Object): null | string

    Verifies that a plain JavaScript object satisfies the requirements of a valid message and thus can be encoded without issues. Instead of throwing, it returns the error message as a string, if any:

    var payload = 'invalid (not an object)'
    var err = AwesomeMessage.verify(payload)
    if (err) throw Error(err)
  • Message.encode(message: Message | Object[, writer: Writer]): Writer

Encodes a message instance or valid plain JavaScript object. This method does not implicitly verify the message, and it’s up to the user to make sure that the payload is a valid message.

var buffer = AwesomeMessage.encode(message).finish()
  • Message.encodeDelimited(message: Message | Object[, writer: Writer]): Writer

Works like Message.encode but additionally prepends the length of the message as a varint.

  • Message.decode(reader: Reader): Message

Decodes a buffer to a message instance. If required fields are missing, it throws a util.ProtocolError with an instance property set to the so-far decoded message. If the wire format is invalid, it throws an Error.

try {
  var decodedMessage = AwesomeMessage.decode(buffer)
} catch (e) {
  if (e instanceof protobuf.util.ProtocolError) {
    // e.instance holds the so-far decoded message with missing required fields
  } else {
    // wire format is invalid
  }
}
  • Message.decodeDelimited(reader: Reader | Uint8Array): Message

Works like Message.decode but additionally reads the length of the message prepended as a varint.
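For instance, a minimal round trip through the delimited variants might look like this (AwesomeMessage is assumed to be defined as in the examples below):

// Prepend the length as a varint, then read it back
var delimited = AwesomeMessage.encodeDelimited({ awesomeField: 'hi' }).finish()
var decoded = AwesomeMessage.decodeDelimited(delimited)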

  • Message.create(properties: Object): Message

Creates a new message instance from a set of properties that satisfy the requirements of a valid message. Where applicable, it is recommended to prefer Message.create over Message.fromObject because it doesn’t perform possibly redundant conversion steps.

var message = AwesomeMessage.create({ awesomeField: 'AwesomeString' })
  • Message.fromObject(object: Object): Message

Converts any non-valid plain JavaScript object to a message instance using the conversion steps outlined in the table above.

  • Message.toObject(message: Message[, options: ConversionOptions]): Object

Converts a message instance to an arbitrary plain JavaScript object for interoperability with other libraries or storage. The resulting plain JavaScript object might still satisfy the requirements of a valid message depending on the actual conversion options specified, but most of the time it does not.

var object = AwesomeMessage.toObject(message, {
  enums: String,
  longs: String,
  bytes: String,
  defaults: true,
  arrays: true,
  objects: true,
  oneofs: true
})

For reference, the following diagram aims to display relationships between the different methods and the concepts of a valid message:

In other words: verify indicates that calling create or encode directly on the plain object will [result in a valid message respectively] succeed. fromObject, on the other hand, does conversion from a broader range of plain objects to create a valid message.

Examples

Using .proto files

It is possible to load existing .proto files using the full library, which parses and compiles the definitions to ready-to-use (reflection-based) message classes:

// awesome.proto
syntax = "proto3";
package awesomepackage;

message AwesomeMessage {
  string awesome_field = 1; // becomes awesomeField
}
protobuf.load('awesome.proto', function(err, root) {
  if (err) {
    throw err
  }

  // Obtain a message type
  var AwesomeMessage = root.lookupType('awesomepackage.AwesomeMessage')

  // Exemplary payload
  var payload = { awesomeField: 'AwesomeField' }

  // Verify the payload if necessary (i.e. when possibly incomplete or invalid)
  var errMsg = AwesomeMessage.verify(payload)
  if (errMsg) throw Error(errMsg)

  // Create a new message
  var message = AwesomeMessage.create(payload) // or use .fromObject if conversion is necessary

  // Encode a message to a Uint8Array (browser) or Buffer (node)
  var buffer = AwesomeMessage.encode(message).finish()

  // ... do something with buffer

  // Decode a Uint8Array (browser) or Buffer (node) to a message
  var decoded = AwesomeMessage.decode(buffer)

  // ... do something with decoded

  // If the application uses length-delimited buffers, there are also encodeDelimited and decodeDelimited.

  // Maybe convert the message back to a plain object
  var object = AwesomeMessage.toObject(decoded, {
    longs: String,
    enums: String,
    bytes: String
    // see Conversions
  })
})

Additionally, promise syntax can be used by omitting the callback, if preferred:

protobuf.load('awesome.proto').then(root => {})

Using JSON Descriptors

The library utilizes JSON descriptors that are equivalent to a .proto definition. For example, the following is identical to the .proto definition seen above:

// awesome.json
{
  "nested": {
    "AwesomeMessage": {
      "fields": {
        "awesomeField": {
          "type": "string",
          "id": 1
        }
      }
    }
  }
}

JSON descriptors closely resemble the internal reflection structure.

Exclusively using JSON descriptors instead of .proto files enables the use of just the light library (the parser isn’t required in this case).

A JSON descriptor can either be loaded the usual way:

protobuf.load('awesome.json', function(err, root) {
  if (err) throw Error(err)
})

Or it can be loaded inline:

var jsonDescriptor = require('./awesome.json') // exemplary for node
var root = protobuf.Root.fromJSON(jsonDescriptor)

// Continue at 'Obtain a message type' above

More Info here

Usage with TypeScript

The library ships with its own type definitions and modern editors will automatically detect and use them for code completion.

The npm package depends on @types/node because of Buffer and @types/long because of Long. If you are not building for node and/or not using long.js, it should be safe to exclude them manually.
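For instance, a hedged sketch of a tsconfig.json that opts out of automatic @types inclusion:

{
  "compilerOptions": {
    // an empty list prevents automatic inclusion of all @types packages,
    // including @types/node and @types/long
    "types": []
  }
}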

Using the JS API

The API shown above works pretty much the same with TypeScript. However, because everything is typed, accessing fields on instances of dynamically generated message classes requires either using bracket notation (i.e. message['awesomeField']) or explicit casts.

import { load } from 'protobufjs'

load('awesome.proto', function(err, root) {
  if (err) throw Error(err)

  const AwesomeMessage = root.lookupType('awesomepackage.AwesomeMessage')
  let message = AwesomeMessage.create({ awesomeField: 'hello' })
  console.log(`message = ${JSON.stringify(message)}`)

  let buffer = AwesomeMessage.encode(message).finish()
  console.log(`buffer = ${Array.prototype.toString.call(buffer)}`)

  let decoded = AwesomeMessage.decode(buffer)
  console.log(`decoded = ${JSON.stringify(decoded)}`)
})

Get the Most Out of the CommonsChunkPlugin

Original

Use webpack-bundle-analyzer to generate a fancy colorful image of all of your
bundles.

Case 1: Many vendor bundles with duplicate code

Each single-page app is using a new CommonsChunkPlugin that targets just that
entry point and its vendor code. This creates a bundle with only modules that
come from the node_modules folder, and another bundle with just application code.

The relevant configuration portion is shown below:

Object.keys(activeApps).map(
  app =>
    new webpack.optimize.CommonsChunkPlugin({
      name: `${app}_vendor`,
      chunks: [app],
      minChunks: isVendor,
    }),
)

The activeApps variable most likely represents each of the individual entry
points.

Below are a few areas that could use some improvement.

“Meta” caching

What we see above is many large libraries like momentjs, lodash, and jquery
being used across 6 or more vendor bundles. The strategy of adding all vendor
code into a separate bundle per app is good, but we should also apply the same
strategy across all vendor bundles.

new webpack.optimize.CommonsChunkPlugin({
  children: true,
  minChunks: 6,
})

We are telling webpack the following:

Hey webpack, look across all chunks (including the vendor ones that were
generated) and move any modules that occur in at least 6 chunks to a separate
file.

Case 2: duplicate vendors across async chunks

As you can see, the same 2-3 components are used across all 40-50 async bundles.

CommonsChunkPlugin can solve this.

Create an async Commons Chunk

The technique will be very similar to the first; however, we will need to set
the async property in the configuration options to true, as seen below:

new webpack.optimize.CommonsChunkPlugin({
  async: true,
  children: true,
  filename: 'commonlazy.js',
})

In the same way, webpack will scan all chunks and look for common modules.
Since async is true, only code-split bundles will be scanned.

Because we did not specify minChunks, the value defaults to 3. So what webpack
is being told is:

Hey webpack, look through all normal [aka lazy loaded] chunks, and if you find
the same module appearing across 3 or more chunks, then separate it out into
a separate async commons chunk.

More Control

minChunks function

There are scenarios where you don’t want a single shared bundle, because not
every lazy/entry chunk may use it. The minChunks property also takes a
function. This can be your ‘filtering predicate’ for which modules are added to
your newly created bundle.

new webpack.optimize.CommonsChunkPlugin({
  filename: 'lodash-moment-shared-bundle.js',
  minChunks: function(module, count) {
    return (
      module.resource && /lodash|moment/.test(module.resource) && count >= 3
    )
  },
})

This example says:

Yo webpack, when you come across a module whose absolute path matches lodash or
moment, and which occurs across 3 separate entries/chunks, then extract those
modules into a separate bundle.

Go Web

Original

Introduction

Go is a batteries-included programming language and has a web server already
built in.

The net/http package from the standard library contains all functionality
for the HTTP protocol.

This includes an HTTP client and an HTTP server.

Registering a Request Handler

First, create a handler which receives all incoming HTTP connections from
browsers, HTTP clients or API requests. A handler in Go is a function with this
signature:

func (w http.ResponseWriter, r *http.Request)

The function receives two parameters:

  • An http.ResponseWriter which is where you write your text/html response to.

  • An http.Request which contains all information about this HTTP request,
    including things like the URL or header fields

Registering a request handler to the default HTTP Server is as simple as this:

http.HandleFunc("/", func (w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "Hello, you've requested: %s\n", r.URL.Path)
})

Listen for HTTP Connections

The request handler alone can not accept any HTTP connections from the outside.

An HTTP server has to listen on a port to pass connections on to the request
handler. Because port 80 is in most cases the default port for HTTP traffic, this
server will also listen on it.

The following code will start Go’s default HTTP server and listen for
connections on port 80. You can navigate your browser to http://localhost/ and
see your server handling your request.

http.ListenAndServe(":80", nil)

The Code

package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Hello, you've requested: %s\n", r.URL.Path)
	})
	http.ListenAndServe(":80", nil)
}

Routing (using gorilla/mux)

Go’s net/http package provides a lot of functionality for the HTTP protocol.
One thing it doesn’t do very well is complex request routing, like segmenting a
request URL into single parameters.

Use the gorilla/mux package to create routes with named parameters, GET/POST
handlers and domain restrictions.

gorilla/mux is a package which adapts to Go’s default HTTP router. It comes
with a lot of features to increase productivity when writing web
applications. It is also compliant with Go’s default request handler signature
func (w http.ResponseWriter, r *http.Request), so the package can be mixed and
matched with other HTTP libraries like middleware or existing applications. Use
the go get command to install the package from GitHub:

go get -u github.com/gorilla/mux

Creating a new Router

First create a new request router. The router is the main router for your web
application and will later be passed as a parameter to the server. It will receive
all HTTP connections and pass them on to the request handlers you register on
it. You can create a new router like so:

r := mux.NewRouter()

Registering a Request Handler

Once you have a new router you can register request handlers as usual. The
only difference is that instead of calling http.HandleFunc(...), you call
HandleFunc on your router, like this: r.HandleFunc(...).

URL Parameters

The biggest strength of the gorilla/mux router is the ability to extract
segments from the request URL. As an example, this is a URL in your application:

/books/go-programming-blueprint/page/10

r.HandleFunc("/books/{title}/page/{page}", func(w http.ResponseWriter, r *http.Request) {
	// get the book
	// navigate to the page
})

The last thing is to get the data from these segments. The package comes with the
function mux.Vars(r), which takes the http.Request as a parameter and returns a
map of the segments.

func handler(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)
	title := vars["title"]
	page := vars["page"]
	// use title and page ...
	fmt.Fprintf(w, "%s, page %s\n", title, page)
}

Setting the HTTP server’s router

Ever wondered what the nil in http.ListenAndServe(":80", nil) meant? It is
the parameter for the main router of the HTTP server. By default it’s nil,
which means to use the default router of the net/http package. To make use of
your own router, replace the nil with the variable of your router, r.

http.ListenAndServe(":80", r)

The Code

package main

import (
	"fmt"
	"net/http"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/books/{title}/page/{page}", func(w http.ResponseWriter, r *http.Request) {
		vars := mux.Vars(r)
		title := vars["title"]
		page := vars["page"]

		fmt.Fprintf(w, "You've requested the book %s on page %s\n", title, page)
	})
	http.ListenAndServe(":80", r)
}

Features of the gorilla/mux Router

Methods

Restrict the request handler to specific HTTP methods

r.HandleFunc("/books/{title}", CreateBook).Methods("POST")
r.HandleFunc("/books/{title}", ReadBook).Methods("GET")
r.HandleFunc("/books/{title}", UpdateBook).Methods("PUT")
r.HandleFunc("/books/{title}", DeleteBook).Methods("DELETE")
Hostnames & Subdomains

Restrict the request handler to specific hostnames or subdomains

r.HandleFunc("/books/{title}", BookHandler).Host("www.mybookstore.com")
Schemes

Restrict the request handler to http/https

r.HandleFunc("/secure", SecureHandler).Schemes("https")
r.HandleFunc("/insecure", InsecureHandler).Schemes("http")
Path Prefixes & Subrouters

Restrict the request handler to specific path prefixes.

bookrouter := r.PathPrefix("/books").Subrouter()
bookrouter.HandleFunc("/", AllBooks)
bookrouter.HandleFunc("/{title}", GetBook)

Template

Go’s html/template package provides a rich templating language for HTML
templates. It is mostly used in web applications to display data in a structured
way in a client’s browser. One great benefit of Go’s templating language is the
automatic escaping of data. There is no need to worry about XSS attacks, as Go
parses the HTML template and escapes all inputs before displaying them to the
browser.

First Template

Generate Ethereum Keys and Wallet Address

Original

This article is a guide on how to generate an ECDSA private key and derive
its Ethereum address.

Use Openssl and keccak-256sum from a terminal.

SHA3 != Keccak. Ethereum uses the Keccak-256 algorithm, not the
standardized SHA-3.

Although Ethereum uses Keccak-256, note that it does not follow the
FIPS-202 based standard (aka SHA-3), which was finalized in August 2015.

Both web3.utils.sha3 and web3.sha3(string[, options]) use Keccak-256.

Generate the EC private key

First of all we use OpenSSL’s ecparam command to generate an elliptic curve
private key. The Ethereum standard is to use the secp256k1 curve; the same curve
is used by Bitcoin.

This command will print the private key in PEM format (using the wonderful ASN.1
key structure) on stdout.

> openssl ecparam -name secp256k1 -genkey -noout
-----BEGIN EC PRIVATE KEY-----
MHQCAQEEIFDLYO9KuwsC4ej2UsdA4SYk7s3lb8aZuW+B8rjugrMmoAcGBSuBBAAK
oUQDQgAEsNjwhFoLKLXGBxfpMv3ILhzg2FeySRlFhtjfi3s8YFZzJtmckVR3N/YL
JLnUV7w3orZUyAz77k0ebug0ILd1lQ==
-----END EC PRIVATE KEY-----

On its own this command is not very useful for us, but if you pipe it into the
ec command, it will display both the private and public parts in hexadecimal format,
and this is what we want.

> openssl ecparam -name secp256k1 -genkey -noout | openssl ec -text -noout
read EC key
Private-Key: (256 bit)
priv:
20:80:65:a2:47:ed:be:5d:f4:d8:6f:bd:c0:17:13:
03:f2:3a:76:96:1b:e9:f6:01:38:50:dd:2b:dc:75:
9b:bb
pub:
04:83:6b:35:a0:26:74:3e:82:3a:90:a0:ee:3b:91:
bf:61:5c:6a:75:7e:2b:60:b9:e1:dc:18:26:fd:0d:
d1:61:06:f7:bc:1e:81:79:f6:65:01:5f:43:c6:c8:
1f:39:06:2f:c2:08:6e:d8:49:62:5c:06:e0:46:97:
69:8b:21:85:5e
ASN1 OID: secp256k1

This command decodes the ASN.1 structure and derives the public key from the
private one.

Sometimes OpenSSL adds a null byte (0x00) in front of the private part; I
don’t know why it does that, but you have to trim any leading zero bytes in
order to use the key with Ethereum.

The private key must be 32 bytes and not begin with 0x00, and the public
one must be uncompressed and 64 bytes long, or 65 bytes with the constant (0x04)
prefix.

Derive the Ethereum address from the public key

The public key is what we need in order to derive its Ethereum address. Every EC
public key begins with the 0x04 prefix byte, which must be removed in order to hash
the key correctly.

This prefix represents the encoding of the public key:

  • 0x04 - both x and y of the elliptic curve point follows
  • 0x02, 0x03 - only x follows (y is either odd or even depending on the
    prefix)

Use any method you like to get it in the form of a hexadecimal string (without
line breaks, spaces or colons):

# Extract the public key and remove the EC prefix 0x04
> cat Key | grep pub -A 5 | tail -n +2 |
tr -d '\n[:space:]:' | sed 's/^04//' > pub
836b35a026743e823a90a0ee3b91bf615c6a757e2b60b9e1dc1826fd0dd16106f7bc1e8179f665015f43c6c81f39062fc2086ed849625c06e04697698b21855e

The pub file now contains the hexadecimal value of the public key without the
0x04 prefix.

An Ethereum address is made of 20 bytes (40 hex characters); it is commonly
represented with the 0x prefix. In order to derive it, take the
Keccak-256 hash of the hexadecimal form of the public key, then keep only the
last 20 bytes (i.e. drop the first 12 bytes).

Simply pass the file containing the public key in hexadecimal format to the
keccak-256sum command. Do not forget the -x option, so that it is
interpreted as hexadecimal and not as a simple string.

# Generate the hash and take the address part
> cat pub | keccak-256sum -x -l | tr -d ' -' | tail -c 41
0bed7abd61247635c1973eb38474a2516ed1d884

Which gives us the Ethereum address
0x0bed7abd61247635c1973eb38474a2516ed1d884.

CAUTION: if your final address looks like
0xdcc703c0E500B653Ca82273B7BFAd8045D85a470, this means you have hashed an
empty public key. Sending funds to this address will lock them forever.
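As a cross-check, the same derivation can be sketched in JavaScript; this
assumes the js-sha3 npm package (whose keccak256 is the original Keccak, not
FIPS-202 SHA-3):

// Sketch: derive the address from the public key gathered above
const { keccak256 } = require('js-sha3')

const pub = '836b35a026743e823a90a0ee3b91bf615c6a757e2b60b9e1dc1826fd0dd16106' +
            'f7bc1e8179f665015f43c6c81f39062fc2086ed849625c06e04697698b21855e'

// Hash the raw public key bytes (not the hex string) and keep the last 20 bytes
const hash = keccak256(Buffer.from(pub, 'hex'))
console.log('0x' + hash.slice(-40)) // 0x0bed7abd61247635c1973eb38474a2516ed1d884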

Caution on Int8Array

let t = crypto.getRandomValues(new Int8Array(3))
// Int8Array(3) [-15, -17, -90]

// t is not an Array
Array.isArray(t)
// false

// elements of an Int8Array must be of type Int8;
// map() returns an array of the same type as the original array,
// so the hex strings are coerced back to Int8 (NaN becomes 0)
let _t = []
t.map(i => {
  let _tmp = i.toString(16)
  _t.push(_tmp)
  return _tmp
})
// Int8Array(3) [0, -11, 0]
// _t: ['-f', '-11', '-5a']

// assigning a non-numeric value also coerces to Int8 (NaN becomes 0)
t[0] = 'a'
// t: Int8Array(3) [0, -17, -90]

Client Wallet Inspection

React-Loadable

react-loadable is a higher-order component for loading components with dynamic
imports.

Code-splitting is the process of taking one large bundle containing your entire
app and splitting it up into multiple smaller bundles which contain separate
parts of your app.

This might seem difficult to do, but tools like webpack have this built in, and
React Loadable is designed to make it super simple.

Route-based splitting vs. Component-based splitting

A common piece of advice you will see is to break your app into separate routes
and load each one asynchronously. This seems to work well enough for many apps;
as a user, clicking a link and waiting for a page to load is a familiar
experience on the web.

Namely, a route is simply a component.

But in fact there are more places than just routes where you can pretty easily
split apart your app: Modals, Tabs, and many more UI Components hide content
until the user has done something to reveal it.

Example: Maybe your app has a map buried inside of a tab component. Why would
you load a massive mapping library for the parent route every time, when the user
may never go to that tab?

React Loadable is a small library that makes component-centric code splitting
incredibly easy in React.

Loadable is a higher-order component(a function that returns a component)
which lets you dynamically load any module before rendering it into your app.

We can make it by dynamic import

import Bar from './components/Bar'

class Foo extends React.Component {
  render() {
    return <Bar />
  }
}

=>

class Foo extends React.Component {
  state = {
    Bar: null,
  }
  componentWillMount() {
    import('./components/Bar').then(Bar => this.setState({ Bar }))
  }
  render() {
    let { Bar } = this.state
    if (!Bar) {
      return <div>Loading...</div>
    } else {
      return <Bar />
    }
  }
}

But that’s a whole bunch of work, and it doesn’t even handle a bunch of cases.
What about when import() fails? What about server-side rendering?

react-loadable takes care of these unexpected cases.

import Loadable from 'react-loadable'

const LoadableBar = Loadable({
  loader: () => import('./components/Bar'),
  loading: () => <div>Loading</div>,
})

class Foo extends React.Component {
  render() {
    return <LoadableBar />
  }
}

When you use import() with webpack 2, it will automatically code-split for
you with no additional configuration.

Define your Loading Component

function Loading() {
  return <div>Loading</div>
}

When your loader fails, your Loading component will receive an error prop, which
will be true (otherwise it will be false).

function Loading(props) {
  if (props.error) {
    return <div>Error</div>
  }
  return <div>Loading...</div>
}

Sometimes components load really quickly (<200ms) and the loading screen just
flashes briefly on the screen.

Your loading component will get a pastDelay prop, which will only be true
once the component has taken longer to load than a set delay.

function Loading(props) {
  if (props.error) {
    return <div>Error</div>
  } else if (props.pastDelay) {
    // only show loading when it takes longer than 200ms
    return <div>Loading</div>
  } else {
    return null
  }
}

This delay defaults to 200ms, but you can customize the delay in Loadable.

Loadable({
  loader: () => import('./components/Bar'),
  loading: Loading,
  delay: 300,
})

Timing out when the loader is taking too long

Sometimes network connections suck and never resolve or fail; they just hang
there forever. This sucks for the user because they won’t know if it should
always take this long, or if they should try refreshing.

The Loading component will receive a timedOut prop, which will be set to true
when the loader has timed out.

function Loading(props) {
  if (props.error) {
    return <div>Error</div>
  } else if (props.timedOut) {
    return <div>Timed out</div>
  } else if (props.pastDelay) {
    return <div>Loading</div>
  } else {
    return null
  }
}

This feature is disabled by default; you can pass a timeout option to
Loadable to enable it.
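For example (mirroring the delay example above):

Loadable({
  loader: () => import('./components/Bar'),
  loading: Loading,
  timeout: 10000, // time out after 10 seconds
})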

By default Loadable will render the default export of the returned module.
If you want to customize this behavior you can use the render option.

Loadable({
  loader: () => import('./myComponent'),
  render(loaded, props) {
    let Component = loaded.namedExport
    return <Component {...props} />
  },
})

You can do whatever you want within loader() as long as it returns a promise
and you are able to render the result.

You can load multiple resources in parallel with Loadable.Map:

Loadable.Map({
  loader: {
    Bar: () => import('./Bar'),
    i18n: () => fetch('./i18n/bar.json').then(res => res.json()),
  },
  render(loaded, props) {
    let Bar = loaded.Bar.default
    let i18n = loaded.i18n
    return <Bar {...props} i18n={i18n} />
  },
})

As an optimization, you can also decide to preload a component before it gets
rendered.

const LoadableBar = Loadable({
  loader: () => import('./Bar'),
  loading: Loading,
})

class MyComponent extends React.Component {
  state = {
    showBar: false,
  }

  onClick = () => {
    this.setState({ showBar: true })
  }

  onMouseOver = () => {
    LoadableBar.preload()
  }

  render() {
    return (
      <div>
        <button onClick={this.onClick} onMouseOver={this.onMouseOver}>
          showbar
        </button>
        {this.state.showBar && <LoadableBar />}
      </div>
    )
  }
}

For server-side rendering, see the project on GitHub.

Difference Between Contract Calling in Web3

After we get an instance of a contract (testInstance), we can invoke its methods in three ways (sketched below):

  • testInstance.testFunc.sendTransaction()

    • Creates a transaction which is broadcast to the network; it uses gas and returns the txHash
  • testInstance.testFunc.call()

    • Calls the contract function locally in the VM; no broadcast, no gas used, and it returns the method’s response
  • testInstance.testFunc()

    • If testFunc is marked constant, meaning it won’t change state on chain, no transaction is executed (web3 invokes it via .call()). If testFunc is not constant, sendTransaction() is invoked.
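A sketch of the three styles, using web3 0.2x’s callback API (testInstance, testFunc and account are illustrative):

// Hypothetical contract instance and method, web3 0.2x callback style
testInstance.testFunc.sendTransaction({ from: account }, function (err, txHash) {
  // broadcast as a transaction: costs gas, yields the transaction hash
})

testInstance.testFunc.call(function (err, result) {
  // executed locally in the VM: no broadcast, no gas, yields the return value
})

testInstance.testFunc(function (err, result) {
  // dispatches to .call() if testFunc is constant, to .sendTransaction() otherwise
})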

JSON-RPC

Common Patterns in Contract

Withdrawal from Contract

The recommended method of sending funds after an effect is using the withdrawal pattern.

Although the most intuitive method of sending Ether, as a result of an effect, is a direct send call, this is not recommended as it introduces a potential security risk.

This is an example of the withdrawal pattern in practice in a contract where the goal is to send the most money to the contract in order to become the “richest”.

In the following contract, if you are usurped as the richest, you will receive the funds of the person who has gone on to be the new richest:

pragma solidity ^0.4.11;

contract WithdrawalContract {
    address public richest;
    uint public mostSent;

    mapping(address => uint) public pendingWithdrawals;

    function WithdrawalContract() payable {
        richest = msg.sender;
        mostSent = msg.value;
    }

    function becomeRichest() payable returns (bool) {
        if (msg.value > mostSent) {
            // credit the previous richest, to be pulled out via withdraw()
            pendingWithdrawals[richest] += msg.value;
            richest = msg.sender;
            mostSent = msg.value;
            return true;
        } else {
            return false;
        }
    }

    function withdraw() {
        uint amount = pendingWithdrawals[msg.sender];
        pendingWithdrawals[msg.sender] = 0;
        msg.sender.transfer(amount);
    }
}

In the example above, if the contract is attacked and a withdraw call gets stuck (for example, if msg.sender.transfer(amount) fails), the rest of the contract keeps working.

Restricting Access

Restricting access is a common pattern for contracts.

Note that you can never restrict any human or computer from reading the content of your transactions or your contract’s state. You can make it a bit harder by using encryption, but if your contract is supposed to read the data, so will everyone else.

You can restrict read access to your contract’s state by other contracts. This is actually the default unless you declare your state variables public.

Furthermore, you can restrict who can make modifications to your contract’s state or call your contract’s functions.

The use of function modifiers makes these restrictions highly readable.

pragma solidity ^0.4.11;

contract AccessRestriction {
    address public owner = msg.sender;
    uint public creationTime = now;

    modifier onlyBy(address _account) {
        require(msg.sender == _account);
        _;
    }

    function changeOwner(address _newOwner) onlyBy(owner) {
        owner = _newOwner;
    }

    modifier onlyAfter(uint _time) {
        require(now > _time);
        _;
    }

    function disown() onlyBy(owner) onlyAfter(creationTime + 6 weeks) {
        delete owner;
    }

    modifier costs(uint _amount) {
        require(msg.value >= _amount);
        _;
        if (msg.value > _amount)
            msg.sender.send(msg.value - _amount);
    }

    function forceOwnerChange(address _newOwner) costs(200 ether) {
        owner = _newOwner;
        if (uint(owner) & 0 == 1)
            return;
    }
}

State Machine

Contracts often act as a state machine, which means that they have certain stages in which they behave differently or in which different functions can be called.

A function call often ends a stage and transitions the contract into the next stage.

Function modifiers can be used in this situation to model the states and guard against incorrect usage of the contract.

In the following example, the modifier atStage ensures that the function can only be called at a certain stage; automatic timed transitions are handled by the modifier timedTransitions, which should be used for all functions.

Modifier Order Matters: If atStage is combined with timedTransitions, make sure that you mention it after the latter, so that the new stage is taken into account.

Finally, the modifier transitionNext can be used to automatically go to the next stage when the function finishes.

pragma solidity ^0.4.11;

contract StateMachine {
    enum Stages {
        AcceptingBlindedBids,
        RevealBids,
        AnotherStage,
        AreWeDoneYet,
        Finished
    }

    Stages public stage = Stages.AcceptingBlindedBids;

    uint public creationTime = now;

    modifier atStage(Stages _stage) {
        require(stage == _stage);
        _;
    }

    function nextStage() internal {
        stage = Stages(uint(stage) + 1);
    }

    modifier timedTransitions() {
        if (stage == Stages.AcceptingBlindedBids && now >= creationTime + 10 days)
            nextStage();
        if (stage == Stages.RevealBids && now >= creationTime + 12 days)
            nextStage();
        _;
    }

    function bid()
        payable
        timedTransitions
        atStage(Stages.AcceptingBlindedBids)
    {
        // ...
    }

    function reveal()
        timedTransitions
        atStage(Stages.RevealBids)
    {
        // ...
    }
}

Solidity Style Guide

Layout

4 spaces per indentation level

Use spaces for indentation

Use blank lines

contract A {
    // ...
}

contract B {
    // ...
}

Blank lines may be omitted between groups of related one-liners

contract A {
    function spam();
    function ham();
}

Use UTF-8 or ASCII encoding

Place import statements at the top of the file

Functions should be grouped according to their visibility and ordered:

  • constructor

  • fallback function (if exists)

  • external

  • public

  • internal

  • private

Within a grouping, place the constant functions last.

Use whitespace in expressions

Avoid extraneous whitespace in the following situations: immediately inside parentheses, brackets or braces, with the exception of single-line function declarations.

spam(ham[1], Coin({name: "ham"}));

// exception

function singleLine() { spam(); }

Avoid more than one space around an assignment or other operator in order to align it with another:

x = 1;
y = 2;
long_variable = 3;

Don’t include whitespace in the fallback function declaration:

function() {
    // ...
}
// not
function () {
    // ...
}

For control structures whose body contains a single statement, omitting the braces is OK if the statement is contained on a single line.

if (x < 10)
    x += 1;
else
    x -= 1;

Function declaration

function increment(uint x) returns (uint) {
    return x + 1;
}

// visibility precedes custom modifiers
function increment(uint x) public onlyOwner returns (uint) {
    return x + 1;
}

// long parameter list
function increment(
    uint x1,
    uint x2,
    uint x3,
    uint x4,
    uint x5,
    uint x6
) {
    // ...
}

// long visibility/modifier list
function increment(uint x)
    public
    onlyOwner
    priced
{
    // ...
}

Strings should be quoted with double quotes.

Contract and Library Names should be named using the CapWords style.

SimpleToken, SmartBank

Event Names should use CapWords

Function Names should use CamelCase

Function Arguments should use CamelCase

Local and State Variables should use CamelCase

Constants should be named with all capital letters with underscores separating words: MAX_BLOCKS

Modifiers should use CamelCase
