This section covers the following topics to help you get up and running with Hibari:
- link:#system-requirements[System Requirements]
- link:#required-software[Required Third Party Software]
- link:#download-hibari[Downloading Hibari]
- link:#installing-single-node[Installing a Single-Node Hibari System]
- link:#starting-single-node[Starting and Stopping a Single-Node Hibari System]
- link:#installing-multi-node[Installing a Multi-Node Hibari Cluster]
- link:#starting-multi-node[Starting and Stopping a Multi-Node Hibari Cluster]
- link:#creating-tables[Creating New Tables]
[[system-requirements]]
System Requirements¶
Hibari will run on any OS that the Erlang VM supports, which includes most Unix and Unix-like systems, Windows, and Mac OS X. See Implementation and Ports of Erlang in the official Erlang documentation for further information.
For guidance on hardware requirements in a production environment, see link:hibari-sysadmin-guide.en.html#brick-hardware[Notes on Brick Hardware] in the Hibari System Administrator’s Guide.
[[required-software]]
Required Third-Party Software¶
Hibari’s requirements for third-party software depend on whether you’re doing a single-node installation or a multi-node installation.
Required Software for a Single-Node Installation:¶
The node on which you plan to install Hibari must have the following software:
- OpenSSL - http://www.openssl.org/
- Required for Erlang’s “crypto” module
Required Software for a Multi-Node Installation:¶
When you install Hibari on multiple nodes you will use an installer tool that simplifies the cluster set-up process. When you use this tool you will identify the hosts on which you want Hibari to be installed, and the tool will manage the installation of Hibari onto those target hosts. You can run the tool itself from one of your target Hibari nodes or from a different machine. There are distinct requirements for third-party software on the “installer node” (the machine from which you run the installer tool) and on the Hibari nodes (the machines on which Hibari will be installed and run).
Installer Node Required Software¶
The installer node must have the software listed below. If you are missing any of these items, you can use the provided links for downloads and installation instructions.
- Bash - http://www.gnu.org/software/bash/
- Expect - http://www.nist.gov/el/msid/expect.cfm
- Perl - http://www.perl.org/
- SSH (client) - http://www.openssh.com/
- Git - http://git-scm.com/
- Must be version 1.5.4 or newer
- If you haven’t yet done so, please configure your email address and name for Git:
- If you haven’t yet done so, you must sign up for a GitHub account: https://github.com/
There are currently no known version requirements for Bash, Expect,Perl, or SSH.
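The Git configuration step above uses the standard git-config commands; substitute your own address and name for the placeholder values shown here:

```shell
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
```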
Hibari Nodes Required Software¶
The nodes on which you plan to install Hibari must have the software listed below.
- SSH (server) - http://www.openssh.com/
- OpenSSL - http://www.openssl.org/
- Required for Erlang’s “crypto” module
[[download-hibari]]
Downloading Hibari¶
Hibari is not yet available as a pre-built release. In the meantime, you can build Hibari from source. Follow the instructions in <<HibariBuildingSource>>, and then return to this section to continue the set-up process.
When you build Hibari, the output is two files that you will later use in the set-up process:
- A tarball package hibari-X.Y.Z-DIST-ARCH-WORDSIZE.tgz
- An md5sum file hibari-X.Y.Z-DIST-ARCH-WORDSIZE-md5sum.txt
X.Y.Z is the release version, DIST is the release distribution, ARCH is the release architecture, and WORDSIZE is the release wordsize.
[[installing-single-node]]
Installing a Single-Node Hibari System¶
A single-node Hibari system will not provide data replication and redundancy in the way that a multi-node Hibari cluster will. However, you may wish to deploy a simple single-node Hibari system for testing and development purposes.
- Create a directory for running Hibari:
- Untar the Hibari tarball package that you created when you built Hibari from source:
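The two steps above can be sketched as the following shell function (the directory name and the package-name pattern are illustrative; substitute the actual tarball name from your build):

```shell
# Sketch: unpack a Hibari package tarball into a fresh run directory.
install_hibari_pkg() {
  pkg="$1"                      # e.g. hibari-X.Y.Z-DIST-ARCH-WORDSIZE.tgz
  dir="${2:-running-directory}" # directory for running Hibari
  mkdir -p "$dir"
  tar -C "$dir" -xf "$pkg"      # extract the package into the run directory
}
# Usage: install_hibari_pkg hibari-X.Y.Z-DIST-ARCH-WORDSIZE.tgz
```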
Important
On your Hibari node, in the system’s /etc/sysctl.conf file, set vm.swappiness=1. Swappiness is not desirable for an Erlang VM.
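One way to apply the setting above (as root; `sysctl -p` reloads the file without a reboot):

```shell
echo 'vm.swappiness=1' >> /etc/sysctl.conf
sysctl -p
```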
[[starting-single-node]]
Starting and Stopping Hibari on a Single Node¶
Starting and Bootstrapping Hibari¶
- Start Hibari:
- If this is the first time you’ve started Hibari, bootstrap the system:
The Hibari bootstrap process starts Hibari’s Admin Server on the single node and creates a single table “tab1” serving as Hibari’s default table. For information on creating additional tables, see link:#creating-tables[Creating New Tables].
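The start and bootstrap steps above might look like the following; the paths assume you run the commands from the directory where the package was untarred, and the exact script names should be verified against your installation:

```shell
bin/hibari start            # step 1: start Hibari
bin/hibari-admin bootstrap  # step 2: first-time-only bootstrap
```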
Verifying Hibari¶
Do these quick checks to verify that your single-node Hibari system is up and running.
- Confirm that you can open the “Hibari Web Administration” page:
- Confirm that you can successfully ping the Hibari node:
IMPORTANT: A single-node Hibari system is hard-coded to listen on the localhost address 127.0.0.1. Consequently the Hibari node is reachable only from the node itself.
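A sketch of the two checks above. The admin port 23080 is the one used by the Admin Server GUI mentioned in link:#creating-tables[Creating New Tables]; the ping sub-command is assumed to follow the usual Erlang release-script convention, so verify both against your installation:

```shell
curl http://127.0.0.1:23080/   # the Hibari Web Administration page
bin/hibari ping                # reports whether the node is responsive
```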
Stopping Hibari¶
To stop Hibari:
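As a sketch, assuming the same bin/ layout as the start step earlier in this section:

```shell
bin/hibari stop
```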
[[installing-multi-node]]
Installing a Multi-Node Hibari Cluster¶
Before you install Hibari onto the target nodes you must complete these preparation steps:
- Set up required user privileges on the installer node and on the target Hibari nodes.
- Download the Cluster installer tool.
- Configure the Cluster installer tool.
Setting Up Your User Privileges¶
The system user ID that you use to perform the installation must be different from the Hibari runtime user. Your installing user account ($USER) must be set up as follows:
- $USER must exist on the installer node and also on the target Hibari nodes.
- $USER on the installer node must have SSH private/public keys, with the SSH agent set up to enable password-less SSH login.
- The $USER account must be accessible with password-less SSH login on the target Hibari nodes.
- $USER must have password-less sudo access on the target Hibari nodes.
If your installing user account does not currently have the above privileges, follow these steps:
- As the root user, add your installing user ($USER) to the installer node. Then on each of the Hibari nodes, add your installing user and grant your user password-less sudo access:
Note
If you get a “sudo: sorry, you must have a tty to run sudo” error while testing sudo, try commenting out the “Defaults requiretty” line inside the /etc/sudoers file.
- On the installer node, create a new SSH private/public key for your installing user:
- On each of the Hibari nodes:
- Append an entry for the installer node to the ~/.ssh/known_hosts file.
- Append an entry for your public SSH key to the ~/.ssh/authorized_keys file.
In the example below, the target Hibari nodes are dev1, dev2, and dev3:
Note
If your installer node will be one of the Hibari cluster nodes, make sure that you ssh-copy-id to the installer node also.
- Confirm that password-less SSH access to each of the Hibari nodes works as expected:
Tip
If you need more help with SSH set-up, check http://inside.mines.edu/~gmurray/HowTo/sshNotes.html.
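The key-generation and key-distribution steps above can be sketched as follows. The key file location is the usual OpenSSH default, and the empty passphrase is one common choice (otherwise use ssh-agent); adjust to your environment:

```shell
# Sketch of the SSH key setup for the installing user ($USER).
gen_install_key() {
  keyfile="${1:-$HOME/.ssh/id_rsa}"
  # -N "" means no passphrase; -q suppresses the banner output
  ssh-keygen -t rsa -b 4096 -f "$keyfile" -N "" -q
}
# Usage on the installer node, then copy the key to each target node
# (dev1..dev3 are the example host names from above):
#   gen_install_key
#   ssh-copy-id dev1; ssh-copy-id dev2; ssh-copy-id dev3
```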
[[download-cluster]]
Downloading the Cluster Installer Tool¶
“Cluster” is a simple tool for installing, configuring, and bootstrapping a cluster of Hibari nodes. The tool is not part of the Hibari package itself, but is available from GitHub.
Note
The Cluster tool should meet the needs of most users. However, this tool’s “target node” recipe is currently Linux-centric (e.g. useradd, userdel, ..). Patches and contributions for other OSes and platforms are welcome. For non-Linux deployments, the Cluster tool is simple enough that installation can be done manually by following the tool’s recipe.
- Create a working directory into which you will download the Cluster installer tool:
- Download the Cluster tool’s Git repository from GitHub:
The download creates a sub-directory clus under which the installer tool and various supporting files are stored.
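A sketch of the two steps above; the working-directory name is arbitrary, and the repository URL follows the hibari GitHub organization layout (verify it before cloning):

```shell
mkdir working-directory && cd working-directory
git clone https://github.com/hibari/clus.git
```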
[[config-cluster]]
Configuring the Cluster Installer Tool¶
The Cluster tool requires some basic configuration information that indicates how you want your Hibari cluster to be set up. You will create a simple text file that specifies your desired configuration, and then later use the file as input when you run the Cluster tool.
It’s simplest to create the file in the same working directory in which you downloaded the Cluster tool. You can give the file any name that you want; for purposes of these instructions we will use the file name hibari.config.
Below is a sample hibari.config file. The file that you create must include all of these parameters, and the values must be formatted in the same way as in this example (with parentheses and quotation marks as shown). Parameter descriptions follow the example file.
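The sample file is reconstructed here from the parameter descriptions that follow. The host names (dev1..dev3) match the examples used elsewhere in this section, but the IP addresses and port numbers are illustrative placeholders (check the heartbeat port defaults against the partition_detector.app.src file referenced later in this section). Network A and Network B deliberately share one physical network, which the CAUTION below notes is acceptable only for testing:

```
ADMIN_NODES=(dev1 dev2 dev3)
BRICK_NODES=(dev1 dev2 dev3)
BRICKS_PER_CHAIN=2
ALL_NODES=(dev1 dev2 dev3)
ALL_NETA_ADDRS=("10.0.0.1" "10.0.0.2" "10.0.0.3")
ALL_NETB_ADDRS=("10.0.0.1" "10.0.0.2" "10.0.0.3")
ALL_NETA_BCAST="10.0.0.255"
ALL_NETB_BCAST="10.0.0.255"
ALL_NETA_TIEBREAKER="10.0.0.254"
ALL_HEART_UDP_PORT="63099"
ALL_HEART_XMIT_UDP_PORT="63100"
```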
[[eligible-admin-nodes]]
- ADMIN_NODES
- Host names of the nodes that will be eligible to run the Hibari Admin Server. For complete information on the Admin Server, see link:hibari-sysadmin-guide.en.html#admin-server-app[The Admin Server Application] in the Hibari System Administrator’s Guide.
- BRICK_NODES
- Host names of the nodes that will serve as Hibari storage bricks. Note that in the sample configuration file above there are three storage brick nodes (dev1, dev2, and dev3), and these three nodes are each eligible to run the Admin Server.
- BRICKS_PER_CHAIN
- Number of bricks per replication chain. For example, with two bricks per chain there will be two copies of the data stored in the chain (one copy on each brick); with three bricks per chain there will be three copies, and so on. For an overview of chain replication, see link:#chain-replication[Chain Replication for High Availability and Strong Consistency] in this document. For chain replication detail, see the Hibari System Administrator’s Guide.
- ALL_NODES
- This list of all Hibari nodes is the union of ADMIN_NODES and BRICK_NODES.
- ALL_NETA_ADDRS
- As described in link:hibari-sysadmin-guide.en.html#partition-detector[The Partition Detector Application] in the Hibari System Administrator’s Guide, the nodes in a multi-node Hibari cluster should be connected by two networks, Network A and Network B, in order to detect and manage network partitions. The ALL_NETA_ADDRS parameter specifies the IP addresses of each Hibari node within Network A, which is the network through which data replication and other Erlang communications will take place. The list of IP addresses should correspond in order to the host names you listed in the ALL_NODES setting.
- ALL_NETB_ADDRS
- IP addresses of each Hibari node within Network B. Network B is used only for heartbeat broadcasts that help to detect network partitions. The list of IP addresses should correspond in order to the host names you listed in the ALL_NODES setting.
- ALL_NETA_BCAST
- IP broadcast address for Network A.
- ALL_NETB_BCAST
- IP broadcast address for Network B.
- ALL_NETA_TIEBREAKER
- Within Network A, the IP address for the network monitoring application to use as a “tiebreaker” in the event of a partition. If the network monitoring application on a Hibari node determines that Network A is partitioned and Network B is not partitioned, and the Network A tiebreaker IP address responds to a ping, then the local node is on the “correct” side of the partition. Ideally the tiebreaker should be the address of the Layer 2 switch or Layer 3 router through which all Erlang network distribution communications flow.
- ALL_HEART_UDP_PORT
- UDP port for heartbeat listener.
- ALL_HEART_XMIT_UDP_PORT
- UDP port for heartbeat transmitter.
For more detail on network monitoring configuration settings, see the partition-detector’s OTP application source file (https://github.com/hibari/partition-detector/raw/master/src/partition_detector.app.src).
CAUTION: In a production setting, Network A and Network B should be physically different networks and network interfaces. However, for testing and development purposes the same physical network can be used for Network A and Network B (as in the sample configuration file above).
As final configuration steps, on each Hibari node:
- Make sure that the /etc/hosts file has entries for all Hibari nodes in the cluster. For example:
- In the system’s /etc/sysctl.conf file, set vm.swappiness=1. Swappiness is not desirable for an Erlang VM.
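A sketch of the /etc/hosts entries, using the illustrative dev1..dev3 host names from the sample configuration (the addresses are placeholders; use your own):

```
10.0.0.1    dev1
10.0.0.2    dev2
10.0.0.3    dev3
```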
Installing Hibari¶
From your installer node, logged in as the installer user, take these steps to create your Hibari cluster:
- In the working directory in which you link:#download-cluster[downloaded the Cluster tool] and link:#config-cluster[created your cluster configuration file], place a copy of the Hibari tarball package and md5sum file:
- Create the “hibari” user on all Hibari nodes:
Note
If the “hibari” user already exists on the target nodes, the -f option will forcefully delete and then re-create the “hibari” user.
- Install the Hibari package on all Hibari nodes, via the newly created “hibari” user:
Note
By default the Cluster tool installs Hibari into /usr/local/var/lib on the target nodes. If you prefer a different location, before doing the install open the clus.sh script (in your working directory, under /clus/priv/) and edit the CT_HOMEBASEDIR variable.
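The user-creation and install steps above might look like the following. The sub-command names and argument order are assumptions recalled from the Cluster tool’s recipe and should be verified against the README in your clus checkout:

```shell
# Create the "hibari" user on all target nodes (-f forces re-creation):
./clus/priv/clus.sh -f init hibari hibari.config
# Install the package on all target nodes via the "hibari" user:
./clus/priv/clus-hibari.sh -f init hibari hibari.config hibari-X.Y.Z-DIST-ARCH-WORDSIZE.tgz
```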
[[starting-multi-node]]
Starting and Stopping a Multi-Node Hibari Cluster¶
You can use the Cluster installer tool to start and stop your multi-node Hibari cluster, working from the same node from which you managed the installation process. Note that in each of the Hibari commands in this section you’ll be referencing the name of the link:#config-cluster[Cluster tool configuration file] that you created during the installation procedure.
Starting and Bootstrapping the Hibari Cluster¶
- Change to the working directory in which you downloaded the Cluster tool, then start Hibari on all Hibari nodes via the “hibari” user:
- If this is the first time you’ve started Hibari, bootstrap the system via the “hibari” user:
The Hibari bootstrap process starts Hibari’s Admin Server on the first link:#eligible-admin-nodes[eligible admin node] and creates a single table “tab1” serving as Hibari’s default table. For information about creating additional tables, see link:#creating-tables[Creating New Tables].
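A sketch of the start and bootstrap steps above; as with the install commands, the sub-command names are assumptions to be verified against the Cluster tool’s README:

```shell
./clus/priv/clus-hibari.sh -f start hibari hibari.config       # start all nodes
./clus/priv/clus-hibari.sh -f bootstrap hibari hibari.config   # first time only
```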
Note
If bootstrapping fails due to an “another_admin_server_running” error, please stop the other Hibari cluster(s) running on the network, or reconfigure the Cluster tool to assign link:#eligible-admin-nodes[Hibari heartbeat listener ports] that are not in use by another Hibari cluster or other applications, and then repeat the cluster installation procedure.
Verifying the Hibari Cluster¶
Do these simple checks to verify that Hibari is up and running.
- Confirm that you can open the “Hibari Web Administration” page:
- Confirm that you can successfully ping each of your Hibari nodes:
Stopping the Hibari Cluster¶
Stop Hibari on all Hibari nodes via the “hibari” user:
[[creating-tables]]
Creating New Tables¶
The simplest way to create a new table is via the Admin Server’s GUI. Open http://localhost:23080/ and click the “Add a table” link. In addition to the GUI, the hibari-admin tool can also be used to create a new table. See the hibari-admin tool for usage details.
Note
For information about creating tables using the administrative API, see the Hibari System Administrator’s Guide.
When adding a table through the GUI, you have these table configuration options:
- Local
- Boolean. If true, all bricks for storing the new table’s data will be created on the local node, i.e. the node that’s running the Admin Server. If false, then the “NodeList” field is used to specify which cluster nodes the new bricks should use.
- BigData
- Boolean. If true, value blobs will be stored on disk.
- DiskLogging
- Boolean. If true, all updates will be written to the write-ahead log for persistence. If false, bricks will run faster but at the expense of data loss in a cluster-wide power failure.
- SyncWrites
- Boolean. If true, all writes to the write-ahead log will be flushed to stable storage via the fsync(2) system call. If false, bricks will run faster but at the expense of data loss in a cluster-wide power failure.
- VarPrefix
- Boolean. If true, then a variable-length prefix of the key will be used as input for the consistent hashing function. If false, the entire key will be used.
Many applications can benefit from using a variable-length or fixed-length prefix hashing scheme. As an example, consider an application that maintains state for various users. The app wishes to use micro-transactions to update various keys (in the same table) related to that user. The table can be created to use VarPrefix=true, together with VarPrefixSeparator=47 (ASCII 47 is the forward slash character) and VarPrefixNumSeparators=2, to create a hashing scheme that will guarantee that the keys /FooUser/summary, /FooUser/thing1, and /FooUser/thing9 are all stored by the same chain.
Note
The HTTP interface for creating tables does not expose the fixed-length key prefix scheme. The Erlang API must be used in this case.
- VarPrefixSeparator
- Integer. Define the character used for variable-length key prefix calculation. Note that the default value of ASCII 47 (the “/” character), or any other character, does not imply any UNIX/POSIX-style file or directory semantics.
- VarPrefixNumSeparators
- Integer. Define the number of VarPrefixSeparator bytes, and all bytes in between, used for consistent hashing. If VarPrefixSeparator=47 and VarPrefixNumSeparators=3, then for a key such as /foo/bar/baz, the prefix used for consistent hashing will be /foo/bar/.
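The prefix rule just described can be sketched in a few lines of Python. The function name is ours, and the behavior for keys containing fewer separators than VarPrefixNumSeparators is an assumption (the Hibari source is authoritative):

```python
def hash_prefix(key: bytes, separator: int = 47, num_separators: int = 3) -> bytes:
    """Return the key prefix used for consistent hashing: everything up to
    and including the num_separators-th occurrence of the separator byte."""
    seen = 0
    for i, b in enumerate(key):
        if b == separator:
            seen += 1
            if seen == num_separators:
                return key[:i + 1]
    return key  # assumption: with too few separators, the whole key is hashed

# The worked example above: separator 47 ("/"), three separators.
print(hash_prefix(b"/foo/bar/baz"))  # b'/foo/bar/'
```

With VarPrefixNumSeparators=2, the earlier /FooUser example yields the prefix /FooUser/ for all three keys, which is why they land on the same chain.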
- Bricks
- Integer. If Local=true (see above), then this integer defines the total number of logical bricks that will be created on the local node. This value is ignored if Local=false.
- BPC
- Integer. Define the number of bricks per chain.
The algorithm used for creating the chain -> brick mapping is based on a “striping” principle: enough chains are laid across bricks in a stripe-wise manner so that all nodes (aka physical bricks) will have the same number of logical bricks in head, middle, and tail roles. See the example in the Hibari System Administrator’s Guide of link:hibari-sysadmin-guide.en.html#3-chains-striped-across-3-bricks[3 chains striped across three nodes].
The Erlang API must be used to create tables with other chain layoutpatterns.
- NodeList
- Comma-separated string. If Local=false, specify the list of nodes that will run logical bricks for the new table. Each node in the comma-separated list should take the form NodeName@HostName. For example, use hibari1@machine-a,hibari1@machine-b,hibari1@machine-c to specify three nodes.
- NumNodesPerBlock
- Integer. If Local=false, then this integer will affect the striping behavior of the default chain striping algorithm. This value must be zero (i.e. this parameter is ignored) or a multiple of the BPC parameter.
For example, if NodeList contains nodes A, B, C, D, E, and F, then the following striping patterns would be used:
- NumNodesPerBlock=0 would stripe across all 6 nodes for 6 chains total.
- NumNodesPerBlock=2 and BPC=2 would stripe 2 chains across nodes A & B, 2 chains across C & D, and 2 chains across E & F.
- NumNodesPerBlock=3 and BPC=3 would stripe 3 chains across nodes A & B & C and 3 chains across D & E & F.
- BlockMultFactor
- Integer. If Local=false, then this integer will affect the striping behavior of the default chain striping algorithm. This value must be zero (i.e. this parameter is ignored) or greater than zero.
For example, if NodeList contains nodes A, B, C, D, E, and F, then the following striping patterns would be used:
- NumNodesPerBlock=0 and BlockMultFactor=0 would stripe across all 6 nodes for 6 chains total.
- NumNodesPerBlock=2 and BlockMultFactor=5 and BPC=2 would stripe 2*5=10 chains across nodes A & B, 2*5=10 chains across C & D, and 2*5=10 chains across E & F, for a total of 30 chains.
- NumNodesPerBlock=3 and BlockMultFactor=4 and BPC=3 would stripe 3*4=12 chains across nodes A & B & C and 3*4=12 chains across D & E & F, for a total of 24 chains.
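The chain totals in the examples above follow a simple pattern, sketched here in Python. The formula is inferred from the worked examples (all of which have NumNodesPerBlock equal to a multiple of BPC, as required), not taken from the Hibari source:

```python
def total_chains(num_nodes: int, nodes_per_block: int, block_mult_factor: int) -> int:
    """Total chains produced by the default striping algorithm,
    inferred from the worked examples above."""
    mult = block_mult_factor if block_mult_factor > 0 else 1
    if nodes_per_block == 0:
        return num_nodes * mult  # stripe across all nodes: one chain per node, scaled
    blocks = num_nodes // nodes_per_block  # e.g. 6 nodes / 2 per block = 3 blocks
    return blocks * nodes_per_block * mult

print(total_chains(6, 2, 5))  # 30, matching the second example
```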