Creating TAR Archives

tar is a Linux command that archives entire files and directories into a single file that can be moved quickly to another location. It creates a tape archive (a "tarball") of your files, which you can extract later, and it can also compress the archive's contents via bzip2 or gzip.

Creating a .TAR

tar -cvvf backup.tar work

…creates a tar file named backup.tar which contains everything in the work directory, recursively

tar -cjvf backup.tbz work

…adding -j tells tar to compress files & directories with bzip2, backing up everything in the work directory and below. Note the different extension used -> .tbz, denoting it's a compressed archive

Extracting a .TAR

tar -xvvf backup.tar

…extracts (untars) everything from the backup.tar tarball into the directory you're in

Extract TarBall

If bzipped, extract with…

tar -xjvf backup.tbz

Or gzipped, extract with…

tar -xzvf backup.tar.gz

Gzipped ?

Gzip is essentially a free equivalent of WinZip without the you-gotta-pay-for-it stamp. Used on its own it works on single files, turning them into gzipped .gz files; combined with tar it lets you gzip tons of files at once (see the examples above).

To gzip a file do…

gzip myfile.txt

…this gzips it to myfile.txt.gz

To extract the gzipped file do…

gunzip myfile.txt.gz

Or Simply…

Archive & Compress directory I’m in

tar -zcvf backup.tgz . 

…Then Extract it

tar -xzvf backup.tgz
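Putting the pieces together, here's a round trip you can run end to end (the directory and file names are just examples):

```shell
# make a sample directory to archive (names are illustrative)
mkdir -p work
echo "hello" > work/notes.txt

# archive and gzip-compress it in one step
tar -zcvf backup.tgz work

# extract the tarball into a separate directory
mkdir -p restore
tar -xzvf backup.tgz -C restore

# the file comes back intact
cat restore/work/notes.txt
```

The -C flag tells tar to change into the given directory before extracting, which keeps the restored copy separate from the original.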

MySQL Storage Engines

MySQL is a very fast and flexible multi-user, multi-threaded database system, used most widely for web applications, and it offers the simple startup a good platform to build from at very little cost.

Supporting the ANSI SQL-99 query set, stored procedures, cursors, triggers, updatable views, text indexing, SSL and even database clustering, it has grown up from its relatively meager beginnings.

MySQL Storage Engines

MySQL offers different ways to handle and manage your data on top of the standard collation types, allowing you to choose an engine that closely fits your company's needs.

So if you're a large archive house that only needs to query old records you can opt for the ARCHIVE engine, or if you're accessing many remote sources you can choose the FEDERATED engine. The key here is that each one is designed around a specific daily function, allowing you to optimize your hardware for the best performance.

Here’s a look at what’s offered,


(default storage engine, best performance overall)

  • Default install: Yes
  • Data limitations: None
  • Index limitations: 64 indexes per table (32 pre 4.1.2); max 16 columns per index
  • Transaction support: No
  • Locking level: Table


(allows you to combine a number of identical MyISAM tables into one)

  • Data limitations: Underlying tables must be MyISAM
  • Index limitations: N/A
  • Transaction support: No
  • Locking level: Table


(stores all data in memory; lose power and you lose it all. Good for quick access, calculations, rapid temp tables)

  • Data limitations: BLOB and TEXT types not supported
  • Index limitations: None
  • Transaction support: No
  • Locking level: Table


(allows remote data access, combining many sources into one system)

  • Data limitations: Limited by remote database
  • Index limitations: N/A
  • Transaction support: No
  • Locking level: N/A


(insert & select only supported, compressed; good for logs, old data)

  • Data limitations: Data can only be inserted (no updates)
  • Index limitations: N/A
  • Transaction support: No
  • Locking level: N/A


(stores rows as comma-separated data, good for data transport)

  • Data limitations: None
  • Index limitations: Indexing is not supported
  • Transaction support: No
  • Locking level: Table


(allows you to test out possible data structures and schemas)

  • Data limitations: No data is stored, but statements are written to the binary log (and therefore distributed to slave databases)
  • Index limitations: N/A
  • Transaction support: No
  • Locking level: N/A


(original engine, included only for backwards compatibility)

  • Data limitations: Limited maximum database size (4GB)
  • Index limitations: Maximum 16 indexes per table, 16 parts per key
  • Transaction support: No
  • Locking level: Table


BDB (BerkeleyDB)
(hash-based storage engine, very quick to access & recover; great for data that does not change much, given its page-level locking)

  • Data limitations: None
  • Index limitations: Max 31 indexes per table, 16 columns per index; max key size 1024 bytes
  • Transaction support: Yes
  • Locking level: Page (8192 bytes)


(transaction-safe engine with caching & indexing of data both in memory and on disk, very fast recovery, and row-level locking that avoids most table-locking issues. There is a management overhead with InnoDB that requires your system to be optimised to use it, but it's great if you go that extra mile)

  • Data limitations: None
  • Index limitations: None
  • Transaction support: Yes (ACID compliant)
  • Locking level: Row

MySQL Commands

Here’s a collection of useful commands when working with the MySQL Database Server, either on the terminal or directly inside the database console.

Command Line

mysql -u root -h localhost

…. log in to the MySQL server with username 'root' on host localhost; this
will drop you into a SQL console where you can fire off common SQL
queries & commands (e.g. SELECT * FROM users, CREATE TABLE users …)

mysqladmin -u root password [mysqlpassword]

…. change the ‘root’ password

mysqladmin -u root create sessions_development

…. using ‘root’ account, create database ‘sessions_development’

mysqladmin -u root drop sessions_development

…. using ‘root’ account, delete database ‘sessions_development’

mysqldump -u root -ppassword --opt --all-databases > all.sql

…. backup all databases to disk

mysqldump -u root mydb > mydb.sql

…. backup only database ‘mydb’ to disk

mysql -u username -ppassword mydb < mydb.sql

…. restore database mydb from disk

Console Queries

CREATE TABLE new_tbl SELECT * FROM orig_tbl;

…. create one table from the results of a SELECT query


CREATE TABLE people (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, fullname VARCHAR(255));

…. creates a new table 'people' with an auto-incrementing
(AUTO_INCREMENT) 'id' field that is the primary key (PRIMARY KEY) and
can't be null (NOT NULL), along with a 'fullname' variable-length text string

INSERT INTO goods (price) VALUES (1.99);

…. insert a new record into goods with the field ‘price’ of 1.99

UPDATE goods SET price = 2.99 WHERE name = 'shampoo';

…. update ‘price’ value for record with ‘name’ of shampoo in ‘goods’


DROP TABLE IF EXISTS goods;

…. conditionally delete the table 'goods' only if it exists

SHOW databases;

…. list all databases on server

USE mydb;

…. switch to another database

DESC goods;

…. show table definition for ‘goods’ table


SHOW CREATE TABLE goods;

…. show the SQL syntax for creating the 'goods' table


SHOW FIELDS FROM goods;

…. see all of table 'goods' field formats


FLUSH PRIVILEGES;

…. reload all database permissions & privileges


COMMIT;

…. commit all pending transactions


ROLLBACK;

…. roll back the previous transaction
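COMMIT and ROLLBACK only take effect inside a transaction on a transaction-capable engine (InnoDB or BDB above; MyISAM ignores them). A sketch, reusing the 'goods' table from earlier:

```sql
START TRANSACTION;
UPDATE goods SET price = 2.99 WHERE name = 'shampoo';
COMMIT;        -- make the change permanent

START TRANSACTION;
DELETE FROM goods;
ROLLBACK;      -- changed your mind: undo everything since START TRANSACTION
```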

Content Delivery Types


Sharding

With sharding you analyze the users pulling the biggest load on your database and separate them out into shards, so rather than User A hitting the main server, they hit a shard of that data server; same goes for User B. In essence you move the people requesting the most out to individual boxes rather than the main one. Employ an army of cheap Linux boxes to create this modified farm and balance the data requests strategically so they're evenly spread.

It's a federated model: groups of users are stored together in boxes of shards.

So if one box goes down, the others still operate. The work is shared out among your virtual server farm, you get more write performance and you reduce the bottleneck; you also work out where your main draw is and share that out, so one box isn't left doing all the work.

There are some disadvantages in going this way but it’s a good start in solving a potential problem as your site grows.
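The routing step described above can be sketched in a few lines: hash each user ID to pick a shard, so the same user always lands on the same box. The host names and hash function are purely illustrative:

```javascript
// Minimal shard-routing sketch: each user ID is hashed to one of
// N shard servers, spreading the heaviest users across cheap boxes
// instead of piling them all onto one main database.
const SHARDS = ['shard-0.db', 'shard-1.db', 'shard-2.db'];

function hashCode(str) {
  // simple deterministic string hash (djb2 variant)
  let h = 5381;
  for (const ch of str) {
    h = ((h * 33) + ch.charCodeAt(0)) >>> 0; // keep as unsigned 32-bit
  }
  return h;
}

function shardFor(userId) {
  return SHARDS[hashCode(userId) % SHARDS.length];
}

console.log(shardFor('userA')); // always the same shard for this user
console.log(shardFor('userA') === shardFor('userA')); // true
```

The key property is determinism: routing is a pure function of the user ID, so no lookup table is needed to find a user's box.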

Load Balancing

You can also employ Linux's native kernel load balancer to help (Google this), plus there are packages available for the OS to help in this area.


Clustering may also be a good thing to look at. It's like sharding but simpler, in that you build a farm of servers and load balance the users across them: the first user hits box 1, the next user hits box 2, and so on; once each box has a user you go back to box 1 and add another, balancing the load out.


Content Delivery Networks

Content Delivery Networks (CDNs) are widely available. A CDN takes your content and seeds it onto separate servers across the world, so when someone requests something they receive it from the source closest to their location.

OOP JavaScript

Continuing my breakdown of Functional Programming I’m going to go over some concepts from Object Oriented Programming using ES6.

Class & Object

  • An object is a thing that can be seen or touched; in software we try to represent the same real-life things with objects.
  • Object Oriented Programming is nothing but code built around those objects.
  • A class is not an object; it's like a blueprint that can generate objects. So a class sets the classification of objects with their properties and capabilities.
  • Any number of instances can be created from a class; each instance is called an object.
class Car {
  /* Properties and Actions */

let figo = new Car();

console.log(typeof Car);            // function
console.log(typeof figo);           // object
console.log(figo instanceof Car);   // true

Constructor, Properties and Methods

  • A constructor is a function that gets called automatically when you create an instance of the class.
  • Instance variables get created and initialized in the constructor.
  • Instance variables are nothing but the properties of the object.
  • Methods are again functions attached to the instance; all instances created from the same class will have those methods or actions.
  • To access properties and methods inside the class, we need the this keyword.
class App {
  // constructor
  constructor(name) { = name

  // arrow function
  checkUser = (id) => {
    return id

let myApp = new App('john');
console.log(    // returns john
myApp.checkUser('424')      // returns 424

Static Properties & Methods

The static keyword defines a static method for a class. Static methods aren't called on instances of the class; instead, they're called on the class itself. These are often utility functions, such as functions to create or clone objects.

class App {
  // constructor
  constructor() { = 'john'

  static staticMethod() {
    return 'static method'

let myApp = new App();
console.log(App.staticMethod());               // => static method
console.log(myApp.constructor.staticMethod()); // => static method

Getters & Setters

With getters and setters we can run different code when getting a value and when setting one, using functions; perfect for validation.

class App {
  constructor(name) {
      this._name = name;

  // getter
  get name() {
      return this._name;

  // setter
  set name(val) {
      this._name = val;

let myApp = new App('john');
console.log(  // => john = 'peter'          // set value
console.log(  // => peter


Inheritance & super

With extends one class inherits from another, and super calls the parent's version of a method; this works for both instance and static methods.

class Meetup {
    organise() {
        console.log('Organising Meetup');

    static getMeetupFounderDetails() {
        console.log("Meetup Founder Details");

class techMeet extends Meetup {
    organise() {
        console.log('Organising techMeet');
        super.organise();

    static getMeetupFounderDetails() {
        console.log("techMeet Founder Details");
        super.getMeetupFounderDetails();

let js = new techMeet();

/* Output:
Organising techMeet
Organising Meetup */

/* Output:
techMeet Founder Details
Meetup Founder Details */