Projects

Some of these projects are worth a closer look.

Labgent.com   Non Disclosed *Fresh*
Antipasti.be   Recipes blog
Fake Screen   Fake Screens for CGI (film)
Tickee   Online Ticketing Sales Service
T.P.O.   my Stage Management principal
The Fat Cow   Pop-up restaurant
 

Koen Betsens

Sometimes it's allowed to linger for a while.


3-Layer Structure

02/03/2015

There is a magical beast I’ve been working on across multiple projects, together with some beautiful minds, and we’ve given it a name: the 3-Layer Structure.
In short, a 3-Layer Structure allows small development teams to build an API-centric, scalable and above all secure project. All you need is three VPS servers * and all open source software. The beauty lies in the minimal cost and effort, and the proof is in the ease of explaining it.
(* from $5 a pop at Digital Ocean – affiliate link, save $10)

Layer 1: App Cloud.

All your apps can be static, served from a CDN and even 100% JavaScript. All authentication and dynamic data is retrieved through the API and stored front-end (e.g. IndexedDB) if you like.
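To make the front-end storage idea concrete, here is a small sketch in plain JavaScript. A `Map` stands in for IndexedDB, and `fetchFromApi` is a hypothetical stub for the real HTTP call to the API layer:

```javascript
// Sketch of Layer 1 caching: dynamic data comes in through the API
// once, then is served from a front-end store. A Map stands in for
// IndexedDB here; fetchFromApi is a hypothetical stub.
const cache = new Map();

async function fetchFromApi(endpoint) {
  // stub: in a real app this would be an HTTP call to the API layer
  return { endpoint, data: "fresh from the API" };
}

async function getCached(endpoint) {
  if (cache.has(endpoint)) return cache.get(endpoint); // no round trip
  const response = await fetchFromApi(endpoint);
  cache.set(endpoint, response);
  return response;
}

// Usage: the second call is answered locally
getCached("/1/hello").then(() => getCached("/1/hello"));
```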

Layer 2: API

Your API is the central hub. What you need here is a super light router to dispatch all requests and responses – both endpoints and authentication – to a Message Queue (foreground enabled). You might also want to consider storing the API docs and authentication HTML templates on this machine, to prevent fragmentation.
Never connect your API to your DB! Your API logic should be deployed from a repo, so you can disable FTP and any other access to the machine, because your API will be the favorite address for intrusion attempts. Automated attacks won’t do dramatic harm because of the logic segmentation, but still, let’s keep security simple.

Layer 3: Business Logic (aka Workers)

You can scale this layer in any direction you want, tailored to your project. You can have 1 little machine running some worker nodes, or multiple VPSes, paired with a cluster of DB machines for central data storage, in any flavour you like.
Your worker connects to the MQ server on the API, handles the job, and sends the response back. Since the API doesn’t know anything about the workers (it only cares about its Message Queue), there is no way for voyeurs to find out where your business-critical logic and DB are running. Thus, safety by simplicity.

In short, you provide unlimited, easy to maintain scalability on frontend and backend level, with a light-weight gatekeeper in the middle.
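As a toy illustration of that flow, here is the three-layer round trip in plain JavaScript. An in-process object stands in for the Message Queue, and all names (`register`, `dispatch`) are illustrative, not a real MQ client:

```javascript
// Toy sketch of the 3-Layer flow: the API layer only talks to its
// queue; the worker layer registers handlers by job name. A plain
// object stands in for the Message Queue.
const handlers = {};

// Worker layer: subscribe business logic to the queue
function register(jobName, fn) {
  handlers[jobName] = fn;
}

// API layer: dispatch a foreground job and relay the response
function dispatch(jobName, payload) {
  const handler = handlers[jobName];
  if (!handler) return { error: "no worker registered for " + jobName };
  return handler(payload);
}

// The worker holds the business logic, invisible from the outside
register("hello", (p) => ({ response: "Hello, " + p.name + "!" }));

// The app calls the API, which routes the job through the queue
const result = dispatch("hello", { name: "World" });
```

The point of the sketch: the API function never references the worker directly, only the queue between them.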

You can set this up in a Laravel-flavoured LEMP setup. Or in Python, with the Pyramid Framework. Or simply in Node with LoopBack.
Let’s take a look at how it’s done with Laravel.

1. Set up your servers (*)
2. Set up the repo flow
3. Set up the Frontend (Node.js alternative)
4. Set up the API
5. Set up the Worker
6. Set up the MQ and Hello World!

The Spiredeck series is also based on the 3-layer structure, and is 100% JavaScript.
Spiredeck – Creating a hybrid frontend app
Spiredeck – API Enabled
Spiredeck – The Pack of Workers

For Python lovers, the former Tick.ee Project (now open source) provides a rough reference.

We’ve been working with this architecture since 2011 (before MQ was common, “Cloud” was not yet a marketing term and the default API response was XML), and never looked back.


JobServer package

02/03/2015

The Jobserver package is a Gearman dispatch implementation for Laravel. The package functions as an abstraction layer to send both foreground and background jobs to an MQ server (Gearman in this case).
The Jobserver package is tailored for use in the 3-layer structure.

Jobserver source (Github)

Jobserver package (Packagist)

Installation

Add the Jobserver package to the composer requirements of your API project. In project/composer.json, add the highlighted entry:

{
	"name": "project",
	"description": "Project",
	"require": {
		"laravel/framework": "4.2.*",
		"koenbetsens/jobserver": "dev-master"
	}
}

The package has 2 modes: “Gearman mode” (default) and “Synchronized mode”, which enables functional communication without an actual MQ install.
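The split between the two modes can be sketched like this in JavaScript terms (the package itself is PHP; every name here is hypothetical): in sync mode, and only with debug on, the job function is called directly, otherwise the job goes to the queue.

```javascript
// Hypothetical sketch of the two Jobserver modes: sync mode calls the
// job function in-process (debug only), Gearman mode hands the job to
// the queue (stubbed here as a plain object).
function sendJob(name, payload, config, jobs) {
  if (config.synchronized && config.debug) {
    return jobs[name](payload); // Synchronized mode: no MQ needed
  }
  return { queued: name, payload }; // Gearman mode: off to the queue
}

const jobs = { reverse: (s) => s.split("").reverse().join("") };

// With debug on, the job runs locally; with debug off, sync mode is ignored
const direct = sendJob("reverse", "abc", { synchronized: true, debug: true }, jobs);
const queued = sendJob("reverse", "abc", { synchronized: true, debug: false }, jobs);
```

Note how the `debug` flag wins: even with `synchronized` set, a production-level environment still queues the job.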

Gearman mode

If your Gearman server runs on the same machine as your API project, Jobserver will work out of the box. To send your jobs to an external queue, create a config file named app/config/gearman.php to store your Gearman server location. You can add multiple servers, or single servers for multiple environments, by creating config environment folders.

<?php

return array
(
	/*
	|--------------------------------------------------------------------------
	| Gearman Settings
	|--------------------------------------------------------------------------
	|
	| Gearman Servers must be configured in their environments.
	|
	*/
	
	'servers' => array
	(
		'gearman.project-url.ext' => '4730'
	)

);

Synchronized (local) mode

To save yourself the hassle of installing Gearman locally for development, you can enable sync mode for php-cli based execution of the job load. This mode only works when debug mode is on, to prevent sync mode in production-level environments.

Configuration
The configuration can be added directly to your app/config/local/app.php file. Your worker path points to the job functions directory (usually named /jobs).

<?php

return array(

	/*
	|--------------------------------------------------------------------------
	| Application Debug Mode
	|--------------------------------------------------------------------------
	|
	| When your application is in debug mode, detailed error messages with
	| stack traces will be shown on every error that occurs within your
	| application. If disabled, a simple generic error page is shown.
	|
	*/

	'debug' => true,
	
	/*
	|--------------------------------------------------------------------------
	| Synchronized
	|--------------------------------------------------------------------------
	| If set, the API will call the worker directly, instead of using a jobserver.
	*/
	
	'synchronized' => true,
	
	/*
	|--------------------------------------------------------------------------
	| Worker path
	|--------------------------------------------------------------------------
	| This path is used by the local Jobserver to sync a queue request.
	*/
	'worker' => array
	(
		'path' => '/path/to/project/worker/jobs'
	)
);

Ghostjob model
We need to emulate a job model for synchronized usage. Copy the /models/Ghostjob.php file from the Jobserver package to the app/models folder in your worker project. Now you can selectively add the Ghostjob evaluation to the job-function files like this:

/**
 *  Some Job Function
 *  Catch and execute jobs
 *
 *  @param  object  $job
 *  @return string
 */
function someJobFunction ($job) 
{
	return "fubar";
}

/**
 * Sync Check
 * Ghostjob will evaluate and call the job function if "Synced" and not in production.
 * Only add this evaluation to functions you allow to be called.
 */
echo Ghostjob::evaluate ('someJobFunction', $argv);

Make sure your job-function files have the right permissions, and you’re ready to go.
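The Ghostjob whitelist idea translates to other languages too; here is the same pattern expressed in JavaScript (the actual package is PHP, and `allow`/`evaluate` are illustrative names): only functions that were explicitly registered can ever be called.

```javascript
// The Ghostjob idea in JavaScript: evaluate() only runs job functions
// that were explicitly allowed, mirroring "only add this evaluation to
// functions you allow". Names are illustrative, not the PHP API.
const allowed = {};

function allow(name, fn) {
  allowed[name] = fn;
}

function evaluate(name, argv) {
  if (!(name in allowed)) return ""; // not whitelisted: stay silent
  return allowed[name](argv);
}

allow("someJobFunction", () => "fubar");

const granted = evaluate("someJobFunction", []);
const denied = evaluate("secretFunction", []);
```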


3-layer – Set up server environment

02/02/2015

Create 3 servers.
This write-down is based on an Ubuntu LEMP install, with Laravel and Gearman as framework and MQ.

Front-end

Your Front-end environment serves as the base for your apps. It should mostly contain static code.

1 a) Set up VPS
Can be any size, or part of a bigger machine. Make sure the Front-end is not on the same (virtual) machine as the 2 other layers.

1 b) Restricted access
Create new users, making sure they all belong to the same group www
– create group

    # groupadd www

– project user (will be used for deployment)

    # useradd -G www project
    # passwd project
    # mkdir /home/project
    # chown project:project /home/project

– team members (no ssh root access will be allowed)

    # useradd -G www colleague
    # passwd colleague
    # mkdir /home/colleague
    # chown colleague:colleague /home/colleague

– disable root access

    # nano /etc/ssh/sshd_config
    > PermitRootLogin no

    # service ssh restart

1 c) Create project directory root
A classic location is

    # mkdir /var/www/html/project
    # chown project:www /var/www/html/project

1 d) Set up nginx
Nginx is a lot lighter than Apache. That’s basically why.

    # apt-get update
    # apt-get install nginx

All should be running by default. Now you can point nginx to your (default) project.

    # cd /etc/nginx
    # nano sites-available/default
    > root /var/www/html/project/dist;

    # nginx -s reload

1 e) Set up nodejs
If you want to run pretty deploy scripts later on, you’ll probably want Node.

    # sudo apt-get install nodejs
    # sudo apt-get install npm

API

Your API environment is the central gate of your structure. It should be as light as possible. It also is the most likely spot for intrusion attempts, so no sensitive data should be stored on this machine.

Keep in mind
* Don’t connect to the DB on this layer. Ever.
* Close all vulnerable access points like FTP, PMA or exotic ports. The fewer entry points, the lower the risk of intrusion.

2 a) Set up VPS (link to 1)

2 b) Restricted access (link to 1)

2 c) Create API directory root
A classic location is

    # mkdir /var/www/html/api
    # chown project:www /var/www/html/api

2 d) Set up nginx
Almost the same as the public project.

    # apt-get update
    # apt-get install nginx

All should be running by default. Now you can point nginx to your api (public is the default Laravel folder).

    # cd /etc/nginx
    # nano sites-available/default
    > root /var/www/html/api/public;
    > index index.html index.htm index.php;

    # nginx -s reload

If you create a POC or Development environment, you might want to install the POC app and API on the same server. In that case, create a new entry for the app, since your default should point to the API. Once created in sites-available, enable it by creating a symbolic link:

    # cd /etc/nginx/sites-enabled
    # ln -s /etc/nginx/sites-available/app app
    # nginx -s reload

Note
Actually, setting up nginx to support Laravel requires a bit more configuration. This is a working example:

server {
	listen 80 default_server;
	listen [::]:80 default_server ipv6only=on;
	
	root /var/www/html/api/public;
	index index.php index.htm index.html;
	
	server_name localhost;
	
	try_files $uri $uri/ @rewrite;
	
	location @rewrite {
		rewrite ^/(.*)$ /index.php?_url=/$1;
	}
	
	
	
	error_page 404 /404.html;
	error_page 500 502 503 504 /50x.html;
	location = /50x.html {
		root /usr/share/nginx/html;
	}
	
	location ~ \.php$ {
		fastcgi_split_path_info ^(.+\.php)(/.+)$;
		fastcgi_pass unix:/var/run/php5-fpm.sock;
		fastcgi_index index.php;
		fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
		include fastcgi_params;
	}
}

2 e) Set up php or node
Depending on your flavour, don’t forget to set up php and/or node:

    # sudo apt-get install php5-fpm php5-cli
    # sudo apt-get install nodejs
    # sudo apt-get install npm

You might run into the /usr/bin/env: node: No such file or directory error later on. Solve this by adding a symlink.

    # ln -s /usr/bin/nodejs /usr/bin/node

If you use Laravel as API framework, make sure you have mcrypt installed.

    # apt-get install php5-mcrypt
    # ln -s /etc/php5/conf.d/mcrypt.ini /etc/php5/mods-available/mcrypt.ini
    # php5enmod mcrypt
    # service php5-fpm restart
    # sudo service nginx restart

Workers

Your Workers environment holds all business logic and a workers manager. You can easily set up multiple worker VPSes with the same settings. Scale only when required; it will save you maintenance time.

Keep in mind
* Don’t store your worker IPs in an exposed location, like the API.
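The sideways-scaling idea above can be sketched in a few lines of JavaScript: any number of identical workers drain one shared queue, and adding a worker changes nothing on the API side. This is purely illustrative, not a Gearman client:

```javascript
// Sketch of worker-layer scaling: jobs are picked up round-robin by
// N interchangeable workers from one shared queue. The doubling is a
// stand-in for real business logic.
function runWorkers(queue, workerCount) {
  const done = [];
  let next = 0;
  while (queue.length > 0) {
    const workerId = next % workerCount; // round-robin pickup
    next += 1;
    const job = queue.shift();
    done.push({ worker: workerId, result: job * 2 }); // stand-in logic
  }
  return done;
}

// Five jobs, three workers: the queue drains regardless of worker count
const results = runWorkers([1, 2, 3, 4, 5], 3);
```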

3 a) Set up VPS (link to 1)

3 b) Restricted access (link to 1)

3 c) Create your Worker stages
A sane location is /var/www/workers (don’t forget to chown)

    # mkdir /var/www/workers/development
    # chown project:www /var/www/workers/development
    # mkdir /var/www/workers/staging
    # chown project:www /var/www/workers/staging
    # mkdir /var/www/workers/stable
    # chown project:www /var/www/workers/stable

3 d) Set up nginx
Almost the same as the public project.

    # apt-get update
    # apt-get install nginx

All should be running by default. Now you can point nginx to your worker stages (development in this example).

    # cd /etc/nginx
    # nano sites-available/default
    > root /var/www/workers/development;
    > index index.html index.htm index.php;

    # nginx -s reload

3 e) Set up php
Your business logic will probably use php; don’t forget to install it on your machine, if it wasn’t built with a LEMP stack already.

    # sudo apt-get install php5-fpm php5-cli

Your machines are ready for deployment now.


Sources:
http://www.cyberciti.biz/faq/unix-create-user-account/
http://askubuntu.com/questions/335961/create-default-home-directory-for-existing-user-in-terminal
http://kb.mediatemple.net/questions/713/How+do+I+disable+SSH+login+for+the+root+user%3F#dv
https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-14-04-lts
https://github.com/joyent/node/issues/3911
https://www.digitalocean.com/community/questions/enable-mcrypt-extension-in-nginx


3-Layer – Set up repo flow

02/02/2015

This post is not about the advantages of one versioning flavour over the other. I merely want to provide a guide to set up a repo environment (the 3-layer structure depends on this) as quickly as possible. We’ll be using external services for this – let them do the heavy lifting.

We’ll be making a Superadmin as P.O.C.

Github Account

1 a) Get a Private one
Set up an organisation on Github. Make sure you can create private repositories if you’re creating a production project.

1 b) Create your 3 repos
Go to your organisation page and click the green new repository button. Make it private, and add a Readme. Do this 3 times:

    api
    Public access point for 3-layer structure
    worker
    Business logic package
    superadmin
    Internal Control Panel

1 c) Get your local copy
Download the app (Mac or Win), connect your personal profile and start adding the repos by clicking the top left + -> clone button and dropdown. Do this for all 3 repositories.

Dploy.io deployment

2 a) Get one
Go to Dploy.io and set up a basic account. We’ll be managing 3 repositories, so don’t be a chicken and pay up.

2 b) Integrate your DigitalOcean Droplets
Go to integrations (top center button) and select DigitalOcean. You will be assisted in the authentication token connection process. Really supreme.

2 c) Set up your repositories
Go back to the dashboard and click the Connect a repository button. Connect your Github profile if you haven’t already done so, select the appropriate repos and referring names for the package, and repeat this a total of 3 times.

2 d) Set up your Environments
Browse to each respective repo and set up your Development Environment by clicking the Create Environment & Server button. Name it “Development”, select Automatic and choose the master branch. After you save, you can select your DigitalOcean account, name the deployment environment and select the correct Droplet with your project root folders as defined in the Server Setup. Right under remote path, select Custom settings and enter your project user name, to prevent read/write conflicts. You can go ahead and skip to the Ready. Set. Deploy! confirmation.

If you’re getting authentication errors like “Public key authentication failed, please check if dploy.io’s public key is added properly to your server.”, make sure you have the key stored in your project user’s ~/.ssh/authorized_keys file.

You’ll be adding Post-deployment commands, a Staging and a Production environment later on.


3-Layer – Set up Frontend

02/01/2015

We’ll be making a Superadmin as P.O.C. frontend app.
The goal of this application is to manage our accounts and users. This post will cover the basic setup of the frontend logic, based on Node.js, Bower, Grunt, Backbone.js, Require.js and Bootstrap.

I’m just going to assume you own a Mac.

Manage Packages

The days when tutorials had to attach their source files at the end of the post are behind us. Thanks to composer, bower, brew and others, we can now keep packages up to date like real grown-up developers.
This post basically explains the same.

1 a) Install Node.js
Node.js is basically backend JavaScript. It’s lightweight and perfect for local builds of compile/distribution dependent projects. Since we’re taking this seriously, that’s just what we’ll do. Use brew for the install; if you haven’t installed Homebrew yet, do it first (you’ll thank me later). Add Node.js and npm with brew:

    # brew install node

1 b) Install Grunt.js
Grunt is a Task Runner which will monitor, compress and compile the project. It is a marvelous beast that makes you feel like you’re doing it for real. Install it with npm:

    # sudo npm install -g grunt-cli

1 c) Install Bower
Bower is the package manager we’ll be using today. More on bower.

    # npm install -g bower

Set up the project

For the project, we’ll be using a series of popular open source javascript packages: Require.js, Zepto or jQuery, Backbone.js with Underscore and Backgrid, Mustache and Bootstrap for some flavouring.

Move to your favorite location to create the project folder. For this post, we’ll use ~/superadmin as reference.

We’ll be using a src folder for your working files, a staging folder for your local and development environments and finally a dist folder for your staging and stable releases.
Go ahead and create them if you want.

2 a) Dependencies
Create the bower.json dependencies file.

{
	"name": "superadmin",
	"version": "0.0.1",
	"authors": [
		"Koen Betsens <koen@betsens.be>"
	],
	"dependencies": {
		"jquery": "*",
		"requirejs": "*",
		"underscore": "*",
		"backbone": "*",
		"mustache": "*",
		"bootstrap": "*",
		"backgrid": "*"
	}
}

You have to create a small bower configuration file called .bowerrc on your project root, with the following content:

{
  "directory" : "src/vendor"
}

We’ll also need a packages file for Grunt, which needs its own set of plugins. It’s named package.json:

{
	"name": "Project-superadmin",
	"version": "0.0.1",
	"title" : "Project Superadmin",
	"description": "Superadmin application using Backbone.js, Grunt & Bower over REST",
	"homepage": "",
	"bugs": "",
	"keywords": [],
	"private": true,
	"contributors": [
		"Koen Betsens <koen@betsens.be>"
	],
	"repository": {
		"type": "git",
		"url": "https://github.com/Project/superadmin"
	},
	"dependencies": {},
	"devDependencies": {
		"grunt-notify": "*",
		"grunt-newer": "*",
		"load-grunt-tasks": "*",
		"grunt-contrib-watch": "*",
		"grunt-contrib-clean": "*",
		"grunt-contrib-copy": "*",
		"grunt-contrib-concat": "*",
		"grunt-contrib-cssmin": "*",
		"grunt-contrib-csslint": "*",
		"grunt-contrib-jshint": "*",
		"grunt-contrib-uglify": "*",
		"grunt-concurrent": "*",
		"grunt-mustache": "*",
		"grunt-mustache-render": "*",
		"grunt-htmlhint": "*",
		"expect.js": "*"
	},
	"scripts": {
		"install": "bower install",
		"build": "grunt release",
		"update": "bower update; grunt release",
		"test": "grunt test:release",
		"stage": "grunt staging",
		"stable": "grunt stable"
	}
}

We can now run the install scripts and the local grunt version from the project folder:

    # npm install
    # npm install grunt --save-dev
    # bower install

2 b) Grunt task runner
The gruntfile takes care of JS sanity testing, compression of JavaScript and CSS files, concatenating of template files and the templating of the HTML files. We’ll have a Grunt Watcher updating on every save we make, and a Grunt Release on every deploy we send to the server.
Our ./Gruntfile.js is somewhat based on this tutorial. It will keep the compiled files readable for development purposes, and neatly packed for distribution.

module.exports = function (grunt)
{
	// load all grunt tasks
	require('load-grunt-tasks')(grunt);

	// Project configuration.
	grunt.initConfig(
	{
		pkg: grunt.file.readJSON('package.json'),
		
		defaults: {
			source: { dir: 'src' },
			staging: { dir: 'staging' },
			release: { dir: 'dist' }
		},
		
		/* Testing */
		jshint: {
			options: {
				asi: true, eqnull: true, jquery: true
			},
			source: ['<%= defaults.source.dir %>/js/**/*.js', '*.js', '!<%= defaults.source.dir %>/js/*-default.js']
		},
		
		/* Cleaning */
		clean: {
			staging: ['<%= defaults.staging.dir %>'],
			release: ['<%= defaults.release.dir %>']
		},
		
		/* Build files */
		mustache_render: {
			staging: {
				files:
				[{
					expand: true,
					cwd: '<%= defaults.source.dir %>/',
					src: '*.html',
					dest: '<%= defaults.staging.dir %>/',
					data: {
						title: '<%= pkg.title %>',
						description: '<%= pkg.description %>',
						version: '<%= pkg.version %>',
						files: {
							stylesheets: grunt.file.expand({cwd: 'src'}, 'css/**/*.css').map(function(path){ return {src: '/' + path}; }),
							scripts: grunt.file.expand({cwd: 'src'}, 'js/**/*.js').map(function(path){ return {src: '/' + path}; }),
							templates: '/js/templates.js'
						}
					}
				}]
			},
			release: {
				files:
				[{
					expand: true,
					cwd: '<%= defaults.source.dir %>/',
					src: '*.html',
					dest: '<%= defaults.release.dir %>/',
					data: {
						title: '<%= pkg.title %>',
						description: '<%= pkg.description %>',
						version: '<%= pkg.version %>',
						files: {
							stylesheets: [{src: '/css/styles-<%= pkg.version %>.min.css'}],
							scripts: [{src: '/js/superadmin-<%= pkg.version %>.min.js'}],
							templates: '/js/templates-<%= pkg.version %>.js'
						}
					}
				}]
			}
		},
		
		/* Compress files */
		cssmin: {
			combine: {
				files: {
					'<%= defaults.release.dir %>/css/styles-<%= pkg.version %>.min.css': [
						'<%= defaults.source.dir %>/css/**/*.css',
						'!*.combine.css',
						'!*.min.css'
					]
				}
			}
		},
		
		uglify: {
			release: {
				files: {
					'<%= defaults.release.dir %>/js/superadmin-<%= pkg.version %>.min.js': [
						'<%= defaults.source.dir %>/js/**/*.js',
						'!*.min.js'
					]
				}
			}
		},
		
		/* Copy and concatenate files */
		copy: {
			watcher: {
				files: [
					{expand: true, cwd: '<%= defaults.source.dir %>', src: ['/*.html', '*.js','css/**/*.css','js/**/*.js'], dest: '<%= defaults.staging.dir %>/', filter: 'isFile'}
				]
			},
			staging: {
				files: [
					{expand: true, cwd: '<%= defaults.source.dir %>', src: ['*.json', '*.txt', '*.ico', '*.php', 'images/**','fonts/**','css/**','js/**','!js/**-default.js','storage/**'], dest: '<%= defaults.staging.dir %>/', filter: 'isFile'},
					{expand: true, cwd: '<%= defaults.source.dir %>/vendor', src: ['*/*.js','*/*.css','*/dist/**','*/lib/**',"!**/Gruntfile.js"], dest: '<%= defaults.staging.dir %>/js/lib'}
				]
			},
			release: {
				files: [
					{expand: true, cwd: '<%= defaults.source.dir %>', src: ['*.txt', '*.ico', '*.php', 'images/**','fonts/**','storage/**'], dest: '<%= defaults.release.dir %>/', filter: 'isFile'},
					{expand: true, cwd: '<%= defaults.source.dir %>/vendor', src: ['*/*.js','*/*.css','*/dist/**','*/lib/**',"!**/Gruntfile.js"], dest: '<%= defaults.release.dir %>/js/lib'}
				]
			}
		},
		
		mustache: {
			staging : {
				src: '<%= defaults.source.dir %>/templates/',
				dest: '<%= defaults.staging.dir %>/js/templates.js',
				options: {
					prefix: 'Templates = ',
					postfix: ';'
				}
			},
			release : {
				src: '<%= defaults.source.dir %>/templates/',
				dest: '<%= defaults.release.dir %>/js/templates-<%= pkg.version %>.js',
				options: {
					prefix: 'Templates = ',
					postfix: ';'
				}
			}
		},
		
		/* Balance processes */
		concurrent: {
			staging: ['mustache_render:staging', 'copy:staging', 'mustache:staging'],
			release: ['mustache_render:release', 'cssmin', 'copy:release', 'uglify', 'mustache:release'],
			watch: ['newer:mustache_render:staging', 'newer:copy:staging', 'mustache:staging'],
			test: ['jshint:source']
		},
		
		/* Watch the beast */
		watch: {
			options: {cwd: '<%= defaults.source.dir %>'},
			files: ['*.html', '*.js','css/**/*.css','js/**/*.js','templates/**/*.mustache'],
			tasks: ['concurrent:watch']
		}
	});
	
	// Register tasks
	grunt.registerTask('staging', ['concurrent:test', 'clean:staging', 'concurrent:staging']);
	grunt.registerTask('release', ['concurrent:test', 'clean:release', 'concurrent:release']);
	grunt.registerTask('watcher', ['watch']);
	grunt.registerTask('default', ['release']);
};

2 c) Config files
We’ll be working with app and authentication keys later on, so we need config files tailored to each environment. Be warned, this tends to be a bit tricky, as some developers tend to overwrite others’ local configs in a repo environment.
Create the default file as ./src/js/config-default.js:

define({
	appid : "your-app-id",
	apiurl: "https://api.environment",
	authurl: "https://api.environment/auth"
});

As a preventive measure, add this line to your .gitignore file before adding the config file:
src/js/config.js

Now copy the default as your local version:

    # cp src/js/config-default.js src/js/config.js
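The default-plus-local-copy pattern boils down to this (a hypothetical sketch; the keys mirror config-default.js, and the override values are made up): the repo ships defaults, and each environment overrides only what differs.

```javascript
// Sketch of the config-default pattern: defaults live in the repo,
// the untracked local config overrides only what differs per machine.
const defaults = {
  appid: "your-app-id",
  apiurl: "https://api.environment",
  authurl: "https://api.environment/auth",
};

// A developer's local, git-ignored config (hypothetical values)
const local = { apiurl: "http://localhost:8000" };

// Local values win; everything else falls back to the defaults
const config = Object.assign({}, defaults, local);
```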

Since there’s quite a bit of JavaScript flowing around, we’re going to use Require.js to only load what the project needs, when it’s needed. Actually, for complex Bootstrap model structures, there’s no ad hoc alternative.
We store the require.js config file in ./src/js/main.js:

/**
 * Require dependencies
 */
require.config(
{
	baseUrl: '/js/',
	paths: 
	{
		'jquery': 'lib/jquery/jquery',
		'underscore': 'lib/underscore/underscore',
		'backbone': 'lib/backbone/backbone',
		'bootstrap': 'lib/bootstrap/dist/js/bootstrap',
		'mustache': 'lib/mustache/mustache',
		'backgrid': 'lib/backgrid/lib/backgrid'
	},
	shim: 
	{
		'bootstrap': {
			deps: ['jquery'],
			exports: 'bootstrap'
		},
		'underscore': {
			exports: '_'
		},
		'backbone': {
			deps: ['underscore', 'jquery', 'mustache'],
			exports: 'backbone'
		},
		'backgrid': {
			deps: ['jquery','backbone','underscore'],
			exports: 'Backgrid'
		}
	}
});

/**
 * Set up the global project name	
 */
var Superadmin;

require(
	['backbone', 'bootstrap'],
	function(Backbone, bootstrap)
	{	
		// Start
	}
);

2 d) Create index and run
Since most of the code resides in the Bower, Grunt and Require packages, the index file is exceedingly elegant.
The ./src/index.html file contains:

<!DOCTYPE html>
<html lang="en">
	<head>
		<meta charset="utf-8">
		<meta name="description" content="{{description}}">
		<title>{{title}}</title>
		
		<link rel="icon" href="favicon.ico">
		{{#files.stylesheets}}
		<link href="{{{src}}}" rel="stylesheet">
		{{/files.stylesheets}}
		
		<!-- Vendor CSS -->
		<link href="/js/lib/bootstrap/dist/css/bootstrap.min.css" rel="stylesheet">
		<link href="/js/lib/backgrid/lib/backgrid.css" rel="stylesheet">
	</head>

	<body class="dashboard">
	
		<div class="navbar navbar-inverse navbar-fixed-top" role="navigation">
			<div class="container-fluid">
				<div class="navbar-header">
					<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target=".navbar-collapse">
						<span class="sr-only">Toggle navigation</span>
						<span class="icon-bar"></span>
						<span class="icon-bar"></span>
						<span class="icon-bar"></span>
					</button>
					<a class="navbar-brand" href="/">
						Project Superadmin
					</a>
				</div>
			</div>
		</div>

		<div class="container-fluid">
			<div class="row">
				<div id="sidebar" class="col-sm-3 col-md-2 sidebar"></div>
				<div id="page" class="col-sm-9 col-sm-offset-3 col-md-10 col-md-offset-2 main"><h1>Hello World</h1></div>
			</div>
		</div>
		
		<!-- Templates -->
		<script src="{{{files.templates}}}" type="text/javascript"></script>
		
		<!-- Require -->
		<script data-main="/js/main" src="/js/lib/requirejs/require.js"></script>

	</body>
</html>

If everything went right, the only thing left to run is:

    # grunt staging

While you’re developing on the project, you might want to combine the grunt staging command with a running grunt watcher, to push on-the-fly changes to your local staging folder.

If you don’t have your local environment running on mac, this short post will help you out: a quick guide to setting up your local Mac Environment

Note

It is advised to keep your repo as clean as possible by performing the actual grunt release when a deploy is finished on the target server. To make this work, you need a similar Node.js & Grunt environment on your target machine, some .gitignore tweaks and a post-script running in your deploy software.

.gitignore

src/js/config.js
src/vendor
node_modules
dist
staging

Dploy.io – Post-deployment commands

    # npm update
    # bower update
    # grunt release

Go on and give the Superadmin a swirl.


Sources:
http://shapeshed.com/setting-up-nodejs-and-npm-on-mac-osx/
http://coolestguidesontheplanet.com/install-gruntjs-osx-10-9-mavericks/
http://www.html5rocks.com/en/tutorials/tooling/supercharging-your-gruntfile/


3-Layer – Set up API

01/31/2015

The API is the gatekeeper of our structure. To keep intrusions at bay and avoid complex scaling of this layer, we’ll approach the install in a stoic manner.
The main goal of the API is the handling of JSON based request/response routing between the business logic and the app. Some static HTML files will be provided for the Authentication flow in a later post.

Laravel Framework

Laravel comes packed with API and template-friendly tools, making it a perfect PHP candidate for this post. Keep in mind that a lightweight router can easily be rebuilt in Node.js once your project is stable – Laravel will help you a lot getting to that point, though.
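For a taste of what that Node rebuild could look like, here is a minimal dispatch-table sketch, mirroring the /1/hello route used later in this post. The handlers and payloads are illustrative, not a production router:

```javascript
// Sketch of a "super light" Node-style router: a plain lookup table
// keyed on method + path, with a 404 fallback. Versioned paths mirror
// the routes-v1 layout.
const routes = {
  "GET /1/version": () => ({ version: "0.0.1" }),
  "GET /1/hello": () => ({ response: "Hello, World!" }),
};

function route(method, path) {
  const handler = routes[method + " " + path];
  if (!handler) return { status: 404, body: { error: "not found" } };
  return { status: 200, body: handler() };
}

// Usage: a known route answers, an unknown one falls through to 404
const hello = route("GET", "/1/hello");
const missing = route("GET", "/1/nope");
```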

1 a) Laravel Install
Go to your projects folder

    # cd ~/api

Start with installing Composer.

    # curl -sS https://getcomposer.org/installer | php
    # mv composer.phar /usr/local/bin/composer 

If you’re the first to populate the API repo, install Laravel in a temp folder (outside the projects folder) following the install guide. Once finished, copy the files into your already cloned API repository. Make sure you don’t overwrite the git files; you may merge the contents of the readme files if you like.
Make sure you have (at least) the following records in your .gitignore file:

/vendor
/bootstrap/compiled.php
/app/storage
composer.phar
composer.lock
.DS_Store

1 b) Laravel dependencies
This applies to both “fresh” installs and cloned repos.
Make sure your app/storage directory and its subdirectories are writable. If it’s empty, populate it:

    # mkdir app/storage
    # cd app/storage
    # mkdir cache logs meta sessions views
    # cd ../..
    # chmod -R 777 app/storage

Make sure you have php mcrypt installed. If not, good luck ( this worked for me, though).

Now you can run the following to populate the dependency packages:

    # composer install

1 c) View your install
If you haven’t set up your custom url environment in mac, read the quick guide.
If everything went well, you should see the Laravel start screen.

API Routes

Setting up Laravel as an API router is a breeze and already well documented in their documentation and numerous good tutorials, so we’ll skip to the actual “Hello World” demonstration.

2 a) Headers
Start with adding the correct headers in app/filters.php

App::before(function($request)
{
	# API Headers
	header('Access-Control-Allow-Origin: *');
	header('Access-Control-Allow-Methods: GET, PUT, POST, DELETE, OPTIONS');
	header('Access-Control-Allow-Headers: Origin, Content-Type, Accept, Authorization, X-Request-With');
	header('Access-Control-Allow-Credentials: true');
	header('Content-Type: application/json');
});

2 b) Routes versions
Let’s think ahead and make sure we can add multiple versions to our API. Since Laravel doesn’t mind if you customize the structure a bit, create an app/routes directory. Now create app/routes/routes-v1.php with the following contents:

<?php
/**
 * Guest endpoints. No OAuth2 required
 */
Route::group (array('prefix'=> '1'), function() 
{
    # System
    Route::get('version', 'ExampleController@apiversion');
    
    # Hello
    Route::get('hello', 'ExampleController@hello');
    
});

And add the version file to your default routes.php file:

/**
 *	Get the version files
 */
include 'routes/routes-v1.php';

Now we just have to add the controller actions. Create app/controllers/ExampleController.php

<?php

class ExampleController extends BaseController
{
	/**
	 *	Hello World
	 */
	public function hello ()
	{
		# Default start of project
		return Response::json (["response"=> "Hello, World!"]);
	}
	
	
	/**
	 *	Get the API version
	 */
	public function apiversion ()
	{
		# Default version on config file
		return Response::json (Config::get ('app.version'));
	}
}

If your local server is set up correctly, http://project-api.local/1/hello should now return a nicely formatted “Hello, World!” response.

Nginx note
Laravel comes with .htaccess out of the box, which doesn’t help your Nginx install a lot. Make sure your default file is set up as proposed in the Server Setup post.


Sources:
http://theprogrammer.co.za/wp/2014/02/11/php-composer-installation-on-ubuntu-12-04-2/
http://craftcms.stackexchange.com/questions/1611/mcrypt-is-required-on-os-x-mavericks-10-9-4
https://www.digitalocean.com/community/questions/enable-mcrypt-extension-in-nginx
http://stackoverflow.com/questions/21091405/nginx-configuration-for-laravel-4
