Switching from REST to Meteor, DDP and Apollo

After creating a bunch of web applications with NodeJS and Express as a REST backend and Angular on the front-end, we had the opportunity to take on a client project that we would develop in MeteorJS. As described in a previous post, Meteor greatly simplifies development: it avoids the tangle of node versions, vagrant vm provisioning, grunt/gulp tasks and all the other work that has to be done before your application is ready to be deployed to production.

Then came DDP: the real-time API for web applications that ships with Meteor, the publish/subscribe protocol I’ve grown to both love and hate. DDP has served us well in making our applications reactive, with instantly updating news feeds and real-time chats, but it has also caused us a lot of pain: it is a performance hog and its responses cannot be cached. For the most data-heavy parts of the application we had to switch back to regular, cacheable HTTP requests to get the data to the multitude of users efficiently.

The recent introduction of Apollo comes as a natural successor to DDP. Built on the GraphQL query language designed by Facebook, Apollo is a module that will replace the data layer of Meteor, giving us both the benefits of (optionally) real-time updating data and a flexible, uniform way of aggregating data from different resources, be it SQL, MongoDB or any other microservice that can provide us with data.

Please note that the current version of Apollo does not operate over a websocket like DDP does, but uses plain old HTTP and polling to query for data and updates. This allows us to cache our data again, and only make it real-time where needed. We have found that the common case for data is that it should be fast, and that real-time updates are not always necessary as a default.
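
For example, the Apollo client lets you opt into polling per query. A minimal sketch of how that looks (API details differ slightly per Apollo version, and upcomingEvents is a made-up query):

import ApolloClient, { createNetworkInterface } from 'apollo-client';
import gql from 'graphql-tag';

const client = new ApolloClient({
    networkInterface: createNetworkInterface({ uri: '/graphql' })
});

// Poll over plain HTTP every 10 seconds instead of holding a websocket open;
// the responses stay ordinary HTTP requests and therefore remain cacheable.
client.watchQuery({
    query: gql`{ upcomingEvents { title startsAt } }`,
    pollInterval: 10000
}).subscribe({
    next: ({ data }) => console.log(data.upcomingEvents),
    error: (err) => console.error(err)
});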

Even though we are only just starting our first projects with GraphQL, we can already see that resolving the necessary data on the backend yourself, in a structured and predictable manner, helps us get data from any source to the client quickly, aggregated in a way that was impossible with REST.
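
To give a rough idea of what that backend resolving looks like, here is a sketch using the plain graphql npm package; the field names and data sources are made up for illustration:

import { graphql, buildSchema } from 'graphql';

// One schema that aggregates fields which may come from completely different backends.
const schema = buildSchema(`
    type User { name: String, unreadMessages: Int }
    type Query { user(id: ID!): User }
`);

// Stand-ins for real data sources (SQL, MongoDB, another microservice, ...).
const fetchNameFromSql = (id) => Promise.resolve('Alice');
const countMessagesInMongo = (id) => Promise.resolve(3);

// The resolver decides per field where the data comes from; graphql-js awaits the promises.
const rootValue = {
    user: ({ id }) => ({
        name: fetchNameFromSql(id),
        unreadMessages: countMessagesInMongo(id)
    })
};

graphql(schema, '{ user(id: "42") { name unreadMessages } }', rootValue)
    .then((result) => console.log(result.data));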

Using gitstats to get an overview of git activity in a repository

I recently wanted to get some more insight into a git project that has been running for a long time. There is a tool called gitstats that generates some nice statistics, such as the busiest commit moments, the number of commits per user and more. The documentation is a bit finicky, but you can easily install it via homebrew:

brew install gitstats

and run it from inside a git repository folder as follows:

gitstats . gitstats

Now open the index.html inside the freshly created gitstats folder and watch the juicy git stats in glorious web 1.0 fashion.

[Image: the generated gitstats report]

Full-text document search in nodejs using elasticsearch and the elasticsearch-mapper-attachments plugin

In one of our recent Meteor applications we included a full-text document search feature using Elasticsearch. Elasticsearch creates an index of documents based on their metadata and plain text content. For this feature we also needed to support PDF and office file types (doc, docx, pptx etc.). To accommodate this, Elasticsearch has a plugin called elasticsearch-mapper-attachments.

Because we wanted to run Elasticsearch from a Docker image, we decided to extend the elasticsearch:2.3.3 image and add the plugin on top of it. The plugin takes care of transforming the documents into plain text using Apache Tika; that plain text is then used to create a document in Elasticsearch.

FROM elasticsearch:2.3.3

RUN bin/plugin install mapper-attachments

EXPOSE 9200 9300
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]

My colleague Bryan pushed it to Docker Hub for anyone to use under the tag bryantebeek/elasticsearch-mapper-attachments:2.3.3. We can now provision our server with this Docker image using Ansible, and configure the volumes to ensure the indexes created by Elasticsearch are persisted on the Docker host's disk.

---
- name: elasticsearch | docker | start/update elasticsearch
  docker_container:
    name: elasticsearch
    image: bryantebeek/elasticsearch-mapper-attachments:2.3.3
    state: started
    restart_policy: always
    volumes:
      - /var/data/elasticsearch:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
  tags: elasticsearch

The Elasticsearch service is now running and ready to accept connections from our node application code. Before we can index any documents, we have to create the index itself. We use the elasticsearch npm module to set up the connection to Elasticsearch:

import elasticsearch from 'elasticsearch';

let client = new elasticsearch.Client({
    host: "localhost:9200",
    log: ["error", "warning"]
});

Now we can create an index. We call it “files” and set the file property to be of type “attachment” to trigger the use of the mapper plugin:

client.indices.create({index: 'files'})
.then(() => {
    // create a mapping for the attachment
    return client.indices.putMapping({
        index: 'files',
        type: 'document',
        body: {
            document: {
                properties: {
                    file: {
                        type: 'attachment',
                        fields: {
                            content: {
                                type: 'string',
                                term_vector: 'with_positions_offsets',
                                store: true
                            }
                        }
                    }
                }
            }
        }
    });
});

Whenever a document is now uploaded in the application, we read it into memory, transform it to base64 and use the same Elasticsearch client to create a new entry in the “files” index:

const fse = require('fs-extra'); // assuming fs-extra here; the plain fs module works just as well

const fileContents = fse.readFileSync('some/uploaded/filepath');
const fileBase64 = new Buffer(fileContents).toString('base64');

client.create({
    index: 'files',
    type: 'document',
    id: 'somefileid',
    body: {
        file_id: 'somefileid',
        file: {
            _content: fileBase64
        }
    }
})
.catch((err) => {
    console.error('Error while creating elasticsearch record', err);
});

The document is now added to Elasticsearch and ready to be returned as a search result. When the user uses the search functionality, a query is sent through the Elasticsearch client and the results are returned to the front-end:

client.search({
    q: query,
    index: 'files'
}, (error, result) => {
    if (error) return done(error);
    console.log(result.hits);
});

The hits object in the result contains an array of hits sorted by search score, which can then be rendered however you please!
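
A minimal sketch of what handling those hits could look like (the exact fields depend on your mapping; file_id is the property we stored above):

// map the raw Elasticsearch hits to something the front-end can render
const formatHits = (result) =>
    result.hits.hits.map((hit) => ({
        fileId: hit._source.file_id,
        score: hit._score
    }));

// inside the search callback:
// console.log(formatHits(result));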

Deploy different environment variable configurations with the same docker image using combine in ansible 2.0

As described in an earlier post about improving performance in Meteor applications, we separated the synced-cron jobs out of a Meteor application by deploying the same Docker image twice, distinguished by a special environment variable.
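
Inside the application, that variable simply decides whether the cron jobs are started. Roughly like this, assuming the percolate:synced-cron package:

// server startup code
Meteor.startup(function () {
    // Only the container deployed with CRON_ENABLED=1 runs the cron jobs;
    // the regular webserver containers skip this entirely.
    if (process.env.CRON_ENABLED === '1') {
        SyncedCron.start();
    }
});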

In the Ansible file below we can see two Ansible docker tasks that both run the same Docker image. The environment settings are loaded from the app.env group var. Using the Ansible combine filter that was introduced in Ansible 2.0, we can combine the configured group vars with one special environment variable, CRON_ENABLED, which is set to 0 or 1 depending on whether we are deploying a cron instance or a webserver instance of the same Docker container. The when condition ensures that only one of these two tasks is run on each individual server.

- name: docker | start application
  docker:
    name: app
    image: "{{ docker.registry.organization }}/{{ docker.registry.project }}:{{ tag }}"
    username: "{{ docker.registry.username }}"
    email: "{{ docker.registry.email }}"
    password: "{{ docker.registry.password }}"
    state: reloaded
    restart_policy: always
    pull: always
    ports:
      - "{{ansible_eth1.ipv4.address}}:3000:3000"
    env: "{{ app.env|combine({'CRON_ENABLED': 0}) }}"
  when: "'appservers' in group_names"
  tags: app

- name: docker | start application
  docker:
    name: app
    image: "{{ docker.registry.organization }}/{{ docker.registry.project }}:{{ tag }}"
    username: "{{ docker.registry.username }}"
    email: "{{ docker.registry.email }}"
    password: "{{ docker.registry.password }}"
    state: reloaded
    restart_policy: always
    pull: always
    ports:
      - "{{ansible_eth1.ipv4.address}}:3000:3000"
    env: "{{ app.env|combine({'CRON_ENABLED': 1}) }}"
  when: "'cronservers' in group_names"
  tags: app

Building recordfairs.nl, a web scraper application in nodejs with a postgres database

As I’ve taken up a modest vinyl collecting hobby, I’ve been visiting some local record fairs. While searching for them online I found numerous websites from different labels, brands and stores that list upcoming record fairs. After wading through all the different sites for a third time I thought: “wouldn’t it be great if there was just one big aggregated table of all these record fairs in a nice and simple mobile-friendly layout?” and with that, the idea of recordfairs.nl was born.

As I wanted the operational costs to be as low as possible, I decided to host recordfairs on Heroku using the free plan. I was glad I could use nodejs, and for the free database plan I had to use PostgreSQL. For the node part I generated a project using express-generator, opted for some jade templates, used sequelize models for the Postgres integration and the ever-so-amazing cheerio for scraping the different websites. The front-end is basic Bootstrap with a header image of a record player that I shot a while ago.

[Image: recordfairs.nl desktop layout]
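
The central sequelize model is little more than the columns you will see the scrapers below filling in. A sketch of how it could be defined (the field types are my assumption):

// models/fair.js
module.exports = function (sequelize, DataTypes) {
    return sequelize.define('Fair', {
        date: DataTypes.DATE,
        startDate: DataTypes.DATE,
        endDate: DataTypes.DATE,
        city: DataTypes.STRING,
        country: DataTypes.STRING,
        location: DataTypes.STRING,
        organiser: DataTypes.STRING,
        origin: DataTypes.STRING
    });
};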

As you might have expected, scraping the data, getting it into a unified format and avoiding duplicate entries in the database was the most exciting part of this application. Especially the dates, as they came in different text formats on almost every site, some even just listing the starting and finishing times as two-digit hours.

The request-promise npm package was a great help in making the site scraping asynchronous and chaining it all together. Each of the scrapers exposes a promise:

"use strict";

let cheerio = require('cheerio');
let S = require('string');
let moment = require('moment');
let rp = require('request-promise');
let Promise = require('bluebird');
let debug = require('debug')('myappdebugtag');

module.exports = function () {
    var options = {
        uri: 'http://scrapesource',
        transform: function (body) {
            return cheerio.load(body);
        }
    };

    return rp(options)
        .then(function($) {
            let dataTable = $('table').first();
            return dataTable.find('tbody tr').toArray().map(function(el) {
                let row = $(el);
                let dateInput = S(row.find('td:nth-child(2)').text()).trim().s
                if(!dateInput) return null;

                let date = moment(dateInput, 'D-MMM', 'nl');

                let timeInput = S(row.find('td:nth-child(6)').text()).trim().s
                let times = timeInput.split('-');
                let startDate;
                let endDate;
                if(timeInput && times.length > 1) {
                    let startTimeString = times[0].indexOf('.') > 0 ? times[0].replace('.',':') : times[0] + ':00';
                    let endTimeString = times[1].indexOf('.') > 0 ? times[1].replace('.',':') : times[1] + ':00';
                    startDate = moment(`${date.format('MM-DD-YYYY')} ${startTimeString}`, 'MM-DD-YYYY HH:mm').toDate();
                    endDate = moment(`${date.format('MM-DD-YYYY')} ${endTimeString}`, 'MM-DD-YYYY HH:mm').toDate();
                }
                let fair = {
                    date: date.toDate(),
                    startDate: startDate,
                    endDate: endDate,
                    city: row.find('td:nth-child(3)').text(),
                    country: row.find('td:nth-child(4)').text(),
                    location: S(row.find('td:nth-child(5)').text()).trim().s,
                    organiser: row.find('td:nth-child(7)').text(),
                    origin: 'scrapesource'
                }
                return fair;
            });
        })
        .catch(function(err) {
            console.log(err);
        });
}

I created an endpoint that initiates the scraping process; it imports and executes the above promise-returning scrapers and handles the resulting data in the then handler. Just before storing the data in the database, I check whether there is already a fair for that date and city:

function handleFairs(fairs) {
    fairs.forEach(function(fair) {
        if(!fair) {
            return;
        }

        models.Fair.findOne({
            where: {
                date: fair.date,
                city: fair.city
            }
        }).then(function(existingFair) {
            if (existingFair) {
                console.log('existing found', existingFair.id);
            } else {
                models.Fair.create(fair).then( () => {
                    console.log('added fair', fair.city, fair.date)
                })
                .catch( (err) => {
                    console.log('an error occurred')
                });
            }
        })
    })
}
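
The triggering endpoint itself boils down to running all the scraper promises and feeding each resulting array into handleFairs. A rough sketch, assuming an express router and that every scraper module exports a promise-returning function like the one shown earlier (paths and route name are illustrative):

let express = require('express');
let Promise = require('bluebird');

let router = express.Router();

// each scraper module resolves to an array of (possibly null) fairs
let scrapers = [
    require('../scrapers/scrapesource')
    // ... the other scrapers ...
];

router.get('/scrape', function (req, res) {
    Promise.all(scrapers.map(function (scrape) { return scrape(); }))
        .then(function (results) {
            // every result is one scraper's array of fairs;
            // handleFairs is the dedup function shown above
            results.forEach(handleFairs);
            res.send('scraping started');
        })
        .catch(function (err) {
            res.status(500).send(err.message);
        });
});

module.exports = router;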

Pretty straightforward, but a fun experience nonetheless. As you can see on recordfairs.nl, the result isn’t perfect, but it’s still a lot better than visiting all of the urls separately. As a small addition, I’ve also created a small cordova app for Android using ionic that displays the same information and allows you to do a client-side search through the retrieved data. Check it out for free: Record Fairs on the Android Play Store.

[Image: Record Fairs mobile app]

Wireless outside weather station using particle photon, arduino and nodejs

The Particle Photon is a pretty awesome little Arduino-like device with onboard wifi. When I got my set of Photons I decided to extend my Raspberry Pi Arduino weather station with some outside temperature readings, using the sensor that came with the Particle maker kit. I wanted to mount the sensor outside the window of my shed and have it connect to the wifi to transmit its data.

Setting up the Particle was done through the app on my phone; there is good getting-started documentation available for that. Through the build.particle.io web interface I loaded the following Arduino/Particle snippet, which exposes the current outside temperature as a double variable named temperature that can be retrieved through the Particle API later. The temperature value is read using the OneWire spark library, which works pretty well with the “sealed, waterproof version of the DS18B20” from the maker kit.

// Use this include for the Web IDE:
#include "OneWire.h"

// Use this include for Particle Dev where everything is in one directory.
// #include "OneWire.h"

// This library can be tested on the Core/Photon by running the below
// DS18x20 example from PJRC:

// OneWire DS18S20, DS18B20, DS1822 Temperature Example
//
// http://www.pjrc.com/teensy/td_libs_OneWire.html
//
// The DallasTemperature library can do all this work for you!
// http://milesburton.com/Dallas_Temperature_Control_Library

OneWire ds(D0);  // on pin D0 (a 4.7K resistor is necessary)

double temperature = 0.0;

void setup(void) {
  Serial.begin(57600);
  Spark.variable("temperature", &temperature, DOUBLE);
}

void loop(void) {
  byte i;
  byte present = 0;
  byte type_s;
  byte data[12];
  byte addr[8];
  float celsius, fahrenheit;

  if ( !ds.search(addr)) {
    Serial.println("No more addresses.");
    Serial.println();
    ds.reset_search();
    delay(250);
    return;
  }

  Serial.print("ROM =");
  for( i = 0; i < 8; i++) {
    Serial.write(' ');
    Serial.print(addr[i], HEX);
  }

  if (OneWire::crc8(addr, 7) != addr[7]) {
      Serial.println("CRC is not valid!");
      return;
  }
  Serial.println();

  // the first ROM byte indicates which chip
  switch (addr[0]) {
    case 0x10:
      Serial.println("  Chip = DS18S20");  // or old DS1820
      type_s = 1;
      break;
    case 0x28:
      Serial.println("  Chip = DS18B20");
      type_s = 0;
      break;
    case 0x22:
      Serial.println("  Chip = DS1822");
      type_s = 0;
      break;
    default:
      Serial.println("Device is not a DS18x20 family device.");
      return;
  }

  ds.reset();
  ds.select(addr);
  ds.write(0x44, 1);        // start conversion, with parasite power on at the end

  delay(1000);     // maybe 750ms is enough, maybe not
  // we might do a ds.depower() here, but the reset will take care of it.

  present = ds.reset();
  ds.select(addr);
  ds.write(0xBE);         // Read Scratchpad

  Serial.print("  Data = ");
  Serial.print(present, HEX);
  Serial.print(" ");
  for ( i = 0; i < 9; i++) {           // we need 9 bytes
    data[i] = ds.read();
    Serial.print(data[i], HEX);
    Serial.print(" ");
  }
  Serial.print(" CRC=");
  Serial.print(OneWire::crc8(data, 8), HEX);
  Serial.println();

  // Convert the data to actual temperature
  // because the result is a 16 bit signed integer, it should
  // be stored to an "int16_t" type, which is always 16 bits
  // even when compiled on a 32 bit processor.
  int16_t raw = (data[1] << 8) | data[0];
  if (type_s) {
    raw = raw << 3; // 9 bit resolution default
    if (data[7] == 0x10) {
      // "count remain" gives full 12 bit resolution
      raw = (raw & 0xFFF0) + 12 - data[6];
    }
  } else {
    byte cfg = (data[4] & 0x60);
    // at lower res, the low bits are undefined, so let's zero them
    if (cfg == 0x00) raw = raw & ~7;  // 9 bit resolution, 93.75 ms
    else if (cfg == 0x20) raw = raw & ~3; // 10 bit res, 187.5 ms
    else if (cfg == 0x40) raw = raw & ~1; // 11 bit res, 375 ms
    //// default is 12 bit resolution, 750 ms conversion time
  }
  celsius = (float)raw / 16.0;
  fahrenheit = celsius * 1.8 + 32.0;
  
  temperature = celsius;
  
  Serial.print("  Temperature = ");
  Serial.print(celsius);
  Serial.print(" Celsius, ");
  Serial.print(fahrenheit);
  Serial.println(" Fahrenheit");
}

I put the Photon and wiring in a plastic case, plugged a usb charger into the shed and fed the temperature sensor through the window of the shed:

[Image: the Photon in its case and the mounted weather sensor]

The Particle Photon is now sending its temperature data to the Particle cloud, and I can access it from my nodejs weather script on the Raspberry Pi using the PARTICLE_DEVICE_ID and PARTICLE_ACCESS_TOKEN from the Particle cloud API as follows:

var request = require('request');
var fs = require('fs');

function logOutsideTemperature() {
    var url = ['https://api.particle.io/v1/devices/',
        PARTICLE_DEVICE_ID,
        '/temperature?access_token=',
        PARTICLE_ACCESS_TOKEN].join('');
    request(url, function(error, response, body) {
        if (!error && response.statusCode == 200) {
            var temperatureFloat = JSON.parse(body).result;
            if (temperatureFloat == "-0.0625" || parseFloat(temperatureFloat) >100) return;
            var logEntry = new Date().toString() + ';' + temperatureFloat + '\n';
            fs.appendFile('temperatures-outside.txt', logEntry, function(err) {
                //
            });
        }
    });
}

For some reason there are sometimes strange Celsius values popping up, which I’ve excluded using some simple checks. To keep the data management easy, I’m writing the temperature values to a simple text file and retrieving the data in the web interface the same way I did in the previous version of the raspweather node API. The weather graph now shows both the outside and inside temperature:

[Image: temperature graph with inside and outside readings]
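
Reading the log back for the graph endpoint is just a matter of splitting the lines on the ';' separator used above. A minimal sketch, with a made-up route name:

var express = require('express');
var fs = require('fs');

var router = express.Router();

router.get('/temperatures/outside', function (req, res) {
    fs.readFile('temperatures-outside.txt', 'utf8', function (err, contents) {
        if (err) return res.status(500).send(err.message);
        // each line looks like "<date string>;<temperature>"
        var readings = contents.trim().split('\n').map(function (line) {
            var parts = line.split(';');
            return { date: parts[0], temperature: parseFloat(parts[1]) };
        });
        res.send(readings);
    });
});

module.exports = router;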

Translating Meteor apps: UI, user generated content and emails

So, one of our Meteor apps is now fully translated, including all UI, notifications and emails being sent to the clients. We’ve made heavy use of tap-i18n to achieve this, along with meteorhacks:ssr to render dynamic Blaze email templates on the backend.

ui

tap-i18n makes it incredibly easy to translate UI labels. We add translation files in an i18n folder in the root of the project, as simple json files:

{
  "activity-button-archive": "Mark as completed",
  "activity-button-edit": "Edit",
  "activity-button-next_page": "Next step",
  "activity-form-description-label": "Describe the activity"
}

Inside the Blaze component, we then render the label using the tap-i18n helper with an _ like so:

{{# if activity.archived }}
    <a class="pu-button" data-activity-unarchive>{{_ 'activity-button-unarchive'}}</a>
{{ else }}
    <a class="pu-button" data-activity-archive>{{_ 'activity-button-archive'}}</a>
{{/ if }}

All that’s left now is configuring which language should be loaded on the client. As documented in the tap-i18n documentation, this can be done in the following way while bootstrapping the application:

getUserLanguage = function () {
  // Put here the logic for determining the user language

  return "nl";
};

if (Meteor.isClient) {
  Meteor.startup(function () {
    Session.set("showLoadingIndicator", true);

    TAPi18n.setLanguage(getUserLanguage())
      .done(function () {
        Session.set("showLoadingIndicator", false);
      })
      .fail(function (error_message) {
        // Handle the situation
        console.log(error_message);
      });
  });
}

emails

To render translated emails and send them to users, we create separate email templates with a predictable filename, e.g. reset_password.en.html and reset_password.nl.html, their contents being translated versions of the same email, including placeholders:

<!-- private/emails/reset_password.en.html -->
<p>
    Hi {{ user.profile.name }}!

    You told us you wanted to reset your password. That's okay! Just click the
    link below:

    <a href="{{ url }}">{{ url }}</a>
</p>

Before we can use the meteorhacks:ssr module to render a template, we have to precompile the html template and assign it to a key that we can use later. We index all of the emails and locales in a couple of variables and loop through these to pre-compile all of the templates:

// bootstrap.js
var locales = ['en', 'nl'];
var templates = [
    'reset_password',
    //... rest of the emailtemplates ...
];

// pre-compile every template/locale combination under a predictable key,
// e.g. 'email-reset_password-nl'
templates.forEach(function (type) {
    locales.forEach(function (locale) {
        SSR.compileTemplate(
            'email-' + type + '-' + locale,
            Assets.getText('private/emails/' + type + '.' + locale + '.html')
        );
    });
});

When configuring the emails of the accounts package, we use the meteorhacks:ssr module to render the correct template with a data object containing the variable fields:

/**
 * Password Reset Email
 */
Accounts.emailTemplates.resetPassword.html = function(user, url) {
    return SSR.render('email-reset_password-' + User(user).getLocale(), {
        user: user,
        url: url.replace('/#', ''),
        baseUrl: Meteor.absoluteUrl()
    });
};

Upgrading an old ruby 2.0.0 app to 2.2.4 for heroku deployment

So Heroku urged me to update my old ruby project Wisdoms.nl from version 2.0.0 to 2.2.4. It should be as easy as updating the Gemfile with the new version of ruby:

ruby "2.2.4"

… but unfortunately it wasn’t. I kept getting errors about json 1.8.0 failing to install, telling me to get gem install json -v '1.8.0' running before I could finish the bundle update that would properly update the Gemfile.lock file.

After searching online for a while I found a number of suggestions to upgrade the version of json to 1.8.1. While that didn’t work, upgrading the version to 1.8.3 did the trick and allowed me to run a bundle update without failures. I added the json 1.8.3 gem line to my Gemfile to make it work:

ruby "2.2.4"
gem 'json', '1.8.3'

Sticky session loadbalancing for meteor using nginx-sticky-module-ng

We found that the third-party nginx-sticky-module-ng plugin does quite a good job of distributing the load among servers while keeping each user session on the same server. Unfortunately, this means you have to build nginx on the loadbalancer from source and compile the nginx-sticky-module-ng in during the installation. We use the following Ansible step to install nginx including the sticky module:

- name: nginx | install from source
  shell: |-
    wget http://nginx.org/download/nginx-1.8.0.tar.gz
    tar -xzf nginx-1.8.0.tar.gz
    cd nginx-1.8.0
    wget https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng/get/1.2.6.tar.gz
    tar -xzf 1.2.6.tar.gz
    ./configure --prefix=/etc/nginx \
        --sbin-path=/usr/sbin/nginx \
        --conf-path=/etc/nginx/nginx.conf \
        --error-log-path=/var/log/nginx/error.log \
        --http-log-path=/var/log/nginx/access.log \
        --pid-path=/var/run/nginx.pid \
        --lock-path=/var/run/nginx.lock \
        --http-client-body-temp-path=/var/cache/nginx/client_temp \
        --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
        --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
        --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
        --http-scgi-temp-path=/var/cache/nginx/scgi_temp \
        --user=www-data \
        --group=www-data \
        --with-http_ssl_module \
        --with-http_realip_module \
        --with-http_addition_module \
        --with-http_sub_module \
        --with-http_dav_module \
        --with-http_flv_module \
        --with-http_mp4_module \
        --with-http_gunzip_module \
        --with-http_gzip_static_module \
        --with-http_random_index_module \
        --with-http_secure_link_module \
        --with-http_stub_status_module \
        --with-mail \
        --with-mail_ssl_module \
        --with-file-aio \
        --with-http_spdy_module \
        --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' \
        --with-ld-opt='-Wl,-z,relro -Wl,--as-needed' \
        --with-ipv6 \
        --add-module=nginx-goodies-nginx-sticky-module-ng-c78b7dd79d0d
    make
    checkinstall --install=no -y
    mkdir -p /var/cache/nginx /tmp/nginx /etc/nginx/sites-enabled
  args:
    chdir: /usr/src
  tags: nginx

The sticky module can then be enabled in your nginx config by adding the sticky secure directive to the nginx upstream configuration:

# nginx vhost file

proxy_cache_path /tmp/nginx/myappname levels=1:2 keys_zone=myappname:8m max_size=100m inactive=10m;

upstream myappname {
    sticky secure;

    server <host1iphere>:3000;
    server <host2iphere>:3000;
    server <host3iphere>:3000;
}

server {
    listen 443 ssl spdy;

    ## ... rest of nginx ssl config ...

    location / {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Nginx-Proxy true;

        proxy_http_version 1.1;
        proxy_redirect off;

        proxy_ignore_headers Set-Cookie;
        proxy_hide_header Cache-Control;

        proxy_cache myappname;
        proxy_cache_key $host$uri$is_args$args;
        proxy_cache_valid 200 1m;
        proxy_cache_bypass $http_cache_control;
        add_header X-Proxy-Cache $upstream_cache_status;

        add_header X-Upstream $upstream_addr;

        proxy_pass http://myappname;
    }
}

Scheduling philips hue light commands using node cron tasks

After getting my first smart lighting set, a Philips Hue color ambiance starter kit, as a present from the LifelyNL team, I immediately started integrating it into my home automation system.

First off, the bridge had to be integrated into my raspapi node application, so that I can access its functionality from anywhere I have access to my home automation API. I used the node-hue-api npm module to easily configure my Hue bridge and send commands to the Hue lights. I created the following express routes that expose some of the features through a nice REST API (the rest of the code can be found on GitHub):

var express = require('express');
var router = express.Router();
var hue = require('node-hue-api');
var q = require('q');
var HueApi = hue.HueApi;
var lightState = hue.lightState;

var hostname = 'insert bridge ip here';
var username = 'insert bridge username here';
var api = new HueApi(hostname, username);

router.get('/lights', function(req, res) {
    api.lights(function(err, lights) {
        if (err) throw err;
        res.send(lights);
    });
});

router.get('/on', function(req, res) {
    api.setGroupLightState(0, {'on': true}) // provide a value of false to turn off
        .then(function(result) {
            res.send(result);
        })
        .fail(function(error) {
            res.send(error);
        })
        .done();
});


router.get('/off', function(req, res) {
    api.setGroupLightState(0, {'on': false}) // provide a value of false to turn off
        .then(function(result) {
            res.send(result);
        })
        .fail(function(error) {
            res.send(error);
        })
        .done();
});

// for the other routes, check my github page

module.exports = router;

Now the lights are switchable through the API, which is great, but I’d like them to turn on automatically when it gets dark, and turn off when it is time to go to sleep. It sounds weird, but it actually works as a nice reminder at night to quit what we are doing and get some sleep ;)

I extended my raspschedule program, also running on the Raspberry Pi home server, with the code that switches the lights on and off through the raspapi API on localhost:3000. Using the npm module suncalc I know exactly when sunset and sunrise occur on the current day. The cron jobs are then scheduled around those exact times, and the lights are only switched on if the sun is not already shining at that moment!

var CronJob = require('cron').CronJob;
var request = require('request');
var suncalc = require('suncalc');
var moment = require('moment');

var apiUrl = 'http://localhost:3000/api';

var geolocation = {
    lat: 52,
    lng: 4
}

function lightsOn() {
    request(apiUrl + '/lights/on', function(error, response, body) {
    })
}

function lightsOff() {
    request(apiUrl + '/lights/off', function(error, response, body) {
    })
}

var lightsOnWeekdaysMorning = new CronJob({
    cronTime: '00 00 07 * * 1-5',
    onTick: function() {
        var times = suncalc.getTimes(new Date(), geolocation.lat, geolocation.lng)
        console.log("sunrise at: " + times.sunrise + ", triggered at: " + new Date());
        if (times.sunrise > new Date()) {
            lightsOn();
        }
    },
    start: true,
    timeZone: 'Europe/Amsterdam'
});

var lightsOffWeekdaysMorning = new CronJob({
    cronTime: '00 20 08 * * 1-5',
    onTick: function() {
        lightsOff();
    },
    start: true,
    timeZone: 'Europe/Amsterdam'
});

var lightsOnEvening = new CronJob({
    cronTime: '00 00 04 * * *',
    onTick: function() {
        var times = suncalc.getTimes(new Date(), geolocation.lat, geolocation.lng)
        console.log("sunset: " + times.sunset)
        console.log("scheduling for: " + moment(times.sunset).subtract(30, 'minutes').toDate())
        new CronJob(
            moment(times.sunset).subtract(30, 'minutes').toDate(), 
            function() {
                console.log("turning light on evening at: " + new Date())
                lightsOn();
            },
            function() {
                /* This function is executed when the job stops */
            },
            true,
            'Europe/Amsterdam'
        );
    },
    start: true,
    timeZone: 'Europe/Amsterdam'
});

var lightsOffWeekdaysEvening = new CronJob({
    cronTime: '00 00 22 * * 0-5',
    onTick: function() {
        lightsOff();
    },
    start: true,
    timeZone: 'Europe/Amsterdam'
});

var lightsOffWeekendEvening = new CronJob({
    cronTime: '00 00 01 * * 0,6',
    onTick: function() {
        lightsOff();
    },
    start: true,
    timeZone: 'Europe/Amsterdam'
});
