Data models, APIs and GraphQL in practice

This talk was given on 12-03-2018 as a talk at the HvA CMD

From buzzwords to products, working with new technology at a software agency

The technology industry is overflowing with buzzwords. I’ll look into a set of these words and explain how we translate them into tangible solutions for our customers. Along the way I’ll explain what our development process looks like and show you how we use tech like React, React Native, Node.js, Docker and GraphQL in our versatile teams to deliver custom-built software to our customers in all kinds of industries.

This talk was given on 22-02-2018 as a conference talk at the Tech020 2018

GraphQL, Apollo and optimistic UI updates

GraphQL has been a great replacement for REST APIs in recent projects at the Lifely office. It offers an amazing abstraction over the available data model and allows the frontend views to be extremely flexible. Apollo helps the React components get their required data through GraphQL endpoints and has some awesome UI optimizations that we’d love to share, after kickstarting you with some GraphQL/Apollo basics.
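
To give a taste of what those optimizations look like, here is a minimal sketch (not taken from the talk; the mutation, type and variable names are hypothetical, and client is assumed to be an existing ApolloClient instance). An optimistic UI update with Apollo boils down to providing an optimisticResponse that is written to the cache immediately, while the real server round trip is still in flight:

import gql from 'graphql-tag';

// hypothetical mutation: rename a project
const RENAME_PROJECT = gql`
    mutation RenameProject($id: ID!, $name: String!) {
        renameProject(id: $id, name: $name) {
            id
            name
        }
    }
`;

client.mutate({
    mutation: RENAME_PROJECT,
    variables: { id: projectId, name: newName },
    // Apollo writes this fake result to its cache right away, so the UI
    // updates instantly; it is replaced once the real server response arrives
    optimisticResponse: {
        renameProject: {
            __typename: 'Project',
            id: projectId,
            name: newName,
        },
    },
});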

This talk was given on 20-02-2018 as a talk at the Frontend Forward meetup “APIs Anonymous” at the Voorhoede

The impact and glory of GraphQL in applications at Lifely

GraphQL has been an amazing technology for software agencies like Lifely over the past few years. GraphQL gives us a lot of options in how to connect frontend applications to a plethora of databases, backends, APIs and legacy systems. Peter will share Lifely’s experiences switching over from REST to GraphQL, explain the different ways of using GraphQL in our projects and tell you more about the challenges we faced and how we solved them.

Slides and sources can be found on GitHub

This talk was delivered by Bryan te Beek (because I was ill myself) on 22-01-2017 as a talk at the GraphQL in production meetup at Lifely

ReactNative at Lifely

At Lifely, we have had a lot of experience with hybrid multi-platform applications. We started with Ionic, moved to Cordova with Angular, tried Cordova with React, but could never produce an experience that was on par with our quality standard. After discovering React Native we found we could finally build the apps we had wanted to build all along, with the familiarity of JavaScript/TypeScript and the React paradigm.

In this presentation I give an overview of that history and explain why React Native is such a great platform for app builders developing hybrid applications:

  • 1 codebase
  • 1 language expertise
  • 1 QA (business logic)
  • rich plugin/bridge ecosystem
  • hot code reload
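
To illustrate that familiarity (a minimal sketch, not taken from the slides), a React Native component looks just like a React web component, only rendering native View/Text primitives instead of DOM elements, from one codebase on both iOS and Android:

import React from 'react';
import { View, Text, Button } from 'react-native';

// the exact same component tree runs on both iOS and Android
const Counter = ({ count, onIncrement }) => (
    <View>
        <Text>Pressed {count} times</Text>
        <Button title="Increment" onPress={onIncrement} />
    </View>
);

export default Counter;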

The slides can be found on GitHub

This talk was given during the lunch talks at Egeniq on November 21st, 2017.

Websockets and DDP in production

In this guest lecture I show an example of a real-time application we built using Meteor and DDP. Along the way I explain how the DDP real-time protocol works, what the data going over the websockets looks like, and how to consume this API data with the ddp-client npm package.
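
To give an idea of what that looks like, here is a minimal sketch (not taken from the lecture itself); it assumes a connect/subscribe API in the style of the classic node DDP clients, and the publication and collection names are hypothetical:

const DDPClient = require('ddp-client');

// connect to a (hypothetical) Meteor server over its websocket endpoint
const ddp = new DDPClient({ host: 'localhost', port: 3000 });

ddp.connect((error) => {
    if (error) throw error;

    // subscribe to a hypothetical "messages" publication; the server keeps
    // pushing added/changed/removed documents over the websocket
    ddp.subscribe('messages', [], () => {
        console.log('messages ready:', ddp.collections.messages);
    });
});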

The slides and materials can be found on GitHub

This talk was given on 21-04-2017 as a guest lecture at the HvA CMD - Everything Web Minor for the course “Real-Time Web”

Switching from REST to Meteor, DDP and Apollo

After creating a bunch of web applications with Node.js and Express as a REST backend and Angular on the front-end, we had the opportunity to take on a client project that we would develop in MeteorJS. As described in a previous post, Meteor offers great simplicity of development, avoiding the tangle of Node versions, Vagrant VM provisioning, grunt/gulp tasks and the other work needed to get an application ready to deploy to production.

Then came DDP: the real-time API for web applications packaged with Meteor, and the publish/subscribe protocol I’ve grown to both love and hate. DDP has served us well in making our applications reactive, with immediately updating news feeds and real-time chats, but it has also caused us a lot of pain through its heavy performance footprint and lack of cacheability. For the most heavily used data in our applications we had to switch back to regular, cacheable HTTP requests to get the data to the multitude of users efficiently.

The recent introduction of Apollo has come as a natural successor to DDP. Taking advantage of the GraphQL query language designed by Facebook, Apollo is a module that replaces the data layer of Meteor, giving us both the benefits of (optional) real-time updating data and a flexible, uniform way of aggregating data from different resources, be it SQL, Mongo or any other microservice that can provide us with data.

Please note that the current version of Apollo does not operate over a websocket like DDP did, but uses plain old HTTP and polling to query for data and updates. This allows us to cache our data again, and only make it real-time when needed. We have found that the common case for data is that it should be fast, and that “realtimeness” is not always necessary as a default.
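
As an illustration (a minimal sketch, not from our codebase; the query is hypothetical and client is assumed to be an Apollo Client instance), a single query can opt into near-real-time behaviour with a poll interval while everything else stays plain, cacheable HTTP:

import gql from 'graphql-tag';

// only this query polls every 10 seconds; all other queries remain
// ordinary HTTP requests that can be cached as usual
client.watchQuery({
    query: gql`
        {
            newsItems {
                id
                title
            }
        }
    `,
    pollInterval: 10000,
}).subscribe({
    next: ({ data }) => console.log(data.newsItems),
});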

Even though we are only just starting our first projects with GraphQL, we can already see that resolving the required data on the backend yourself, in a structured and predictable manner, helps us get data from any source to the client fast, aggregated in a way that was impossible when using REST.
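
To make that concrete, here is a minimal sketch (not from one of our projects) of a schema built with graphql-tools, where the resolvers behind a single query aggregate data from two different backends; the types, fields and the sql/mongo helpers on the context are all hypothetical:

import { makeExecutableSchema } from 'graphql-tools';

const typeDefs = `
    type Document {
        id: ID!
        title: String
    }

    type User {
        id: ID!
        name: String
        documents: [Document]
    }

    type Query {
        user(id: ID!): User
    }
`;

const resolvers = {
    Query: {
        // the user record could come from a (hypothetical) SQL helper...
        user: (root, { id }, context) => context.sql.findUserById(id),
    },
    User: {
        // ...while the user's documents come from a (hypothetical) Mongo helper,
        // aggregated into a single response for the client
        documents: (user, args, context) => context.mongo.findDocumentsByUserId(user.id),
    },
};

export default makeExecutableSchema({ typeDefs, resolvers });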

Using gitstats to get an overview of git activity in a repository

I recently wanted to get some more insight into a git project that has been running for a long time. gitstats is a tool that generates some nice statistics, such as the busiest commit moments and the number of commits per user. The documentation is a bit finicky, but you can easily install it via Homebrew:

brew install gitstats

and run it from inside a git repository folder, passing the repository path and an output folder:

gitstats . gitstats

Now open the index.html inside the freshly created gitstats folder and enjoy the juicy git stats in glorious web 1.0 fashion.


Full document text search in Node.js using Elasticsearch and the elasticsearch-mapper-attachments plugin

In one of our recent Meteor applications we included a full document search feature using Elasticsearch. Elasticsearch creates an index of documents based on metadata and their plain text content. For this feature we also needed to support PDF and Office file types (doc, docx, pptx etc.). To accommodate this, Elasticsearch has a plugin called elasticsearch-mapper-attachments.

Because we wanted to use a Docker image to run Elasticsearch, we decided to extend the elasticsearch:2.3.3 image and add the plugin on top of it. The plugin takes care of transforming the documents into plain text using Apache Tika. The plain text of the document is then used to create a document in Elasticsearch.

FROM elasticsearch:2.3.3

RUN bin/plugin install mapper-attachments

EXPOSE 9200 9300
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]

My colleague Bryan pushed it to Docker Hub for anyone to use under the tag bryantebeek/elasticsearch-mapper-attachments:2.3.3. We can now provision our server with this Docker image using Ansible, and configure the volumes to ensure the indexes created by Elasticsearch are persisted on the Docker host disk.

---
- name: elasticsearch | docker | start/update elasticsearch
  docker_container:
    name: elasticsearch
    image: bryantebeek/elasticsearch-mapper-attachments:2.3.3
    state: started
    restart_policy: always
    volumes:
      - /var/data/elasticsearch:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
  tags: elasticsearch

The Elasticsearch service is now running and ready to accept connections from our Node application code. Before we can index any documents, we have to create the index itself. We use the elasticsearch npm module to set up the connection to Elasticsearch:

import elasticsearch from 'elasticsearch';

let client = new elasticsearch.Client({
    host: "localhost:9200",
    log: ["error", "warning"]
});

Now we can create the index. We call it “files” and set the file property to be of type “attachment” to trigger the use of the mapper plugin:

client.indices.create({index: 'files'})
.then(() => {
    // create a mapping for the attachment
    return client.indices.putMapping({
        index: 'files',
        type: 'document',
        body: {
            document: {
                properties: {
                    file: {
                        type: 'attachment',
                        fields: {
                            content: {
                                type: 'string',
                                term_vector: 'with_positions_offsets',
                                store: true
                            }
                        }
                    }
                }
            }
        }
    });
});

Whenever we now upload a document in the application, we read it into memory, encode it as base64 and use the same Elasticsearch client to create a new entry in the “files” index:

import fse from 'fs-extra'; // fs-extra re-exports fs.readFileSync

const fileContents = fse.readFileSync('some/uploaded/filepath');
const fileBase64 = new Buffer(fileContents).toString('base64');

client.create({
    index: 'files',
    type: 'document',
    id: 'somefileid',
    body: {
        file_id: 'somefileid',
        file: {
            _content: fileBase64
        }
    }
})
.catch((err) => {
    console.error('Error while creating elasticsearch record', err);
});

The document is now added to Elasticsearch, and ready to be retrieved as part of a search query result. When a user uses the search functionality, a query is sent through the Elasticsearch client and the results are returned to the front-end:

client.search({
    q: query,
    index: 'files'
}, (error, result) => {
    if (error) return done(error); // done: the callback provided by the surrounding code
    console.log(result.hits);
});

The hits object in the result contains an array of hits sorted by search score, which can then be rendered as you please!
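
For example, a minimal sketch of mapping the raw hits to a payload for the front-end (using the standard Elasticsearch hit fields; the payload shape itself is just an example):

// each hit carries the document id, the relevance score and the stored source
const files = result.hits.hits.map((hit) => ({
    id: hit._id,
    score: hit._score,
    fileId: hit._source.file_id,
}));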

Deploy different environment variable configurations with the same Docker image using combine in Ansible 2.0

As described in an earlier post about improving performance in Meteor applications, we separated the synced-cron jobs out of a Meteor application by running two deployments of the same Docker image, differentiated by a special environment variable.

In the Ansible file below we can see two Ansible docker tasks that both run the same Docker image. The environment settings are loaded from the app.env group var. Using the Ansible combine filter that was introduced in Ansible 2.0, we are now able to combine the configured group vars with one special environment variable, CRON_ENABLED, which is set to 0 or 1 depending on whether we are deploying a cron instance or a webserver instance of the same Docker image. The Ansible when clause ensures that only one of these two tasks is run for each individual server.

- name: docker | start application
  docker:
    name: app
    image: "{{ docker.registry.organization }}/{{ docker.registry.project }}:{{ tag }}"
    username: "{{ docker.registry.username }}"
    email: "{{ docker.registry.email }}"
    password: "{{ docker.registry.password }}"
    state: reloaded
    restart_policy: always
    pull: always
    ports:
      - "{{ansible_eth1.ipv4.address}}:3000:3000"
    env: "{{ app.env|combine({'CRON_ENABLED': 0}) }}"
  when: "'appservers' in group_names"
  tags: app

- name: docker | start application
  docker:
    name: app
    image: "{{ docker.registry.organization }}/{{ docker.registry.project }}:{{ tag }}"
    username: "{{ docker.registry.username }}"
    email: "{{ docker.registry.email }}"
    password: "{{ docker.registry.password }}"
    state: reloaded
    restart_policy: always
    pull: always
    ports:
      - "{{ansible_eth1.ipv4.address}}:3000:3000"
    env: "{{ app.env|combine({'CRON_ENABLED': 1}) }}"
  when: "'cronservers' in group_names"
  tags: app

For older posts, visit the Archive

Found a typo? Something is wrong in this documentation? Please fork and edit it!