5 Ways to Instantly Improve Your Angular Codebase

Angular is not an easy framework to master, and building apps that are easy to read and maintain is no doubt an art. This article shares 5 ways to improve the quality of your Angular codebase, covering everything from file naming to more complicated topics such as state management with Redux. Learn how you can use all these tips to improve the way you code your Angular apps.

Let’s Begin!

1. Follow the Rules

People choose Angular over other frameworks for its rules. The Angular framework is opinionated about how things should be done. This means that it comes with certain rules of its own, which should be followed to create a uniform codebase across an organization.

This approach is quite useful when working across company boundaries, because it helps newcomers gel into the team quickly thanks to their familiarity with the code.

In other words, you need to follow Angular design guidelines to get the most out of its framework. This will not only add quality to your code but will also make your life a lot easier.

Given below is a set of rules you may already be familiar with: the Angular Style Guide.

“We  love to do things our way! We don’t want to follow someone else’s rules!”

If you don’t want to follow Angular’s rules, then you should not choose it as your front-end framework. A number of other frameworks are available that may better suit your expectations; you won’t be happy working with Angular.

Naming the Files 

Naming files is one example of the Angular rules you have to follow. Files in Angular follow a very particular scheme, also known as the naming convention. Every file containing an Angular structure, like a component, a pipe or a module, is named in this way: name.structure.extension.

So, if you want to create a component to display customers, the name is “customer”, the structure is “component”, and the file extension is either “.ts”, “.css” or “.html”. The component file, for example, would be customer.component.ts.


The Angular CLI takes care of all this for you. Its ng generate command creates a structure, and the resulting files follow the naming convention. Check this tutorial to learn more about the Angular CLI.
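As a quick sketch, generating the customer component from above could look like this (the exact files created may vary by CLI version):

```shell
# Generate a "customer" component; the CLI applies the naming convention for us
ng generate component customer

# Typical files created, following the [name].[structure].[extension] scheme:
#   src/app/customer/customer.component.ts
#   src/app/customer/customer.component.html
#   src/app/customer/customer.component.css
#   src/app/customer/customer.component.spec.ts
```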

2. Group Code into Modules

Placing everything into the app-module is common among developers, but it messes everything up. Try to avoid it and use modules.

Modules help to organize your code into small chunks. This makes the code easier to read and errors easier to find when troubleshooting. In addition to the cosmetic advantage, you also improve the user experience, because the client only downloads the parts of the application it actually needs.

Read a guide on modules if you are unfamiliar with them. However, don’t structure your modules any way you want; that would only make things worse. Luckily, Angular has defined some ways to help you structure your apps into modules.

Feature Modules

Feature modules are one of the categories of modules available in the Angular framework. As the name gives away, they are used to encapsulate one specific feature. These modules are created in a separate folder with the feature’s name.

For instance, the feature module for the feature “feature” is placed in a directory named feature. The module itself follows the naming convention shared above: feature.module.ts.
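A minimal sketch of such a feature module (the FeatureComponent and file layout are hypothetical):

```typescript
// file: feature/feature.module.ts -- a hypothetical feature module
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

import { FeatureComponent } from './feature.component';

@NgModule({
  imports: [CommonModule],
  declarations: [FeatureComponent],
  exports: [FeatureComponent], // expose only what other modules need
})
export class FeatureModule {}
```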

Why do you need feature modules?

They structure our code in a way that makes it easy to understand and read, and they mark the boundaries between different features. This helps prevent the confusion and potential bugs that are otherwise caused by overlapping responsibilities.

Another benefit of the feature module is lazy loading. Lazy loading is a technique which helps in downloading only the required module to a client’s device. The other modules are not downloaded.

For instance, in case of an administrative section of a blog, it is unwise to serve that code to every user visiting that site.

This code is separated into the admin section and placed into a feature module, which is loaded with the help of lazy loading. When a user visits the site, they only download the code for the blog section when visiting the blog; the other JavaScript is only loaded when they visit other sections.
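Lazy loading is wired up in the router configuration. A sketch, assuming a hypothetical AdminModule in an admin directory (newer Angular versions use the dynamic import() syntax shown here; older versions used a string path):

```typescript
// file: app-routing.module.ts -- hypothetical routes for the blog example
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

const routes: Routes = [
  // The admin bundle is only downloaded when a user navigates to /admin
  {
    path: 'admin',
    loadChildren: () => import('./admin/admin.module').then(m => m.AdminModule),
  },
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule],
})
export class AppRoutingModule {}
```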

Core and Shared-Module

Feature modules encapsulate everything into a separate module, so that it can’t be used in other parts of the application without importing it. However, in some situations this doesn’t make much sense.

Going back to the same example of the blog section, suppose we had to import the whole admin-module just to use a simple utility directive. This would be quite confusing and would also cancel out the benefits of lazy loading. For this reason, core and shared modules are used.

Shared Modules

  • Shared modules are used for pieces of your application that need to be used across several areas (features) of your application.
  • If a component is going to be re-used in several features, it belongs in a shared module.
  • Services and pipes are also commonly placed in shared modules.
  • Shared modules provide a way to share common pieces to fill out feature module “sections”.

A text-formatting module is a good example of a shared module. It contains a bunch of pipes to format text in a specific manner.

This module is then used by all the feature modules without breaking the encapsulation of the other modules.

Core Module

The feature and shared modules are not enough to cover all our requirements. We also need a module for the use-once, app-wide services. These are encapsulated in a CoreModule, placed in a directory called “core.”

We declare all of our app-wide services that are used just once in this module, and import it into the app-module only.

This keeps our app-module nice and clean.

However, the core-module is not used only for services. Everything that is used app-wide but does not fit into a shared module can go into the core-module.

Loading spinners shown at the start of an app are a good example. They are not used anywhere else in the app, which is why creating an extra shared module for them would be unsuitable.
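A CoreModule sketch, with a hypothetical LoadingSpinnerComponent and an assumed app-wide AuthService:

```typescript
// file: core/core.module.ts -- hypothetical core module
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

import { LoadingSpinnerComponent } from './loading-spinner/loading-spinner.component';
import { AuthService } from './auth.service';

@NgModule({
  imports: [CommonModule],
  declarations: [LoadingSpinnerComponent],
  exports: [LoadingSpinnerComponent],
  providers: [AuthService], // use-once, app-wide services live here
})
export class CoreModule {}
```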

Provide Private Services in Components

Usually, services in Angular are provided at the global, application level. The global scope is only helpful if the global-singleton pattern is actually required. For example, if your service is responsible for caching things, you need one global instance; otherwise, every component would have its own separate cache due to the scoped dependency injection in Angular.

Other services do not need to be provided globally and are used by just one component. It’s better to provide such a service inside the component instead of a module, especially when the service is tightly linked to that component.

Otherwise you would have to define the service in a module to make it accessible everywhere it may be needed.

This ties services to features (feature-modules), which makes them easier to find and understand in the right context. It also enables the benefits of lazy loading and reduces the hazard of dead code.
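Providing a service at the component level is just a matter of the providers array on the component decorator. A sketch with a hypothetical CustomerFilterService:

```typescript
// Hypothetical component-scoped service registration
import { Component } from '@angular/core';
import { CustomerFilterService } from './customer-filter.service';

@Component({
  selector: 'app-customer-list',
  templateUrl: './customer-list.component.html',
  // Each CustomerListComponent instance gets its own service instance
  providers: [CustomerFilterService],
})
export class CustomerListComponent {
  constructor(private filter: CustomerFilterService) {}
}
```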

3. Don’t Use Logic in Your Components

Keeping Logic outside your components is always a good idea. This also increases the quality of your code.

Here are the reasons why you should keep your logic out of your components:

  • Testing the user interface and components is quite difficult in comparison to testing pure logic. This is why your business logic should live in a separate service.
  • Having your business logic in a separate service helps you write effective tests efficiently and quickly. Other components can also use your logic when it is placed in a service, which lets you reuse more code and consequently write less of it. Code that does not exist cannot contain bugs.
  • Last but not least, the code becomes easier to read when the logic sits in a separate file.


When it comes to state, a lot of challenges arise from each component having its own state. It gets confusing, and you quickly lose track of which component is in which state. That makes fixing errors quite difficult and results in bugs no one wants to have. This can be a big problem, especially in large applications.
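As a sketch of the idea, business logic such as price calculation can live in a plain service class while the component only renders the result. The names here are made up, and in a real Angular app the class would carry the @Injectable() decorator:

```typescript
// Hypothetical business logic extracted from a component into a service.
// In Angular, this class would be decorated with @Injectable().
export class CartService {
  // Pure logic: trivially unit-testable without any UI or framework setup
  total(prices: number[], taxRate: number): number {
    const net = prices.reduce((sum, p) => sum + p, 0);
    return Math.round(net * (1 + taxRate) * 100) / 100;
  }
}
```

A test for this class needs no TestBed or DOM at all, which is exactly the point of keeping logic out of components.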

4. Make Sure Your Async Code is Correct

As discussed above, Angular is a framework with strict rules to achieve code consistency, and the same is true for asynchronous code. The Angular team uses the RxJS library for all asynchronous tasks; the library makes use of the observer pattern.

Avoid Promises

RxJS somewhat overlaps in functionality with the standard JavaScript promise. Both are designed to handle asynchronous code, but RxJS is far more powerful: so-called RxJS observables can resolve to more than just one value. In other words, they are streams of values.

You can also pass only one result to such a stream, which creates the overlap with promises. One question comes to mind in this situation.

What should we use? The simple promise, which allows us to use the TypeScript await operator? Or the more powerful RxJS observables? What if we use them both?

Here is my opinion:

I generally like the style of the await operator for promises, but in my view we should stick to the opinion of the framework, and that is to use RxJS everywhere.

Use rxjs everywhere.

We can see this by looking at the Angular HTTP client: it returns RxJS observables, even though an HTTP call can never yield more than one response.

Mixing the two is not a good solution either. That way you get different implementations that are not compatible with each other within the same application. This is not something you want.

Using the Async Pipe

As stated above, RxJS observables are a little complicated, and using them incorrectly can lead to serious bugs.

The most common mistake I make is forgetting to unsubscribe from the observable. This not only causes memory leaks but also triggers unwanted calculations and changes in your application:

public result;
  ngOnInit() {
    this.http.get('').subscribe(result => {
      this.result = result;
    });
  }
But avoiding this mistake is easy in Angular: use the async pipe. This pipe automatically unsubscribes from the observable once the component is destroyed.

public result$;
  ngOnInit() {
    this.result$ = this.http.get('');
  }

and bind the template to the observable using the async pipe:

<p>{{result$ | async}}</p>

This way the code looks simple and clean.

5. Use a Central State Management (Such as Redux)

As your app becomes larger, code quality can decline drastically. Hundreds of components, each having their own state, not only become confusing but also difficult to debug.

Centralized state management is the solution to all of these problems. What is centralized state management? It means that all of our application state is stored in one single location instead of being dispersed all over the app. The overall state is controlled by one instance, which is the only one allowed to make changes to the state. This kind of state management has many advantages:

  • You don’t have to search for the state. As it is all in one place, you don’t need to search through the component tree.
  • It’s easy to transfer the state between applications or persist it to disk. Since it’s just one object, it does not have to be gathered from several places.
  • Problems like component-to-component communication are resolved as well: components just react to state changes.
  • Depending on which form of central state management you select, you also get nice features like time-travel debugging (Redux/ngrx).

Should Redux/ngrx be used?

Again, there are different opinions about this out there. Here is my point of view:

Personally, I don’t think everyone should begin rewriting their apps to include Redux. Even when starting from scratch, I don’t think Redux is needed in most cases.

It totally depends on the kind of application you want to build. Here are the different cases:

  • If you want to build a large application with several components, developed by a large team, then Redux is probably the best option.
  • For medium-sized applications, no larger than the average app available on an app store and built by a team of around 10 people, Redux should be avoided. That’s because it comes with a variety of boilerplate code which would unnecessarily complicate your app.
  • It’s a clear no in the case of small apps.

In these medium and small applications, using Redux would overcomplicate the code through its hundreds of boilerplate files. I am not in favor of boilerplate code at all.

However, there is a library under development that promises zero boilerplate code when working with Redux and ngrx. It’s called angular-ngrx-data and is worth checking out.


I hope my 5 recommendations on how to increase the quality of your Angular codebase help you a lot.

Share this article with your friends and colleagues and help them become better Angular developers.

Good Luck!

AWS Lambda with Node.js: Getting Started

AWS Lambda is a service for building event-driven applications that are highly scalable, but many people are unclear about how to use it. You may have heard terms such as serverless, function-as-a-service, or AWS Lambda. If you want to learn more about these terms, then you are in luck. This article explains serverless and AWS Lambda, and shows how to build a scalable image-processing app with AWS Lambda and Node.js.

Defining Serverless: An Overview

There was a time when everything online was hosted on physical machines known as servers. The servers were kept in server rooms and companies mostly built and looked after their own data centers. However, this required a lot of resources, time and cost. 

In recent years, a new technology known as cloud computing has emerged in the market.  Today, all types of applications can be hosted on it easily. This means that you don’t need a data center of your own.

You can easily deploy your applications to a cloud server in minutes, in any part of the world. Yet scaling, server provisioning, and maintenance remained hectic tasks. Fortunately, serverless, a new shift in cloud computing technology, has emerged. It takes over server provisioning, logging, monitoring, and maintenance of the entire infrastructure, letting you break your business logic into small, single-purpose functions and focus on them.

Serverless takes the responsibility of managing servers away from you, but it still requires servers and is not literally serverless. Amazon Web Services is one such provider that takes care of the servers for you.

What are Amazon Web Services?

Amazon Web Services, commonly referred to as AWS, is a renowned name in the cloud computing industry. According to its own figures, AWS offers an extremely dependable, scalable, and economical infrastructure, and hosts hundreds of thousands of businesses in around 190 countries. As per Canalys’ 2018 report, AWS owns a 32.6% market share, greater than any other company.

With this fact established, let’s move on to teaching you something that will completely blow you away.

AWS Lambda Functions

Lambda is the computing service provided by AWS. It helps you run your code without having to deal with cloud servers. A Lambda function is triggered by an event and dies down after execution. A Lambda function performs only one thing, such as fetching or creating a blog post, or sending an email.

3 Ways to create a Lambda Function on AWS:

  1. You can use the AWS console, a web interface offered by AWS for accessing and managing their services. However, writing an application from the console takes a lot of time and effort, so it’s not the recommended option.
  2. AWS also provides a cloud-based IDE that lets you write, run, and debug your code from the browser.
  3. Lastly, you can use your local development environment with any text editor and deploy the code with a single command. This article explores this option.

Creating an AWS Account

You must have an AWS account to run a Lambda function. The account requirements include an email address, a phone number, and a valid credit card. You can always opt for AWS’s free tier, which allows you to use almost all AWS services without paying anything for a year.

Here are the steps for Account Setup:

  1. Visit the AWS console.
  2. Choose “Create a Free Account.”
  3. Enter your email address, choose a strong password, contact and credit card details. Make sure all the details entered are correct.
  4. Complete identity verification process via Amazon’s phone call.
  5. You will receive a 4-digit number on your computer screen. Enter it on your phone’s keypad.
  6. Choose the free plan.
  7. Well done! You have signed up for a brand-new AWS account.

Local Development Environment Set Up

This tutorial uses the Serverless framework, a CLI tool written in Node.js, to write and deploy Lambda functions. It is compatible with AWS, Microsoft Azure, Google Cloud Platform, Spotinst, Kubeless, IBM OpenWhisk, and more.

The Serverless framework is easy to install. First, you need a Node.js runtime. Install the Node.js 8.10 runtime, which is the version compatible with AWS Lambda. Also, make sure your local development environment is as close to the production environment as possible, including the runtime.

If you already have other Node.js versions installed, make use of NVM to install Node.js 8.10 runtime. NVM also helps to switch between Node.js versions.

$ nvm install v8.10

For switching between Node.js versions, do this:

$ nvm use 8.10

After the Node.js runtime is ready, install the Serverless framework:

$ npm install -g serverless

To check the Serverless framework installation:

$ serverless --version

How to Create a Programmatic User on AWS

The Lambda function doesn’t live in your local environment permanently. It must be transferred into the AWS environment. This procedure is called deployment. Serverless framework requires a way to access AWS resources and deploy your Lambda functions. 

This requires a programmatic user account. This account does not log into AWS console. It provides access to AWS resources through API calls with the help of access keys that you will create next.

Steps to Create a Programmatic User Account

  1. Sign in to the AWS console and choose the IAM service.

  2. Select Add user to start the account creation process.

  3. Type lambda-example-cli as the username, enable programmatic access by checking the checkbox, and click Next: Permissions to proceed.

  4. Select Attach existing policies directly and search for administrator access. Check the AdministratorAccess box. A policy is an object that defines the permissions of a user, group, or role.

  5. Click the Create user button.

  6. Download or copy the CSV file that contains your access key ID and secret access key, and keep it safe. These access keys are used to make API calls; anyone who obtains them can make API calls and control your AWS account.

  7. Configure the Serverless CLI with your AWS credentials in order to deploy the code:

serverless config credentials --provider aws --key <your_access_key_id> --secret <your_access_key_secret>

Let’s first create a simple hello world app with Lambda and Node.js to get started. After that, we will create a more advanced app that downloads an image from a URL, rescales it, and uploads it to AWS S3, a scalable object storage service.

Start by using the Serverless CLI tool: 

$ serverless create --template hello-world

If the above command runs successfully, you will have two files:

├── handler.js
└── serverless.yml

We supplied the --template argument to let the Serverless CLI know our choice of template. There are dozens of templates the Serverless CLI tool supports; you can find them in this repository.

handler.js contains the Lambda function where you put your logic:

'use strict';
module.exports.helloWorld = (event, context, callback) => {
  callback(null, { statusCode: 200, body: JSON.stringify({ input: event }) });
};

It accepts three arguments: event, context, and a callback.


The event argument contains the event data. There are different event types, and each type has its own attributes. The way Lambda functions work can be a bit hard to grasp at first. The first thing you must know is that a Lambda function is triggered by a service and doesn’t run on its own. Here is a list of services that can invoke Lambda functions.


The context argument is used to pass runtime parameters to the Lambda function.


The callback argument is used to return responses to the caller.


serverless.yml contains the API definition and other resources required by your application to work properly. This article uses S3 for storing images.

Make some changes to serverless.yml. Change the runtime property to nodejs8.10, and add a new region property to the provider object. This deploys the app to the specified region. Specifying a region is optional; AWS uses us-east-1 by default. In production, however, always choose a region close to your users, due to latency.

service: serverless-hello-world
# The `provider` block defines where your service will be deployed
provider:
  name: aws
  runtime: nodejs8.10
  region: eu-west-1

Deploying the App

Deploy the app with the deploy command. Enter the following from the console:

$ serverless deploy

You will see the result in your console on completion. Note the endpoint, as it’s quite important.

api keys:
  GET -
  helloWorld: serverless-hello-world-dev-helloWorld

When you access the endpoint in your browser, you will see the request printed back to you. Pat yourself on the back: you have deployed your first Lambda app.

Going Advanced

The hello world app built previously was quite simple. Let’s go a bit more advanced and build the image-processing app discussed above.

You can start a new project or modify Hello World app.

Edit serverless.yml as follows:

# filename: serverless.yml
service: ImageUploaderService

custom:
  bucket: getting-started-lambda-example

# The `provider` block defines where your service will be deployed
provider:
  name: aws
  runtime: nodejs8.10
  region: eu-west-1
  stackName: imageUploader
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:PutObject"
      Resource:
        - "arn:aws:s3:::${self:custom.bucket}/*"

# The `functions` block defines what code to deploy
functions:
  UploadImage:
    handler: uploadImage.handler
    environment:
      Bucket: ${self:custom.bucket}
    # The `events` block defines how to trigger the uploadImage.handler code
    events:
      - http:
          path: upload
          method: post
          cors: true

resources:
  Resources:
    UploadBucket:
      Type: "AWS::S3::Bucket"
      Properties:
        BucketName: ${self:custom.bucket}

The YAML file has a custom object, and the name of the bucket is defined there. You should choose a different bucket name, as you won’t be able to use the same name I have used unless I delete the bucket. According to the AWS documentation, an “Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts.” This means that once a bucket name has been taken by a user in any AWS account, in any AWS region, it cannot be used again until that bucket is deleted.

You will also see that we set the stackName to imageUploader. A stack is a collection of AWS resources that you manage as a single unit. The iamRoleStatements section is defined globally on the provider. A Lambda function needs permission to access AWS resources; in our case, we need permission to write to the S3 bucket. This permission is given in the IAM role statements.

Below the UploadImage Lambda function, a new object named environment has been added. It is used to set environment variables, which we can read from the process.env object during execution. Note the handler’s name here.

We conclude by defining the S3 bucket resource for storing the images.

Adding npm packages

Don’t start from scratch. Use your favorite npm packages in Lambda apps; they will be packaged with your functions on deployment.

Use the uuid package to generate unique names for the images and jimp for manipulating the uploaded images. Create a package.json file:

npm init

Answer the questions to get started. 

npm install jimp uuid

Update the handler function and rename the file to uploadImage.js. It’s a good convention to name your function after its functionality.

// filename: uploadImage.js

"use strict";

const AWS = require("aws-sdk");
const uuid = require("uuid/v4");
const Jimp = require("jimp");
const s3 = new AWS.S3();
const width = 200;
const height = 200;
const imageType = "image/png";
const bucket = process.env.Bucket;

module.exports.handler = (event, context, callback) => {
    let requestBody = JSON.parse(event.body);
    let photoUrl = requestBody.photoUrl;
    let objectId = uuid();
    let objectKey = `resize-${width}x${height}-${objectId}.png`;

    fetchImage(photoUrl)
        .then(image => image.resize(width, height)
            .getBufferAsync(imageType))
        .then(resizedBuffer => uploadToS3(resizedBuffer, objectKey))
        .then(function(response) {
            console.log(`Image ${objectKey} was uploaded and resized`);
            callback(null, {
                statusCode: 200,
                body: JSON.stringify(response)
            });
        })
        .catch(error => console.log(error));
};

/**
 * Uploads a buffer to the S3 bucket
 * @param {*} data
 * @param {string} key
 * @returns {Promise}
 */
function uploadToS3(data, key) {
    return s3
        .putObject({
            Bucket: bucket,
            Key: key,
            Body: data,
            ContentType: imageType
        })
        .promise();
}

/**
 * Fetches an image from a URL
 * @param {string} url
 * @returns {Promise}
 */
function fetchImage(url) {
    return Jimp.read(url);
}
In uploadImage.js, we use the fetchImage method to get the image from the URL. You can read more about how the jimp package works in its readme file.

After the image has been rescaled, it is stored in the S3 bucket with the help of the putObject method from the AWS SDK.

How to log in AWS Lambda functions

Logging gives clarity about how applications run in production, which saves time when troubleshooting a problem. There are various log-aggregation services, such as Retrace, and AWS CloudWatch and Lambda also work well together.

AWS Lambda monitors functions on your behalf and shares metrics in a report through Amazon CloudWatch. The metrics include total requests, duration and error rates. In addition to logging and monitoring, you can also log an event with console.log from your code.

console.log('An error occurred')
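As a small sketch, emitting structured JSON instead of bare strings makes log entries easier to filter later in CloudWatch Logs. The logEvent helper and the field names below are made up for illustration:

```javascript
// Hypothetical helper: log structured JSON lines instead of bare strings
function logEvent(level, message, meta) {
  const entry = Object.assign({ level: level, message: message }, meta);
  console.log(JSON.stringify(entry));
  return entry;
}

logEvent('info', 'Image processed', { objectKey: 'resize-200x200-example.png' });
logEvent('error', 'Download failed', { photoUrl: 'https://example.com/cat.png' });
```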

In the handler function (uploadImage.js), we log to AWS CloudWatch when an image is successfully processed and when an error occurs.

Deploying and testing

Deploy the existing or a new app with this Serverless deploy command:

serverless deploy

This is the output you will get. Note the endpoint again.

  POST -
  UploadImage: ImageUploaderService-dev-UploadImage

Make a curl request to the endpoint so that the image is downloaded from the URL, rescaled, and stored in the S3 bucket. Don’t forget to change the POST endpoint to the one in your console.

curl -H "Content-type: application/json" -d '{"photoUrl":""}' ''

Check the logs in CloudWatch and images in S3 bucket.


You learned what AWS is and how to set up an AWS account with access keys. You also learned how to build a hello world app using Lambda and Node.js running in the cloud. Lastly, you learned how to create a photo-processing app using the Serverless framework. That’s a lot to cover for a beginner!

Move on and build your knowledge from there about the Serverless framework and how to test the Lambda functions. Check out “Serverless Local Development” by Gareth McCumskey, a serverless and web developer.

These resources provide a great learning path to understanding AWS Lambda with Node.js.

Remote Python Developer: Improve Your Python Development Skills

Models are a core concept of the Django framework. According to Django’s design philosophies for models, we should be as explicit as possible with the naming and functionality of our fields, and ensure that we’re including all relevant functionality related to our model in the model itself, rather than in the views or somewhere else. If you’ve worked with Ruby on Rails before, these design philosophies won’t seem new as both Rails and Django implement the Active Record pattern for their object-relational mapping (ORM) systems to handle stored data.

In this post we’ll look at some ways to leverage these philosophies, core Django features, and even some libraries to help make our models better.

getter/setter/deleter properties

As a feature of Python since version 2.2, a property’s usage looks like an attribute but is actually a method. While using a property on a model isn’t that advanced, we can use some underutilized features of the Python property to make our models more powerful.

If you’re using Django’s built-in authentication or have customized your authentication using AbstractBaseUser, you’re probably familiar with the last_login field defined on the User model, which is a saved timestamp of the user’s last login to your application. If we want to use last_login, but also have a field named last_seen saved to a cache more frequently, we could do so pretty easily.

First, we’ll make a Python property that finds a value in the cache, and if it can’t, it returns the value from the database.
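A minimal plain-Python sketch of that idea, with a dict standing in for Django’s cache framework (a real model would extend AbstractBaseUser and use django.core.cache; the names here are stand-ins):

```python
# Sketch: a read-through property. The `cache` dict stands in for
# Django's cache framework; `last_login` stands in for the model field.
cache = {}

class User:
    def __init__(self, username, last_login):
        self.username = username
        self.last_login = last_login

    @property
    def last_seen(self):
        # Try the cache first; fall back to the persisted last_login value
        return cache.get(f"last_seen_{self.username}", self.last_login)
```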


Note: I’ve slimmed the model down a bit as there’s a separate tutorial on this blog about specifically customizing the built-in Django user model.

The property above checks our cache for the user’s last_seen value, and if it doesn’t find anything, it will return the user’s stored last_login value from the model. Referencing <instance>.last_seen now provides a much more customizable attribute on our model behind a very simple interface.

We can expand this to include custom behavior when a value is assigned to our property (some_user.last_seen = some_date_time), or when a value is deleted from the property (del some_user.last_seen).
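Sketched in the same plain-Python style (dict-backed cache, hypothetical names), the setter and deleter round out the property:

```python
# Sketch: full getter/setter/deleter; the dict stands in for Django's cache
cache = {}

class User:
    def __init__(self, username, last_login):
        self.username = username
        self.last_login = last_login

    @property
    def last_seen(self):
        return cache.get(f"last_seen_{self.username}", self.last_login)

    @last_seen.setter
    def last_seen(self, value):
        # Assignment writes through to the cache
        cache[f"last_seen_{self.username}"] = value

    @last_seen.deleter
    def last_seen(self):
        # `del user.last_seen` evicts the cached value
        cache.pop(f"last_seen_{self.username}", None)
```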

Now, whenever a value is assigned to our last_seen property, we save it to the cache, and when a value is removed with del, we remove it from the cache. Using setter and deleter is described in the Python documentation but is rarely seen in the wild in Django models. You may have a use case like this one, where you want to store something that doesn’t necessarily need to be persisted to a traditional database, or, for performance reasons, shouldn’t be. A custom property like the one in the example above is a great solution.

In a similar use case, the python-social-auth library, a tool for managing user authentication using third-party platforms like GitHub and Twitter, will create and manage updating information in your database based on information from the platform the user logged in with. In some cases, the information returned won’t match the fields in our database. For example, the python-social-auth library will pass a fullname keyword argument when creating the user. If, perhaps, we used full_name as our attribute name in our database, then we might be in a pinch.

A simple way around this is by using the getter/setter pattern from above:
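Sketched in plain Python (in the real model, full_name would be a Django CharField):

```python
# Sketch: intercept `fullname` and store it in the real `full_name` field
class User:
    def __init__(self):
        self.full_name = ""

    @property
    def fullname(self):
        return self.full_name

    @fullname.setter
    def fullname(self, value):
        self.full_name = value
```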

Now, when python-social-auth saves a user’s fullname to our model (new_user.fullname = 'Some User'), we’ll intercept it and save it to our database field, full_name, instead.

Through model relationships

Django’s many-to-many relationships are a great way of handling complex object relationships simply, but they don’t afford us the ability to add custom attributes to the intermediate models they create. By default, this simply includes an identifier and two foreign key references to join the objects together.

Using the Django ManyToManyField through parameter, we can create this intermediate model ourselves and add any additional fields we deem necessary.

If our application, for example, not only needed users to have memberships within groups, but wanted to track when that membership started, we could use a custom intermediate model to do so.
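The example code is missing here; a sketch of what it plausibly looked like, as a fragment of a Django app (only Membership, the through argument, the joined field, and the UUID primary keys are confirmed by the surrounding text; the other names are illustrative):

```python
import uuid
from django.db import models


class Group(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    name = models.CharField(max_length=100)
    # The many-to-many relationship, routed through our own model
    members = models.ManyToManyField('User', through='Membership')


class User(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    name = models.CharField(max_length=100)


class Membership(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    user = models.ForeignKey('User', on_delete=models.CASCADE)
    group = models.ForeignKey('Group', on_delete=models.CASCADE)
    joined = models.DateTimeField(auto_now_add=True)  # when membership started
```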


In the example above, we're still using a ManyToManyField to handle the relationship between a user and a group, but by passing the Membership model with the through keyword argument, we can now add our custom joined attribute to track when the group membership started. This through model is a standard Django model; it just requires a primary key (we use UUIDs here) and two foreign keys to join the objects together.

Using the same three model pattern, we could create a simple subscription database for our site:
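A sketch under the same three-model pattern, again as a Django app fragment (all names here are illustrative; only the subscribed/updated/canceled behavior is described by the text below):

```python
import uuid
from django.db import models


class Plan(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    name = models.CharField(max_length=100)
    subscribers = models.ManyToManyField('User', through='Subscription')


class User(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    name = models.CharField(max_length=100)


class Subscription(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    user = models.ForeignKey('User', on_delete=models.CASCADE)
    plan = models.ForeignKey('Plan', on_delete=models.CASCADE)
    subscribed = models.DateTimeField(auto_now_add=True)    # first subscribed
    updated = models.DateTimeField(auto_now=True)           # last changed
    canceled = models.DateTimeField(null=True, blank=True)  # set on cancel
```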

Here we’re able to track when a user first subscribed, when they updated their subscription, and if we added the code paths for it, when a user canceled their subscription to our application.

Using through models with the ManyToManyField is a great way to add more data to our intermediate models and provide a more thorough experience for our users without much added work.

Proxy models

Normally in Django, when you subclass a model (this doesn’t include abstract models) into a new class, the framework will create new database tables for that class and link them (via OneToOneField) to the parent database tables. Django calls this “multi-table inheritance” and it’s a great way to re-use existing model fields and structures and add your own data to them. “Don’t repeat yourself,” as the Django design philosophies state.

Multi-table inheritance example:
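The code block is missing; given the table names mentioned below, it presumably resembled the following Django app fragment (the field names are made up for illustration; the tables get the vehicles_ prefix because the app is assumed to be named vehicles):

```python
from django.db import models


class Vehicle(models.Model):
    """Parent model, backed by the vehicles_vehicle table."""
    price = models.IntegerField()
    wheels = models.IntegerField()


class Airplane(Vehicle):
    """Subclass. Django creates vehicles_airplane with an implicit
    OneToOneField back to vehicles_vehicle."""
    wingspan = models.IntegerField()
```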

This example would create both the vehicles_vehicle and vehicles_airplane database tables, linked with foreign keys. This allows us to leverage the existing data that lives inside vehicles_vehicle, while adding our own vehicle-specific attributes to each subclass (vehicles_airplane, in this case).

In some use cases, we may not need to store extra data at all. Instead, we could change some of the parent model's behavior, maybe by adding a method, property, or model manager. This is where proxy models shine. Proxy models allow us to change the Python behavior of a model without changing the database.
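The example block is missing here. A sketch consistent with the description below (a Honda proxy of a Car model with a custom manager filtering on model='Honda', plus a property and a method stub; the Car fields and stub bodies are assumptions):

```python
from django.db import models


class Car(models.Model):
    """Concrete parent model, backed by the vehicles_car table."""
    make = models.CharField(max_length=50)
    model = models.CharField(max_length=50)
    year = models.IntegerField()


class HondaManager(models.Manager):
    def get_queryset(self):
        # Only return Car rows where model='Honda'.
        return super().get_queryset().filter(model='Honda')


class Honda(Car):
    class Meta:
        proxy = True  # same table as Car, different Python behavior

    objects = HondaManager()

    @property
    def display_name(self):
        return '%s %s' % (self.model, self.year)

    def is_vintage(self):
        """Method stub: custom behavior without touching the database."""
        return self.year < 1990
```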


Proxy models are declared just like normal models. In our example, we tell Django that Honda is a proxy model by setting the proxy attribute of the Honda Meta class to True. I've added a property and a method stub as examples, and you can see we've also added a custom model manager to our Honda proxy model.

This ensures that whenever we request objects from the database using our Honda model, we get only Car instances back where model='Honda'. Proxy models make it easy to quickly add customization on top of existing models using the same data. If we were to delete, create, or update any Car instance using our Honda model or manager, it would be saved to the vehicles_car table just as if we were using the parent (Car) class.

Wrap up

If you’re already comfortable working in Python classes, then you’ll feel right at home with Django’s models: inheritance, multiple inheritance, method overrides, and introspection. These models are all part of how the Django object-relational mapper was designed.

Multi-table inheritance and manually defining intermediate tables for SQL joins aren’t necessarily basic concepts, but are implemented simply with a bit of Django and Python knowhow. Being able to leverage features of the language and framework alongside one another is one of the reasons Django is a popular web framework.

For further reading, check out Django’s documentation topic for models.

How To Structure Large Flask Applications Step by Step


There are many methods and conventions for structuring Python web applications. Although certain frameworks ship with tools (for scaffolding) to automate – and ease – the task (and the headaches), almost all solutions rely on packaging / modularizing applications as the code base gets distributed logically across related files and folders.

The minimalist web application development framework Flask has its own: blueprints.

Here, we are going to see how to create an application directory and structure it to work with re-usable components created with Flask's blueprints. These greatly ease the maintenance and development of application components.


1. Flask: The Minimalist Application Development Framework

2. Our Choices In This Article

3. Preparing The System For Flask

  • Prepare The Operating System
  • Setting up Python, pip and virtualenv

4. Structuring The Application Directory

  • Creating Application Folders
  • Creating A Virtual Environment
  • Creating Application Files
  • Installing Flask

5. Working With Modules And Blueprints (Components)

  • Module Basics
  • Module Templates

6. Creating The Application (run.py, config.py, etc.)

  • Edit “run.py” using nano
  • Edit “config.py” using nano

7. Creating A Module / Component

  • Step 1: Structuring The Module
  • Step 2: Define The Module Data Model(s)
  • Step 3: Define Module Forms
  • Step 4: Define Application Controllers (Views)
  • Step 5: Set Up The Application in “app/__init__.py”
  • Step 6: Create The Templates
  • Step 7: See Your Module In Action

Flask: The Minimalist Application Development Framework

Flask is a minimalist (or micro) framework which refrains from imposing the way critical things are handled. Instead, Flask allows the developers to use the tools they desire and are familiar with. For this purpose, it comes with its own extensions index and a good amount of tools already exist to handle pretty much everything from log-ins to logging.

It is not a strictly “conventional” framework and relies partially on configuration files, which frankly make many things easier when it comes to getting started and keeping things in check.

Our Choices In This Article

As we went over in the previous section, the Flask way of doing things involves using the tools you are most comfortable with. In this article, we will be using perhaps the most common (and sensible) choices of extensions and libraries (e.g. the database abstraction layer). These choices will involve:

  • SQLAlchemy (via Flask-SQLAlchemy)
  • WTForms (via Flask-WTF)


Adds SQLAlchemy support to Flask. Quick and easy.

This is an approved extension.


Flask-WTF offers simple integration with WTForms. This integration includes optional CSRF handling for greater security.

This is an approved extension.

Preparing The System For Flask

Before we begin structuring a large Flask application, let’s prepare our system and download (and install) Flask distribution.

Note: We will be working on a freshly instantiated droplet running the latest version of an available operating system (e.g. Ubuntu 13). You are highly advised to test everything on a new system as well – especially if you are actively serving clients.

Prepare The Operating System

In order to have a stable server, we must have all relevant tools and libraries up-to-date and well maintained.

To ensure that we have the latest available versions of default applications, let’s begin with updates.

Run the following for Debian Based Systems (i.e. Ubuntu, Debian):
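The commands themselves are missing from the extracted text; on a Debian-based system they were presumably along these lines (these require root privileges and network access):

```
# Refresh the package index and upgrade installed packages
sudo apt-get update
sudo apt-get -y upgrade
```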

To get the necessary development tools, install “build-essential” using the following command:
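Assuming the standard package name, that would be:

```
sudo apt-get install -y build-essential
```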

Setting up Python, pip and virtualenv

On Ubuntu and Debian, a recent version of the Python interpreter comes by default. That leaves us with only a limited number of additional packages to install:

  • python-dev (development tools)
  • pip (to manage packages)
  • virtualenv (to create isolated, virtual environments)

Note: Instructions given here are kept brief. To learn more, check out our how-to article on pip and virtualenv: Common Python Tools: Using virtualenv, Installing with Pip, and Managing Packages.


pip is a package manager which will help us to install the application packages that we need.

Run the following commands to install pip:
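The original commands did not survive extraction; installing pip through the system package manager would look something like this (package names assume the Debian-era Python 2 system the tutorial targets):

```
sudo apt-get install -y python-dev python-pip
```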


It is best to contain a Python application within its own environment together with all of its dependencies. An environment can be best described (in simple terms) as an isolated location (a directory) where everything resides. For this purpose, a tool called virtualenv is used.

Run the following to install virtualenv using pip:
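Presumably:

```
sudo pip install virtualenv
```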

Structuring The Application Directory

We will use the exemplary name of LargeApp as our application folder. Inside, we are going to have a virtual environment (i.e. env) alongside the application package (i.e. app) and some other files, such as “run.py” for running a test (development) server and “config.py” for keeping the Flask configurations.

The structure – which is given as an example below – is highly extensible and it is built to make use of all helpful tools Flask and other libraries offer. Do not be afraid when you see it, as we explain everything step by step by constructing it all.

Target example structure:

Creating Application Folders

Let’s start with creating the main folders we need.

Run the following commands successively to perform the task:
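The folder-creation commands are missing; based on the structure described above (an app package with templates and static folders inside LargeApp), they presumably looked like this. Paths are shown relative to wherever you keep your projects; the original likely used the home directory (~/LargeApp):

```shell
# Create the application folder, the app package, and its asset folders
mkdir -p LargeApp/app/templates
mkdir -p LargeApp/app/static
```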

Our current structure:

Creating A Virtual Environment

Using a virtual environment brings a ton of benefits. You are highly advised to use a new virtual environment for each one of your applications. Keeping the virtualenv folder inside your application's folder is a good way of keeping things in order and tidy.

Run the following to create a new virtual environment with pip installed.
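Assuming the environment folder is called env, as described earlier (virtualenv ships with pip pre-installed):

```
virtualenv LargeApp/env
```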

Creating Application Files

In this step, we will form the basic application files before moving on to working with modules and blueprints.

Run the following to create basic application files:
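The commands are missing here. The file names are assumptions recovered from context: a run script (run.py), a configuration file (config.py), and the app package's __init__.py:

```shell
# Create empty placeholder files (mkdir -p in case the folders don't exist yet)
mkdir -p LargeApp/app
touch LargeApp/run.py
touch LargeApp/config.py
touch LargeApp/app/__init__.py
```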

Our current structure:

Installing Flask And Application Dependencies

Once we have everything in place, to begin our development with Flask, let’s download and install it using pip.

Run the following to install Flask inside the virtual environment env.
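Using the environment's own pip, as the note below describes, the command was presumably:

```
LargeApp/env/bin/pip install flask
```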

Note: Here we are downloading and installing Flask without activating the virtual environment. However, given that we are using the pip from the environment itself, it achieves the same task. If you are working with an activated environment, you can just use pip instead.

And that’s it! We are now ready to build a larger Flask application modularized using blueprints.

Working With Modules And Blueprints (Components)

Module Basics

At this point, we have both our application structure set up and its dependencies downloaded and ready.

Our goal is to modularize (i.e. create re-usable components with Flask’s blueprints) all related modules that can be logically grouped.

An example for this can be an authentication system. Having all its views, controllers, models and helpers in one place, set up in a way that allows reusability makes this kind of structuring a great way for maintaining applications whilst increasing productivity.

Target example module (component) structure (inside /app):

# Our module example here is called *mod_auth*
# You can name them as you like as long as conventions are followed

    mod_auth/
    |-- __init__.py
    |-- controllers.py
    |-- models.py
    |-- forms.py

Module Templates

To support modularizing to-the-max, we will structure the “templates” folder to follow the above convention and contain a new folder – with the same or a similar, related name as the module – to contain its template files.

Target example templates directory structure (inside LargeApp):

Creating The Application

In this section, we will continue on the previous steps and start with actual coding of our application before moving onto creating our first modularized component (using blueprints): mod_auth for handling all authentication related procedures (i.e. signing-in, signing-up, etc).

Edit “run.py” using nano

Place the contents:
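The contents didn't survive extraction. Given that Step 7 starts a development server on port 8080 and the app package defines the Flask object, the run script was presumably close to:

```python
# run.py (assumed name) -- start a development server on port 8080
from app import app

app.run(host='0.0.0.0', port=8080)
```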

Save and exit using CTRL+X and confirm with Y.

Edit “config.py” using nano

Place the contents:

# Statement for enabling the development environment
DEBUG = True

# Define the application directory
import os
BASE_DIR = os.path.abspath(os.path.dirname(__file__))  

# Define the database - we are working with
# SQLite for this example
SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(BASE_DIR, 'app.db')

# Application threads. A common general assumption is
# using 2 per available processor core - to handle
# incoming requests using one and performing background
# operations using the other.
THREADS_PER_PAGE = 2

# Enable protection against *Cross-site Request Forgery (CSRF)*
CSRF_ENABLED     = True

# Use a secure, unique and absolutely secret key for
# signing the data.
CSRF_SESSION_KEY = "secret"

# Secret key for signing cookies
SECRET_KEY = "secret"

Save and exit using CTRL+X and confirm with Y.

Creating A Module / Component

This section is the first major step that defines the core of this article. Here, we will see how to use Flask’s blueprints to create a module (i.e. a component).

What's brilliant about this is the portability and reusability it offers your code, combined with ease of maintenance – for which you will be thankful in the future, as it is often quite a struggle to come back and understand things as they were left.

Step 1: Structuring The Module

As we have set out to do, let us create our first module’s (mod_auth) directories and files to start working on them.
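The commands are missing from the extracted text; based on the files referenced later in this step (controllers.py, models.py, forms.py, and an auth/signin.html template), they presumably looked like this:

```shell
# Create the module package and its template folder
mkdir -p LargeApp/app/mod_auth
mkdir -p LargeApp/app/templates/auth

# Create the module files
touch LargeApp/app/mod_auth/__init__.py
touch LargeApp/app/mod_auth/controllers.py
touch LargeApp/app/mod_auth/models.py
touch LargeApp/app/mod_auth/forms.py
touch LargeApp/app/templates/auth/signin.html
```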

After these operations, this is what the folder structure should look like:

Step 2: Define The Module Data Model(s)

Place the below self-explanatory – exemplary – contents:

# Import the database object (db) from the main application module
# We will define this inside /app/__init__.py in the next sections.
from app import db

# Define a base model for other database tables to inherit
class Base(db.Model):

    __abstract__  = True

    id            = db.Column(db.Integer, primary_key=True)
    date_created  = db.Column(db.DateTime,  default=db.func.current_timestamp())
    date_modified = db.Column(db.DateTime,  default=db.func.current_timestamp(),
                                            onupdate=db.func.current_timestamp())

# Define a User model
class User(Base):

    __tablename__ = 'auth_user'

    # User Name
    name    = db.Column(db.String(128),  nullable=False)

    # Identification Data: email & password
    email    = db.Column(db.String(128),  nullable=False, unique=True)
    password = db.Column(db.String(192),  nullable=False)

    # Authorisation Data: role & status
    role     = db.Column(db.SmallInteger, nullable=False)
    status   = db.Column(db.SmallInteger, nullable=False)

    # New instance instantiation procedure
    def __init__(self, name, email, password):
        self.name     = name
        self.email    = email
        self.password = password

    def __repr__(self):
        return '<User %r>' % (self.name)

Save and exit using CTRL+X and confirm with Y.

Step 3: Define Module Forms

Place the below self-explanatory – exemplary – contents:
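The forms module didn't survive extraction. A plausible reconstruction, using today's flask_wtf/wtforms imports (the original likely used the older flask.ext.wtf style), with the field names the controller expects (form.email and form.password):

```python
# app/mod_auth/forms.py -- reconstructed sketch, not the original file
from flask_wtf import FlaskForm
from wtforms import StringField, PasswordField
from wtforms.validators import DataRequired, Email


class LoginForm(FlaskForm):
    email = StringField('Email Address', [
        Email(), DataRequired(message='Forgot your email address?')])
    password = PasswordField('Password', [
        DataRequired(message='Must provide a password.')])
```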

Save and exit using CTRL+X and confirm with Y.

Step 4: Define Application Controllers (Views)

Place the below self-explanatory – exemplary – contents:

# Import flask dependencies
from flask import Blueprint, request, render_template, \
                  flash, g, session, redirect, url_for

# Import password / encryption helper tools
from werkzeug.security import check_password_hash, generate_password_hash

# Import the database object from the main app module
from app import db

# Import module forms
from app.mod_auth.forms import LoginForm

# Import module models (i.e. User)
from app.mod_auth.models import User

# Define the blueprint: 'auth', set its url prefix: app.url/auth
mod_auth = Blueprint('auth', __name__, url_prefix='/auth')

# Set the route and accepted methods
@mod_auth.route('/signin/', methods=['GET', 'POST'])
def signin():

    # If sign in form is submitted
    form = LoginForm(request.form)

    # Verify the sign in form
    if form.validate_on_submit():

        user = User.query.filter_by(email=form.email.data).first()

        if user and check_password_hash(user.password, form.password.data):

            session['user_id'] = user.id

            flash('Welcome %s' % user.name)

            return redirect(url_for('auth.home'))

        flash('Wrong email or password', 'error-message')

    return render_template("auth/signin.html", form=form)

Save and exit using CTRL+X and confirm with Y.

Step 5: Set Up The Application in “app/__init__.py”

Place the contents:

# Import flask and template operators
from flask import Flask, render_template

# Import SQLAlchemy
from flask_sqlalchemy import SQLAlchemy

# Define the WSGI application object
app = Flask(__name__)

# Configurations
app.config.from_object('config')

# Define the database object which is imported
# by modules and controllers
db = SQLAlchemy(app)

# Sample HTTP error handling
@app.errorhandler(404)
def not_found(error):
    return render_template('404.html'), 404

# Import a module / component using its blueprint handler variable (mod_auth)
from app.mod_auth.controllers import mod_auth as auth_module

# Register blueprint(s)
app.register_blueprint(auth_module)
# app.register_blueprint(xyz_module)
# ..

# Build the database:
# This will create the database file using SQLAlchemy
db.create_all()
Save and exit using CTRL+X and confirm with Y.

Step 6: Create The Templates

Place the contents:

{% macro render_field(field, placeholder=None) %}
{% if field.errors %}
    {% set css_class = 'form-control has-error ' + kwargs.pop('class', '') %}
{% elif field.flags.error %}
    {% set css_class = 'form-control has-error ' + kwargs.pop('class', '') %}
{% else %}
    {% set css_class = 'form-control ' + kwargs.pop('class', '') %}
{% endif %}
    {{ field(class=css_class, placeholder=placeholder, **kwargs) }}
{% endmacro %}

    <legend>Sign in</legend>
    {% with errors = get_flashed_messages(category_filter=["error"]) %}
    {% if errors %}
    {% for error in errors %}
    {{ error }}<br>
    {% endfor %}
    {% endif %}
    {% endwith %}

    {% if form.errors %}
    {% for field, error in form.errors.items() %}
    {% for e in error %}
    {{ e }}<br>
    {% endfor %}
    {% endfor %}
    {% endif %}
    <form method="POST" action="." accept-charset="UTF-8" role="form">
      {{ form.csrf_token }}
      {{ render_field(form.email, placeholder="Your Email Address",
                                  autofocus="") }}
      {{ render_field(form.password, placeholder="Password") }}
        <input type="checkbox" name="remember" value="1"> Remember Me
      <a role="button" href="">Forgot your password?</a><span class="clearfix"></span>
      <button type="submit" name="submit">Sign in</button>
    </form>

Save and exit using CTRL+X and confirm with Y.

Step 7: See Your Module In Action

After having created our first module, it is time to see everything in action.

Run a development server using the run.py script:

This will initiate a development (i.e. testing) server hosted at port 8080.

Visit the module by going to the URL: http://[your droplet's IP]:8080/auth/signin/

Although you will not be able to log in, you can see the module in action by entering some exemplary data or by testing its validators.

Analysis and metrics collection: Kubernetes observability tutorial

This is the second post in our Kubernetes observability tutorial series, where we explore how to monitor all aspects of your applications running in Kubernetes, including the following:

We will use Elastic Observability to analyze container metrics in Kibana, using the Metrics app and out-of-the-box dashboards.

Collection of metrics from Kubernetes

As with collecting logs from Kubernetes, collecting metrics can be a challenge for a few reasons:

  1. Kubernetes runs on multiple hosts, and each of them must be monitored with metrics such as CPU, memory, disk utilisation, and disk and network I/O.
  2. The containers themselves, which behave like miniature VMs, also produce their own set of metrics.
  3. Application servers and databases each have their own dedicated reporting methods.

Monitoring a Kubernetes deployment becomes complex when organizations use many different technologies to handle metrics. We can use Elastic Observability, which combines metrics with logs and APM data for analysis and visibility.

Collection of K8s metrics with Metricbeat

Metricbeat is similar to Filebeat, and it is the only component we need to collect various metrics from the pods running in our Kubernetes cluster, as well as Kubernetes' own cluster metrics. Its modules give a quick and easy way to pick up metrics from various sources and ship them to Elasticsearch as ECS-compatible events, which can then be correlated with logs, uptime, and APM data. Metricbeat is deployed on Kubernetes in the following two ways:

  • As a single pod for collecting Kubernetes cluster metrics. It uses kube-state-metrics to collect cluster-level metrics.
  • As a DaemonSet, deploying one Metricbeat instance per host to collect metrics from the pods deployed on that host. Metricbeat communicates with the kubelet APIs to discover the components running on that host, and uses autodetection to further interrogate them and collect technology-specific metrics.

Before you get started: The following tutorial relies on a Kubernetes environment being set up. We've created a supplementary blog that walks you through the process of setting up a single-node Minikube environment with a demo application for running the rest of the activities.

Collection of host, Docker, and Kubernetes metrics

Each DaemonSet instance collects host, Docker, and Kubernetes metrics, defined in the following way in the YAML config $HOME/k8s-o11y-workshop/metricbeat/metricbeat.yml:

System (host) metric configuration

system.yml: |-
  - module: system
    period: 10s
    metricsets:
      - cpu
      - load
      - memory
      - network
      - process
      - process_summary
      - core
      - diskio
      # - socket
    processes: ['.*']
    process.include_top_n:
      by_cpu: 5      # include top 5 processes by CPU
      by_memory: 5   # include top 5 processes by memory
  - module: system
    period: 1m
    metricsets:
      - filesystem
      - fsstat
    processors:
    - drop_event.when.regexp:
        system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'

Docker metric configuration

docker.yml: |-
  - module: docker
    metricsets:
      - "container"
      - "cpu"
      - "diskio"
      - "event"
      - "healthcheck"
      - "info"
      # - "image"
      - "memory"
      - "network"
    hosts: ["unix:///var/run/docker.sock"]
    period: 10s
    enabled: true

Kubernetes metric configuration

This collects metrics from the pods deployed onto the host by communicating with the kubelet API:

kubernetes.yml: |-
  - module: kubernetes
    metricsets:
      - node
      - system
      - pod
      - container
      - volume
    period: 10s
    host: ${NODE_NAME}
    hosts: ["localhost:10255"]
  - module: kubernetes
    metricsets:
      - proxy
    period: 10s
    host: ${NODE_NAME}
    hosts: ["localhost:10249"]

Check out the Metricbeat documentation for more information about Metricbeat modules and the data behind the metricsets.

Collection of Kubernetes state metrics and events

A single Metricbeat instance is deployed to talk to the kube-state-metrics API and monitor changes in the state of Kubernetes objects. The following defines the config:


kubernetes.yml: |-
  - module: kubernetes
    metricsets:
      - state_node
      - state_deployment
      - state_replicaset
      - state_pod
      - state_container
      # Uncomment this to get k8s events:
      # - event
    period: 10s
    host: ${NODE_NAME}
    hosts: ["kube-state-metrics:8080"]

Metricbeat autodiscovery using pod annotations

The Metricbeat DaemonSet deployment can autodetect the components running in pods and apply the appropriate Metricbeat modules to collect technology-specific metrics. Pod annotations are used to enable autodiscovery and to indicate module-specific configurations. This section of the Metricbeat config enables Kubernetes-based autodiscovery:


metricbeat.autodiscover:
  providers:
    - type: kubernetes
      host: ${NODE_NAME}
      hints.enabled: true

There are two components using autodiscovery:

  • NGINX definition $HOME/k8s-o11y-workshop/nginx/nginx.yml
      app: nginx
      co.elastic.metrics/module: nginx
      co.elastic.metrics/hosts: '${data.host}:${data.port}'
  • MySQL definition $HOME/k8s-o11y-workshop/mysql/mysql.yml
      app: mysql
      co.elastic.metrics/module: mysql
      co.elastic.metrics/hosts: 'root:<password>@tcp(${data.host}:${data.port})/'

See the Metricbeat documentation for more information.

Collection of application metrics, Prometheus-style

The Spring Boot petclinic application exposes a range of application-specific metrics in a form that can be scraped by Prometheus. You can navigate to the application HTTP endpoint at http://<public-ip>:30080/metrics/prometheus to see how the metrics are reported. We will be using Metricbeat to collect these metrics, with Elastic components serving all of our observability needs.

Here is an example of what our application reports:

These configuration hints in the petclinic YAML deployment config tell Metricbeat to collect the metrics using the Prometheus module:


      app: petclinic
      co.elastic.metrics/module: prometheus
      co.elastic.metrics/hosts: '${}:${data.port}'
      co.elastic.metrics/metrics_path: '/metrics/prometheus'
      co.elastic.metrics/period: 1m

Generally, Metricbeat can complement or replace a Prometheus server. If a Prometheus server is already deployed, Metricbeat can ship the metrics out of the server using the Prometheus Federation API, providing visibility across multiple Prometheus servers, Kubernetes namespaces, and clusters, and enabling correlation of Prometheus metrics with logs, APM, and uptime events. If a simplified monitoring architecture is preferred, Metricbeat can collect the Prometheus metrics and send them straight into Elasticsearch.

Enrichment of Metadata

All the events collected by Metricbeat are enriched by the following processors:


processors:
  - add_cloud_metadata:
  - add_host_metadata:
  - add_kubernetes_metadata:
  - add_docker_metadata:

This helps correlate the metrics with the hosts, Kubernetes pods, Docker containers, and cloud-provider infrastructure metadata, and with other pieces of the observability puzzle, such as application performance monitoring data and logs.

Metrics in Kibana

With this Metricbeat configuration in place, we get the following views in the Metrics app. Kibana's search bar makes it easy to filter down to the things we are looking for, and to zoom in on them. Since we have used only one host, the views below show a single host:

Host infrastructure metrics

Docker infra and metrics (table view)

Kubernetes infra and metrics

Metrics explorer

Out-of-the-box Kibana dashboards

Metricbeat ships with many pre-built Kibana dashboards that can easily be added to your cluster with a single command. We can use these dashboards as they are, or as starting points for custom dashboards tailored to our needs. See the dashboards below:







In this article, we looked at collecting application and Kubernetes metrics with Metricbeat, so we can start monitoring our systems and infrastructure today. Sign up for a free trial of Elasticsearch Service on Elastic Cloud, or download the Elastic Stack and host it yourself.

Once you have it all up and running, check the availability of your hosts with uptime monitoring, and the applications running on your hosts with Elastic APM. You will have an observable system, completely integrated with your new metrics cluster. If you run into any difficulty or have questions, jump over to our comments section; we're here to help.

Challenges and Best Practices of Docker Container Security

In recent years, the massive adoption of Docker has made security an important consideration for firms that use containers for development and production. Containers are more complex than virtual machines or other deployment technologies, and the process of securing Docker containers is correspondingly complex.

We will take a look at Docker container security and explain why securing containers is complex. We will discuss default environments for better security, and practices for monitoring containers for security.

Following is the complete guide for container security:

Challenges of Docker container security:

Before Docker, many organisations used virtual machines or bare-metal servers to host applications. From a security perspective, these technologies are quite simple: you need to focus on just two layers when hardening your deployment and monitoring for security-relevant events. Since APIs, overlay networks, and complex software-defined storage configurations are not a major part of virtual machine or bare-metal deployments, you do not have to worry about them.

A typical Docker environment has many more moving parts, and its security is therefore much more complicated. Those moving parts include:

  • You probably have multiple Docker container images, with each container hosting an individual microservice. You probably also have multiple instances of each image running at a time. Each of those images and instances needs to be secured and monitored properly.
  • The Docker daemon needs to be secured in order to keep the containers it hosts, and the host itself, safe.
  • The host server, which might be bare metal or a virtual machine.
  • If you use a service like ECS to host your containers, that is another layer to secure.
  • APIs and overlay networks facilitate communication between containers.
  • Data volumes and other storage systems that exist externally from your containers.

And if you are thinking that learning to secure Docker sounds tough: you're right. Docker security is undoubtedly much more complex than the security of the technologies it replaces.

Best practices of Docker container security:

Luckily, these challenges can be overcome. This article is not an exhaustive guide to Docker security, but you can use the official Docker documentation as a reference. Below are some best practices:

#1 Set resource quotas

One easy thing to configure in Docker is resource quotas. Resource quotas let us limit the amount of memory and CPU resources a container can consume.

This is helpful for many reasons. It keeps the Docker environment efficient and prevents one container from starving other containers of system resources. It also increases security, because a container that cannot consume excessive space or resources is limited in the harm it can do.

Resource quotas are easily set using command-line flags. View this Docker documentation.
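As an illustration (the image name and limits here are arbitrary; running this requires a Docker daemon), memory and CPU quotas can be attached straight to docker run:

```
# Cap the container at 512 MB of RAM and 1.5 CPUs
docker run -d --memory=512m --cpus=1.5 nginx:latest
```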

#2 Do not run as root

We all know the feeling: we are tired and don't want to get entangled in permission problems just to get an application working properly, so running as root seems like the only option left, freeing us from worrying about permission restrictions.

If you are a beginner, that is sometimes okay in a Docker testing environment, but there is no reason good enough to let a Docker container run with root permissions in production.

This is an easy piece of Docker security advice to follow, because Docker doesn't run containers as root by default; in a default configuration, you don't have to make any changes to prevent running as root. Letting a container run as root is a temptation that needs to be resisted, however convenient it may be in some situations.

If you use Kubernetes to orchestrate your containers, you can explicitly prevent containers from starting as root for added Docker security, using the MustRunAsNonRoot directive in a pod security policy.
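A minimal sketch of such a policy (the policy name is illustrative; PodSecurityPolicy lived in the policy/v1beta1 API at the time):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: no-root-containers
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot   # reject any pod that would run as UID 0
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'
```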

#3 Secure container registries

Container registries are part of what makes Docker so powerful. They make it easy to set up central repositories from which we can download container images.

Using container registries is a security risk, however, if one does not evaluate their security constraints. We can use Docker Trusted Registry, which can be installed behind a firewall to mitigate the risk of malicious images.

The registry can be accessed from the back of firewalls and we can limit the unknown access of uploading and downloading images from our registry. Using role based access can control explicitly of unknown users or access.It is nice to leave our registry open to others but it is useful only if it stops the access of viruses and harmful things.

#4 Use trusted and secure images

We should be sure that the container images we use come from a trusted source. This is obvious, but there are many platforms from which images can be downloaded, and they might not be trusted or verified.

Consider not using public container registries, or stick to official, trusted repositories such as the ones on Docker Hub.

You can also use image-scanning tools to identify harmful contents before deployment. Most enterprise-level container platforms have scanning tools embedded, and standalone scanners such as Clair are available.

#5 Identify the source of your code

Docker images contain a mix of original code and packages from upstream sources. Even when an image comes from a trusted registry, it can contain packages from untrusted sources, and those packages can themselves be made up of code taken from multiple outside places.

That is why analysis tools are important. By downloading the sources that go into a Docker image and scanning the origin of the code, you can find out whether any of it comes from unknown sources.

#6 Network and API security

As we have seen, Docker containers depend on APIs and networks to communicate. It is important to make sure your APIs and network architecture are secure, and monitoring API and network activity for anything unusual should also be part of your routine.

Since APIs and networks are resources that Docker uses rather than parts of Docker itself, steps for securing them are not included in this article. But checking the security of those resources is just as important.

In Conclusion

Docker is complex, and there is no simple trick for maintaining container security. You have to think carefully about the steps to secure your Docker containers and strengthen your container environment at many levels. This is the only way to get all the benefits of Docker containers without major security issues.

Unit Testing Gatsby with Jest, TypeScript and React Testing Library

Setting up Jest and React Testing Library for TDD with Gatsby is easy. It only got tricky because I planned to use TypeScript in my tests.

First, I installed jest, babel-jest and babel-preset-gatsby, ensuring the presence of the Babel preset(s) used internally by the Gatsby site:

npm install --save-dev jest babel-jest babel-preset-gatsby identity-obj-proxy tslint-react @types/jest

Configuring Jest to work with Gatsby

Since Gatsby includes its own Babel configuration, we have to manually tell Jest to use babel-jest. The Gatsby website suggests creating a jest.config.js file; mine looks like this:


const path = require("path")

module.exports = {
  setupFilesAfterEnv: [
    path.resolve(__dirname, "./jest-configs/setup-test-env.js"),
  ],
  transform: {
    // "^.+\\.(tsx?|jsx?)$": "ts-jest",
    "^.+\\.(tsx?|jsx?)$": "<rootDir>/jest-configs/jest-preprocess.js",
  },
  moduleNameMapper: {
    // "\\.svg": "<rootDir>/jest-configs/__mocks__/file-mocks.js",
    "\\.svg": "<rootDir>/jest-configs/__mocks__/svgTransform.js",
    "typeface-*": "identity-obj-proxy",
    ".+\\.(css|styl|less|sass|scss)$": "identity-obj-proxy",
    ".+\\.(jpg|jpeg|png|gif|eot|otf|webp|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$":
      "<rootDir>/jest-configs/__mocks__/file-mocks.js",
  },
  testPathIgnorePatterns: ["node_modules", ".cache", "public"],
  transformIgnorePatterns: ["node_modules/(?!(gatsby)/)", "\\.svg"],
  globals: {
    __PATH_PREFIX__: "",
  },
  testRegex: "(/__tests__/.*|\\.(test|spec))\\.(ts|tsx)$",
  moduleFileExtensions: ["ts", "tsx", "js", "jsx"], // truncated in the original
  collectCoverage: false,
  coverageReporters: ["lcov", "text"], // values truncated in the original; these are common choices
}

// jest-configs/__mocks__/svgTransform.js
module.exports = {
  process() {
    return 'module.exports = {};';
  },
  getCacheKey() {
    // The output is always the same.
    return 'svgTransform';
  },
}

The transform option tells Jest that all ts, tsx, js and jsx files are to be transformed using the jest-preprocess.js file:


// jest-configs/jest-preprocess.js
const babelOptions = {
  // presets truncated in the original; this is Gatsby's documented TypeScript setup
  presets: ["babel-preset-gatsby", "@babel/preset-typescript"],
}

module.exports = require("babel-jest").createTransformer(babelOptions)

Some code also has to go into setup-test-env.js. The Jest configuration docs explain the setupFilesAfterEnv option:

A list of paths to modules that run some code to configure or set up the testing framework before each test.


import “@testing-library/jest-dom/extend-expect”

That should have Jest properly configured. Now I'll install the testing library and jest-dom as dev dependencies with npm.

npm install --save-dev @testing-library/react @testing-library/jest-dom

Now run npx jest, and our setup is good to go.


Now I will write my first test and run it. I like TDD because of its fast feedback: you write the test before the code, and the test should fail at the beginning. Read this up.
Next I create a folder named __tests__ in the project's root folder, create a file named test.spec.tsx and paste this code into it:


import React from "react"
import { render } from "@testing-library/react"

// You have to write data-testid
const Title = () => <h1 data-testid="hero-title">Gatsby is awesome!</h1>

test("Displays the correct title", () => {
  const { getByTestId } = render(<Title />)
  // Assertion
  expect(getByTestId("hero-title")).toHaveTextContent("Gatsby is awesome!")
  // --> Test will pass
})

If you get errors like the one below, run yarn or npm install to make sure your dependencies are in place.

Cannot find module 'react' from 'test.spec.tsx'
    > 1 | import React from "react"


I am very happy with this. I am just starting out with TypeScript and React, so this was a great deal of learning for me. I'll put up more posts about writing real code using TDD. Stay tuned!

Developing a GraphQL server in Next.js

Next.js is usually thought of as a frontend React framework. It provides server-side rendering, a built-in routing system and many performance-related features. Since Next.js also supports API routes, it provides backend and frontend for React in the same package and setup.

In this article we will learn how to use API routes to set up a GraphQL API inside a Next.js app. It starts with the basic setup, then covers some concepts of CORS, loading data from Postgres using the Knex package, improving performance using the DataLoader package, and avoiding costly N+1 queries.

You can view the source code here.

Setting Next.js

To set up Next.js, run the command npx create-next-app. npx ships with recent versions of npm; if needed, npm i -g npx installs it globally on your system.

There is an example that sets up Next.js with a GraphQL API already in place:

 npx create-next-app --example api-routes-graphql.

Adding an API route

With Next.js set up, we're going to add an API (server) route to our app. This is as easy as creating a file called graphql.js within the pages/api folder. For now, its contents will be:

export default (_req, res) => {
  res.end("GraphQL!") // placeholder response until Apollo takes over this route
}

What we want to produce

We want to be able to load data efficiently from our Postgres database, with a query like this:

{
  albums(first: 5) {
    id
    name
    year
    artist {
      id
      name
    }
  }
}


{
  "data": {
    "albums": [
      {
        "id": "1",
        "name": "Turn It Around",
        "year": "2003",
        "artist": {
          "id": "1",
          "name": "Comeback Kid"
        }
      },
      {
        "id": "2",
        "name": "Wake the Dead",
        "year": "2005",
        "artist": {
          "id": "1",
          "name": "Comeback Kid"
        }
      }
    ]
  }
}

GraphQL Basic setup

There are four steps to set up GraphQL:

  1. Define the types that describe the schema of the GraphQL server.
  2. Create the resolvers: the functions able to respond to a query or mutation.
  3. Create an Apollo server.
  4. Create a handler that wires everything into the Next.js API request/response lifecycle.

Using the gql function imported from apollo-server-micro, we can define the type definitions that describe the schema of the GraphQL server:

import { ApolloServer, gql } from "apollo-server-micro";

const typeDefs = gql`
  type Query {
    hello: String!
  }
`;

With our schema defined, we can write the code that enables the server to answer queries and mutations. This is called a resolver, and every field requires a function that produces its result. The resolver functions must return results that line up with the defined types.

The arguments received by a resolver function are:

  • parent: ignored at the query level.
  • args: the field arguments passed in the query, giving our resolver function access to them.
  • context: global state, such as the authenticated user or a global instance like a DataLoader.

const resolvers = {
  Query: {
    hello: (_parent, _args, _context) => "Hello!",
  },
};

Passing typeDefs and resolvers to a new instance of ApolloServer gets us up and running:

const apolloServer = new ApolloServer({
  typeDefs,
  resolvers,
  context: () => {
    return {};
  },
});

From apolloServer we can get a handler that will handle the request and response lifecycle. There is one more config that needs to be exported: it stops the body of incoming HTTP requests from being parsed, which is required for GraphQL to work correctly:

const handler = apolloServer.createHandler({ path: "/api/graphql" });

export const config = {
  api: {
    bodyParser: false,
  },
};

export default handler;

Adding CORS support

If we want to allow or limit cross-origin requests using CORS, we can add the micro-cors package:

import Cors from "micro-cors";

const cors = Cors({
  allowMethods: ["POST", "OPTIONS"],
});

export default cors(handler);

In this case cross-origin HTTP methods are limited to POST and OPTIONS, and the default export becomes the handler wrapped by the cors function.

Dynamic data with Postgres and Knex

Hard-coded data gets boring quickly... now it is time to load data from an existing database. The required packages are installed with:

yarn add knex pg

Now create a knexfile.js file and configure Knex so it can connect to our database. An ENV variable provides the connection string (have a look at this article about setting up secrets). Locally, the ENV variable looks like:

PG_CONNECTION_STRING="postgres://[email protected]:5432/next-graphql"

// knexfile.js
module.exports = {
  development: {
    client: "postgresql",
    connection: process.env.PG_CONNECTION_STRING,
    migrations: {
      tableName: "knex_migrations",
    },
  },
  production: {
    client: "postgresql",
    connection: process.env.PG_CONNECTION_STRING,
    migrations: {
      tableName: "knex_migrations",
    },
  },
};

Now we can create the database tables. Empty migration files are created with yarn run knex migrate:make <name> and then filled in:

exports.up = function(knex) {
  return knex.schema.createTable("artists", function(table) {
    table.increments("id");
    table.string("name", 255).notNullable();
    table.string("url", 255).notNullable();
  });
};

exports.down = function(knex) {
  return knex.schema.dropTable("artists");
};

exports.up = function(knex) {
  return knex.schema.createTable("albums", function(table) {
    table.increments("id");
    table.integer("artist_id").notNullable(); // referenced by the inserts below
    table.string("name", 255).notNullable();
    table.string("year", 4).notNullable();
  });
};

exports.down = function(knex) {
  return knex.schema.dropTable("albums");
};

With the tables in place, run the following insert statements (in Postico, psql or any client) to set up a few dummy records:

INSERT INTO artists("name", "url") VALUES('Comeback Kid', '');
INSERT INTO albums("artist_id", "name", "year") VALUES(1, 'Turn It Around', '2003');
INSERT INTO albums("artist_id", "name", "year") VALUES(1, 'Wake the Dead', '2005');

The last step is creating a connection to our DB within the graphql.js file:

import knex from "knex";

const db = knex({
  client: "pg",
  connection: process.env.PG_CONNECTION_STRING,
});

New resolvers and definitions

Now remove the  hello query and resolvers, and replace them with definitions for loading tables from the database:

const typeDefs = gql`
  type Query {
    albums(first: Int = 25, skip: Int = 0): [Album!]!
  }

  type Artist {
    id: ID!
    name: String!
    url: String!
    albums(first: Int = 25, skip: Int = 0): [Album!]!
  }

  type Album {
    id: ID!
    name: String!
    year: String!
    artist: Artist!
  }
`;

const resolvers = {
  Query: {
    albums: (_parent, args, _context) => {
      return db
        .select("*")
        .from("albums")
        .orderBy("year", "asc")
        .limit(Math.min(args.first, 50))
        .offset(args.skip);
    },
  },

  Album: {
    id: (album, _args, _context) =>,
    artist: (album, _args, _context) => {
      return db
        .select("*")
        .from("artists")
        .where({ id: album.artist_id })
        .first();
    },
  },

  Artist: {
    id: (artist, _args, _context) =>,
    albums: (artist, args, _context) => {
      return db
        .select("*")
        .from("albums")
        .where({ artist_id: })
        .orderBy("year", "asc")
        .limit(Math.min(args.first, 50))
        .offset(args.skip);
    },
  },
};

Notice that a resolver does not have to be defined for every single field: when a field just reads an attribute of the same name from the parent object, the default resolver does that for us and the field's resolver can be omitted. The id resolvers above could therefore be removed.
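What the default resolver does can be sketched in a couple of lines (simplified; the real graphql-js default resolver also unwraps methods and promises):

```javascript
// Simplified default field resolver: read the property with the field's
// name from the parent object.
const defaultResolver = (fieldName) => (parent) => parent[fieldName];

const album = { id: 1, name: "Turn It Around", year: "2003" };
console.log(defaultResolver("name")(album)); // -> "Turn It Around"
console.log(defaultResolver("year")(album)); // -> "2003"
```

This is why an explicit `id` resolver that only returns `` is redundant.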

DataLoader and Avoiding N+1 Queries

There is a hidden problem with the resolvers above: a SQL query is executed for every related object, so loading N albums triggers N additional queries for their artists. There is a great article on this N+1 problem and how DataLoader resolves it.

The first step is defining a loader. A loader collects IDs and loads them in a single batch.

import DataLoader from "dataloader";

const loader = {
  artist: new DataLoader(ids =>
      .from("artists")
      .whereIn("id", ids)
      .then(rows => => rows.find(row => === id)))
  ),
};

The loader is passed to our GraphQL resolvers through the context:

const apolloServer = new ApolloServer({
  typeDefs,
  resolvers,
  context: () => {
    return { loader };
  },
});

This allows us to update the resolver to utilize the DataLoader:

const resolvers = {
  Album: {
    id: (album, _args, _context) =>,
    artist: (album, _args, { loader }) => {
      return loader.artist.load(album.artist_id);
    },
  },
};

With that, the end result is a single database query that loads all the artists at once: the N+1 issue is resolved.


In this article, we were able to create a GraphQL server with CORS support, load data from Postgres, and stomp out N+1 performance issues using DataLoader. Not bad for a day's work! The next step might involve adding mutations along with some authentication, enabling users to create and modify data with the correct permissions. As you can see, Next.js is no longer just for the frontend: it has first-class support for server endpoints and is a perfect place to put your GraphQL API.

Monitoring failed and slow GraphQL requests in production

GraphQL has some debugging features for inspecting requests and responses, but making sure it keeps serving resources to your production app is where things get tougher. If you want to ensure that network requests to the backend or to third-party services are successful, try LogRocket.

LogRocket is like a DVR for web apps: it records literally everything that happens on your site. Instead of guessing why problems happen, you can aggregate and report on problematic GraphQL requests to quickly understand the root cause.

LogRocket also instruments your app to track Apollo client state and inspect GraphQL queries' key-value pairs, and records baseline performance timings such as page load time, time to first byte and slow network requests, along with Redux, NgRx, and Vuex actions/state. Start monitoring for free.

Mobile Application Architecture: React Native vs. Native

Coming from an iOS and Android background, I struggled when I started working with React Native. I had worked in mobile development before, but never in this environment, and I moved to it to broaden my tech skills toward web development. I did already understand the mobile ecosystem's rules: low memory usage, respecting the user's battery, UI conventions and the rest.

After getting past the hurdles of learning JavaScript's norms, the new patterns of React and what React Native includes, my next move was to understand how to structure an app. My aim was to understand the path to follow when structuring apps, building on my knowledge of native development, and to compare the two tech worlds.

Native World

Nowadays, people with a native background usually do not find it difficult to decide on the architecture of their app. The community has converged on some good options, such as MVP, MVVM and VIPER (mostly iOS), all with the following goals in mind:

  • decoupling the view from the business logic
  • developing testable applications
  • structuring the app into modules with well-defined responsibilities

For many years neither Google nor Apple prescribed a right way, but two years ago Google presented their take at Google I/O, published a detailed blog post about how they think apps should be structured, and added extensive frameworks to Android to support it.

The idea behind it is to separate each screen into its native components, with UI, business logic, networking and storage each in its own entity. Look at the following diagram:

React Native

Because React Native is based on React, most architecture decisions come from the front-end development world. This can be confusing for new React Native developers, as the ecosystem is different and fragmented across competing solutions.

Looking at the architecture, the web is mostly aligned with Flux. Its main goal is a one-directional, predictable flow of data through your app, which makes the app easy to understand and debug.

Redux is the best-known implementation of Flux, despite the many other implementations.

The application state is kept in a store (or several stores), and any change to the store re-renders the parts of the app that depend on it. Look at the picture below:


Components dispatch actions; the actions do some work and update the store, which in turn updates the components in a clean way. In the diagram above, the differences and similarities to the native world can be seen.

Let us go over the differences and understand the native constructs parallel to them. I use Android most of the time, but on iOS you can swap fragments for view controllers and feel right at home.
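The one-directional flow described above can be sketched with a hand-rolled store (the real Redux API is similar but richer, with middleware, combined reducers, and so on):

```javascript
// Minimal Flux/Redux-style store: state changes only via dispatched actions.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // reducer computes the next state
      listeners.forEach((l) => l());  // notify subscribed components
    },
    subscribe: (l) => listeners.push(l),
  };
}

// A reducer: a pure function from (state, action) to the next state.
const tasks = (state = [], action) =>
  action.type === "ADD_TASK" ? [...state, action.task] : state;

const store = createStore(tasks, []);
store.dispatch({ type: "ADD_TASK", task: "pick location" });
console.log(store.getState()); // -> ["pick location"]
```

Components never mutate state directly; they dispatch an action, the reducer produces the next state, and subscribers re-render — which is exactly what makes the flow predictable and debuggable.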

Smart Components = Fragments

In React Native we speak about screens, and in React we speak in terms of components. Components are the building blocks of all screens. A screen consists of many components; most of them are dumb, while some are smart and connected to the store or to the queries that provide their data.

Some screens are composed of smart components, each accessing its own store and containing its own business logic. For example:

This example shows the Add Task screen of a to-do list application. It contains several smart components, such as Pick Location, each with its own state or store, UI state, business logic, storage and server API. This makes it possible to develop each part independently, even across different teams, while the screen itself stays a dumb container.

Native equivalent:

The notion of composing your screen from separate components exists natively as well. Each smart component can be its own part in native code: the screen is the result of combining several small components, each with its own logic and view. The logic and state are extracted out of the components, which takes us to the next step.

Actions + Store = ViewModel

This part of the component is in charge of all component state, business logic and presentation logic. It is the most important part, as it drives the implementation; it is the heart of the application.

Using Redux, we dispatch actions and wait for changes in the store, checking whether those changes are relevant to us. Within each action we can make network requests, query storage and update the stores. The store object is global and not tied to any component life-cycle.

Native Counterpart:

Actions are similar to calling methods on the appropriate ViewModel. The store's state lives in the ViewModel, and any change to that state drives the UI, giving the same benefits of reactivity.

Frameworks like RxJava and RxSwift make this reactivity very simple.

Some differences noted are listed below:

In native, the state is private to the ViewModel, while in Redux the store is global. The global store has the advantage that when one component affects another, the UI remains consistent.

The disadvantages include state leaking to other components, where it can be misused or create reliance on internal details that should stay hidden. Also, when one component is shown on the screen more than once, updating its store can cause issues, with screens affecting each other and bugs that are hard to debug. To avoid this, decide deliberately what belongs in the store and what does not.

Remote Server

No difference.

Async Storage ≠ Relational DB

Here lie the big differences.

In React Native we tend to use key/value storage, while in native we use relational DB solutions. Both approaches have their own advantages and disadvantages; people with a DB background often find it hard to work with key/value storage, especially around schema design.
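The difference is easy to feel in code. A key/value store keeps serialized blobs under string keys; the sketch below uses a plain Map as a stand-in for an AsyncStorage-like API (the helper names are illustrative, not the real React Native API):

```javascript
// Key/value storage sketch: everything is a serialized blob under a string
// key, unlike relational rows with columns and foreign keys.
const storage = new Map();
const setItem = (key, value) => storage.set(key, JSON.stringify(value));
const getItem = (key) => JSON.parse(storage.get(key));

setItem("user:1", { name: "Ada", pets: [1, 2] });
console.log(getItem("user:1").name); // -> "Ada"
```

There is no schema here: relations such as user-to-pets have to be encoded and maintained by hand, which is exactly the part that feels alien to people used to relational databases.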


We can see that React Native initially looks very different from native development, but on closer inspection many of the ideas are similar, with small changes.

Comparing architectural patterns, there is a trend in Android called MVI that aims to bring the Flux pattern to native development.

Redux is the most used state-management library, although there are others, such as MobX, which brings MVC back to the web.

Mobile developers with native knowledge should feel at home taking the React Native path, as both worlds pursue similar goals, mainly the SOLID principles of software engineering.

Implementations differ; some are more functional or reactive, some carry more boilerplate, but they largely share the same goals.

Building an API with GraphQL, TypeScript and PostgreSQL

GraphQL with TypeScript is one of the most popular stacks these days. I used vanilla JavaScript in one of my recent projects, and I have used TypeScript many times, but I had never used the two together. I followed a tutorial that helped me a lot, so I thought of writing a guide for others too. Before starting, let us see why these choices make sense:

Why GraphQL, TypeScript and PostgreSQL ?:

GraphQL provides the description of our API. It helps clients understand and request exactly what they need, and it helps us when dealing with large amounts of data, since all of it can be fetched with a single query.

TypeScript is a superset of JavaScript. When a JavaScript codebase grows and becomes messy to reuse or maintain, TypeScript is a good replacement.

PostgreSQL is a personal preference, and it is open-source. You can view the following link for more details.


  1. yarn (npm can also be used)
  2. Node v10 or later
  3. PostgreSQL 12 or later
  4. basic TypeScript knowledge

Structure of folder

The project is structured in the following way:




  • Apollo Server: an open-source GraphQL server maintained by the community. It works with Node.js and several HTTP frameworks.
  • Objection: Sequelize could also be used, but Objection.js has the edge because it is an ORM that embraces SQL.

  • Webpack: Webpack compiles JavaScript modules; Node.js does not accept files like .gql or .graphql, which is why we need it. Install the main dependencies:

yarn add graphql apollo-server-express express body-parser objection pg knex

and some dev dependencies:

yarn add -D typescript @types/graphql @types/express @types/node graphql-tag concurrently nodemon ts-node webpack webpack-cli webpack-node-externals


Generate a tsconfig.json (for example with npx tsc --init) and adjust it:

{
  "compilerOptions": {
    "target": "es5",                          /* Specify ECMAScript target version. */
    "module": "commonjs",                     /* Specify module code generation. */
    "outDir": "dist",                         /* Redirect output structure to the directory. */
    "rootDir": "src",                         /* Specify the root directory of input files. */
    "strict": true,                           /* Enable all strict type-checking options. */
    "moduleResolution": "node",               /* Specify module resolution strategy. */
    "skipLibCheck": true,                     /* Skip type checking of declaration files. */
    "forceConsistentCasingInFileNames": true  /* Disallow inconsistently-cased references to the same file. */
  },
  "files": ["./index.d.ts"]
}


// webpack.config.js
const path = require('path');
const { CheckerPlugin } = require('awesome-typescript-loader');
const nodeExternals = require('webpack-node-externals');

module.exports = {
  mode: 'production',
  entry: './src/index.ts',
  externals: [nodeExternals(), { knex: 'commonjs knex' }],
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'
  },
  resolve: {
    extensions: ['.mjs', '.js', '.ts', '.graphql', '.gql']
  },
  module: {
    rules: [
      {
        test: /\.(graphql|gql)$/,
        exclude: /node_modules/,
        loader: 'graphql-tag/loader'
      },
      {
        test: /\.ts$/,
        exclude: /node_modules/,
        loader: 'awesome-typescript-loader'
      }
    ]
  },
  plugins: [new CheckerPlugin()]
};

Hello, World example

Add the following script to the package.json file:

     "dev": "concurrently \" nodemon ./dist/bundle.js \" \" webpack --watch\" "


// src/index.ts
import express, { Application } from 'express';
import { ApolloServer, Config } from 'apollo-server-express';

const app: Application = express();

const schema = `
    type User {
        name: String
    }
    type Query {
        user: User
    }
`;

const config: Config = {
    typeDefs: schema,
    resolvers: {
        Query: {
            user: (_parent, _args) => {
                return { name: "WOnder" };
            }
        }
    },
    introspection: true, // these lines are required to use the GUI
    playground: true,    // of the playground
};

const server: ApolloServer = new ApolloServer(config);

server.applyMiddleware({
    app,
    path: '/graphql'
});

app.listen(3000, () => {
    console.log("We are running on http://localhost:3000/graphql")
});

Server config

We will use makeExecutableSchema from graphql-tools. It generates a GraphQLSchema for us and lets us join types and resolvers from many files:

const config: Config = {
    schema: schema, // schema definition from schema/index.ts
    introspection: true, // these lines are required to use
    playground: true,    // the playground
};

const server: ApolloServer = new ApolloServer(config);

server.applyMiddleware({
    app,
    path: '/graphql'
});


import { makeExecutableSchema} from 'graphql-tools';
import schema from './graphql/schema.gql';
import {user,pet} from './resolvers';

const resolvers=[user,pet];

export default makeExecutableSchema({typeDefs:schema, resolvers: resolvers as any});


Let's look at the database diagram: a registry of users and their pets.

Migration file

For the creation of the database tables in Postgres, we use Knex migration files.


// src/database/knexfile.ts
// The original repeats the same settings for each environment;
// the standard Knex environment keys are assumed here.
module.exports = {
  development: {
    client: 'pg',
    connection: {
      database: "my_db",
      user: "username",
      password: "password"
    },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations',
      directory: 'migrations'
    },
    timezone: 'UTC'
  },
  testing: {
    client: 'pg',
    connection: {
      database: "my_db",
      user: "username",
      password: "password"
    },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations',
      directory: 'migrations'
    },
    timezone: 'UTC'
  },
  production: {
    client: 'pg',
    connection: {
      database: "my_db",
      user: "username",
      password: "password"
    },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations',
      directory: 'migrations'
    },
    timezone: 'UTC'
  }
};

Running the following creates a first migration file:

npx knex --knexfile ./src/database/knexfile.ts migrate:make -x ts initial

The migration file looks like this; fill it in with the tables' columns:

import * as Knex from "knex";

export async function up(knex: Knex): Promise<any> {
    return knex.schema.createTable('users', (table: Knex.CreateTableBuilder) => {
        table.increments('id');
        table.string('full_name', 255);
        table.string('country_code', 255);
        table.timestamp('created_at').defaultTo(;
    });
    // ...create the "pets" table (name, specie, owner_id, created_at) the same way
}

export async function down(knex: Knex): Promise<any> {
    return knex.schema.dropTable('users');
}

Then run the migration:

npx knex --knexfile ./src/database/knexfile.ts migrate:latest

Now there are two tables in the database. We need a model for each table in order to execute queries, in src/database/models:

import {Model} from 'objection';
import {Species,Maybe} from '../../__generated__/generated-types';
import User from './User';

class Pet extends Model {
    static tableName = "pets";
    id!: number;
    name?: Maybe<string>;
    specie?: Maybe<Species>;
    owner_id!: number;

    static jsonSchema = {
        type: 'object',
        properties: {
            id: {type: 'integer'},
            name: {type: 'string', min: 1, max: 255},
            specie: {type: 'string', min: 1, max: 255},
            created_at: {type: 'string', min: 1, max: 255}
        }
    };

    static relationMappings = () => ({
        owner: {
            relation: Model.BelongsToOneRelation,
            modelClass: User,
            join: {
                from: 'pets.owner_id',
                to: ''
            }
        }
    });
}

export default Pet;


import {Model} from 'objection';
import {Maybe} from '../../__generated__/generated-types';
import Pet from './Pet';

class User extends Model {
    static tableName = "users";
    id!: number;
    full_name!: Maybe<string>;
    country_code!: Maybe<string>;

    static jsonSchema = {
        type: 'object',
        properties: {
            id: {type: 'integer'},
            full_name: {type: 'string', min: 1, max: 255},
            country_code: {type: 'string', min: 1, max: 255},
            created_at: {type: 'string', min: 1, max: 255}
        }
    };

    static relationMappings = () => ({
        pets: {
            relation: Model.HasManyRelation,
            modelClass: Pet,
            join: {
                from: '',
                to: 'pets.owner_id'
            }
        }
    });
}

export default User;

Now we instantiate Knex and give the instance to Objection:

import Knex from 'knex';
import {Model} from 'objection';
import dbconfig from './database/config';

const db = Knex(dbconfig["development"]);
Model.knex(db);



# schema.gql — the Query and Mutation fields are reconstructed from the
# resolvers below; the Species values are assumptions (the original elided them)
enum Species {
    CAT
    DOG
}

type User {
    id: Int!
    full_name: String
    country_code: String
    pets: [Pet]
}

type Pet {
    id: Int!
    name: String
    owner_id: Int!
    specie: Species
    owner: User
}

input createUserInput {
    full_name: String!
    country_code: String!
}

input createPetInput {
    name: String!
    owner_id: Int!
    specie: Species!
}

input updateUserInput {
    id: Int!
    full_name: String
    country_code: String
}

input updatePetInput {
    id: Int!
    name: String!
}

type Query {
    users: [User]
    user(id: Int!): User
    pets: [Pet]
    pet(id: Int!): Pet
}

type Mutation {
    createUser(user: createUserInput!): User!
    updateUser(user: updateUserInput!): User!
    deleteUser(id: Int!): String!
    createPet(pet: createPetInput!): Pet!
    updatePet(pet: updatePetInput!): Pet!
    deletePet(id: Int!): String!
}

generating types

The packages below are required for better type safety in the resolvers:

yarn add -D @graphql-codegen/cli @graphql-codegen/typescript @graphql-codegen/typescript-resolvers @graphql-codegen/typescript-operations

Create the codegen.yml config file for generating types:

overwrite: true
schema: "http://localhost:3000/graphql"
documents: null
  src/__generated__/generated-types.ts:
      - "typescript"
      - "typescript-resolvers"

Add the script below to package.json:

"generate:types": "graphql-codegen --config codegen.yml"

With the server up, run:

yarn run generate:types

For more on generating types from GraphQL, this guide is highly suggested reading.



// pet resolvers
import {Pet,User} from '../../database/models';
import {Resolvers} from '../../__generated__/generated-types';
import {UserInputError} from 'apollo-server-express';

const resolvers: Resolvers = {
    Query: {
        pet: async (parent, args, ctx) => {
            const pet: Pet = await Pet.query().findById(;
            return pet;
        },
        pets: async (parent, args, ctx) => {
            const pets: Pet[] = await Pet.query();
            return pets;
        },
    },
    Pet: {
        owner: async (pet, args, ctx) => {
            const owner: User = await Pet.relatedQuery("owner").for(;
            return owner;
        },
    },
    Mutation: {
        createPet: async (parent, args, ctx) => {
            let pet: Pet;
            try {
                pet = await Pet.query().insert({});
            } catch (error) {
                throw new UserInputError("Bad user input fields required", {
                    invalidArgs: Object.keys(args),
                });
            }
            return pet;
        },
        updatePet: async (parent, {pet: {id,}}, ctx) => {
            const pet: Pet = await Pet.query().patchAndFetchById(id, data);
            return pet;
        },
        deletePet: async (parent, args, ctx) => {
            await Pet.query().deleteById(;
            return "Successfully deleted";
        },
    },
};

export default resolvers;
import {Resolvers} from '../../__generated__/generated-types';
import {User, Pet} from '../../database/models';
import {UserInputError} from 'apollo-server-express';

interface Assertion {
    [key: string]: string | number;
}

type StringIndexed<T> = T & Assertion;

const resolvers: Resolvers = {
    Query: {
        users: async (parent, args, ctx) => {
            const users: User[] = await User.query();
            return users;
        },
        user: async (parent, args, ctx) => {
            const user: User = await User.query().findById(args.id);
            return user;
        },
    },
    User: {
        // Resolve the pets relation defined on the User model.
        pets: async (parent, args, ctx) => {
            const pets: Pet[] = await User.relatedQuery("pets").for(parent.id);
            return pets;
        },
    },
    Mutation: {
        createUser: async (parent, args, ctx) => {
            let user: User;
            try {
                user = await User.query().insert({...args.user});
            } catch (error) {
                throw new UserInputError("Invalid email", {
                    invalidArgs: Object.keys(args),
                });
            }
            return user;
        },
        updateUser: async (parent, {user: {id, ...data}}, ctx) => {
            const user: User = await User.query().patchAndFetchById(id, data as StringIndexed<typeof data>);
            return user;
        },
        deleteUser: async (parent, args, ctx) => {
            await User.query().deleteById(args.id);
            return "Successfully deleted";
        },
    },
};

export default resolvers;
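The `StringIndexed` helper used above is worth a closer look. Here is an isolated sketch of the pattern: intersecting a concrete type with an index signature lets generic code (such as a patch helper) index the object with arbitrary string keys without the compiler complaining. The `UserPatch` shape and values are illustrative:

```typescript
// Intersect a concrete type with a string index signature.
interface Assertion {
    [key: string]: string | number;
}

type StringIndexed<T> = T & Assertion;

interface UserPatch {
    full_name: string;
    country_code: string;
}

const patch: StringIndexed<UserPatch> = {full_name: "Ada", country_code: "US"};

// Any string key is now a valid index from the compiler's point of view.
const key: string = "full_name";
console.log(patch[key]);
```

The trade-off is that typos in key names are no longer caught, so it is best confined to the narrow spot that needs dynamic indexing.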

These resolvers let us execute all the operations defined in the schema earlier.


At this point, two errors can be seen.

Having errors is not a bad thing, although I prefer not to have any. The first error is resolved by splitting knexfile.ts and moving the configuration Knex requires into a separate file.

const default_config = {
    client: 'pg',
    connection: {
        database: "db",
        user: "user",
        password: "password"
    },
    pool: {
        min: 2,
        max: 10
    },
    migrations: {
        tableName: 'knex_migrations',
        directory: 'migrations'
    },
    timezone: 'UTC'
};

interface KnexConfig {
    [key: string]: object;
}

const config: KnexConfig = {
    development: default_config
};

export default config;

knexfile.ts then simply re-exports the environment it needs:

import config from './config';

module.exports = config["development"];
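Because the config object is keyed by environment name, selecting the active entry by `NODE_ENV` is a common next step. A minimal self-contained sketch, where the "production" entry and its values are illustrative assumptions:

```typescript
// Pick a Knex environment config by NODE_ENV (values are illustrative).
interface KnexConfig {
    [key: string]: object;
}

const config: KnexConfig = {
    development: {client: 'pg', connection: {database: 'db'}},
    production: {client: 'pg', connection: {database: 'db_prod'}},
};

const env = process.env.NODE_ENV ?? 'development';
// Fall back to development when the environment is not configured.
const active = config[env] ?? config['development'];
console.log('client' in active);
```

The fallback keeps local tooling working even when `NODE_ENV` is set to a value with no matching entry.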

The second error was resolved by importing the types from the schema, with help from this useful post. Now we can work with our own GraphQL API.


Yay! We now have a GraphQL API. We have learned how to generate TypeScript types from a GraphQL schema and how to solve the issues that come up along the way. I hope this tutorial helped you. I'll be posting more soon, so leave your suggestions in the comments. Thank you!