
Challenges and Best Practices of Docker Container Security

In recent years, Docker's massive adoption has made security an important consideration for firms that use containers in development and production. Containers are more complex than virtual machines and other traditionally deployed technologies, and the process of securing Docker containers is correspondingly more complex.

This article takes a look at Docker container security, explains why it is so complex, and discusses default configurations that improve security as well as practices for monitoring containers.

The following is a complete guide to container security.

Challenges of Docker container security

Many organisations hosted applications on virtual machines or bare-metal servers before Docker. Those technologies are fairly simple from a security perspective: when hardening your deployment and monitoring for security-relevant events, you only need to focus on two layers, since APIs, overlay networks, and complex software-defined storage configurations are not usually part of virtual machine or bare-metal deployments.

A typical Docker environment has many more moving parts, which makes its security much more complicated. Those moving parts include:

  • You probably have multiple Docker container images, each hosting an individual microservice, and multiple instances of each image probably run at any one time. Each of those images and instances needs to be secured and monitored.
  • The Docker daemon needs to be secured to keep the containers it hosts, and the host itself, safe.
  • The host server, which might be bare metal or a virtual machine.
  • If you use a service like ECS to host your containers, that is another layer to secure.
  • APIs and overlay networks that facilitate communication between containers.
  • Data volumes and other storage systems that exist externally from your containers.

So if you were thinking that learning to secure Docker is tough, you were right: Docker security is undoubtedly much more complex than the security of the technologies that preceded it.

Best practices of Docker container security:

Luckily, these challenges can be overcome. This article is not an exhaustive guide to Docker security, but you can use the official Docker documentation as a reference. Below are some best practices:

#1 Set resource quotas

One easy thing to configure in Docker is resource quotas. Resource quotas let you limit the amount of memory and CPU a container can consume.

This is helpful for many reasons. It keeps your Docker environment efficient and prevents one container from starving other containers of system resources. It also improves security by preventing a compromised container from consuming a large amount of resources in order to carry out harmful activity.

Resource quotas are easily set with command-line flags; see the Docker documentation for details.
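For example, memory and CPU limits can be passed straight to docker run (the image name below is only a placeholder):

docker run --memory="512m" --cpus="1.5" my-image:latest

This caps the container at 512 MB of memory and one and a half CPU cores.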

#2 Do not run as root

We all know the feeling: you are tired and don't want to get entangled in permission problems just to get an application working properly, so running as root looks like the easy way to avoid issues with permission restrictions.

If you are a beginner, that is sometimes acceptable in a Docker testing environment, but there is no reason good enough to let a Docker container run with root permissions in production.

This is an easy Docker security practice to follow, because Docker doesn't run containers as root by default: in a default configuration you don't have to change anything to prevent running as root. Letting a container run as root is a temptation that needs to be resisted, even though it is more convenient in some situations.

If you use Kubernetes to orchestrate your containers, you can add Docker security by explicitly preventing containers from starting as root, using the MustRunAsNonRoot directive in a pod security policy.
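As a rough sketch (the policy name and surrounding values here are illustrative, not a hardened production policy), a pod security policy using that directive looks like this:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: non-root-policy
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny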

#3 Secure container registries

Container registries are part of what makes Docker so powerful: they make it easy to set up central repositories from which container images can be downloaded.

Using container registries is also a security risk if you do not evaluate their security constraints. Using Docker Trusted Registry, which can be installed behind your firewall, helps reduce the risk of pulling compromised or malicious images.

Even with the registry behind a firewall, you should restrict who can upload images to and download images from your registry. Role-based access control lets you explicitly manage which users have access. Leaving your registry open to others may be convenient, but it is only acceptable if it does not allow harmful images to slip in.

#4 Use of trusted and secure images

We should be sure that the container images we use come from a trusted source. This may sound obvious, but there are many platforms from which images can be downloaded, and they are not all trusted or verified.

Consider not using public container registries, or at least stick to official, trusted repositories such as those on Docker Hub.

You can also use image scanning tools, such as Clair, to help identify harmful images; most enterprise-level container platforms have scanning tools built in.

#5 Identify the source of your code

Docker images contain a mix of original code and packages from upstream sources. Even if an image comes from a trusted registry, it can contain packages from untrusted sources, and those packages may in turn be built from code taken from multiple outside sources.

That is why source-analysis tools are important. By downloading the sources of your Docker images and scanning their code origins, you can find out whether any of the code comes from unknown sources.

#6 Network and API security

As we have seen above, Docker containers depend on APIs and networks to communicate. It is important to make sure your APIs and network architecture are secure, and to monitor API and network activity for anything unusual.

Since APIs and networks are not part of Docker itself but are resources that Docker uses, steps for securing them are beyond the scope of this article. It is still important to check the security of these resources.

In Conclusion

Docker is complex, and there is no simple trick for maintaining Docker container security. You have to think carefully about the steps to secure your Docker containers and harden your container environment at multiple levels. This is the only way to get all the benefits of Docker containers without running into major security issues.

Developing a GraphQL server in Next.js

Next.js is usually thought of as a frontend React framework. It provides server-side rendering, built-in routing, and many performance-related features. Because Next.js also supports API routes, it provides the backend and frontend of a React application in the same package and setup.

In this article we will learn how to use API routes to set up a GraphQL API inside a Next.js app. It starts with the basic setup, covers some concepts around CORS, loads data from Postgres using the Knex package, and then improves performance with the DataLoader package to avoid costly N+1 queries.

You can view the source code here.

Setting up Next.js

To set up Next.js, run the command npx create-next-app. If you don't have npx, you can install it globally with npm i -g npx.

An example can be used to set up Next.js with a GraphQL API already in place:

npx create-next-app --example api-routes-graphql

Adding an API route

With Next.js set up, we're going to add an API (server) route to our app. This is as easy as creating a file called graphql.js within the pages/api folder. For now, its contents will be:

export default (_req, res) => {
  res.end("GraphQL!");
};

What we want to produce

Eventually we want to load data efficiently from our Postgres database with a query like:

{
  albums(first: 5) {
    id
    name
    year
    artist {
      id
      name
    }
  }
}

output:

{
  "data": {
    "albums": [
      {
        "id": "1",
        "name": "Turn It Around",
        "year": "2003",
        "artist": {
          "id": "1",
          "name": "Comeback Kid"
        }
      },
      {
        "id": "2",
        "name": "Wake the Dead",
        "year": "2005",
        "artist": {
          "id": "1",
          "name": "Comeback Kid"
        }
      }
    ]
  }
}

GraphQL Basic setup

There are four steps to set up GraphQL:

  1. Define the type definitions, which describe the GraphQL schema.
  2. Create the resolvers: the functions able to respond to a query or a mutation.
  3. Create the Apollo server.
  4. Create the handler, which ties everything into the Next.js API request/response lifecycle.

Importing the gql function from apollo-server-micro, we can define our type definitions, which describe the schema of the GraphQL server.

import { ApolloServer, gql } from "apollo-server-micro";

const typeDefs = gql`
  type Query {
    hello: String!
  }
`;

With our schema defined, we can now write the code that enables the server to answer queries and mutations against it. These functions are called resolvers, and every field requires a function that produces a result. The results returned by the resolver functions must align with the defined types.

The arguments received by resolver functions are:

  • parent: ignored at the Query level.
  • args: the field arguments passed in the query, accessible inside our resolver function.
  • context: global state, such as the authenticated user or a global instance of DataLoader.

const resolvers = {
  Query: {
    hello: (_parent, _args, _context) => "Hello!"
  }
};

Passing  typeDefs and resolvers to new instance of ApolloServer gets us up and running:

const apolloServer = new ApolloServer({
  typeDefs,
  resolvers,
  context: () => {
    return {};
  }
});

Using apolloServer we can create a handler, which takes care of the request and response lifecycle. There is one additional config we need to export, which stops the body of incoming HTTP requests from being parsed; this is required for GraphQL to work correctly:

const handler = apolloServer.createHandler({ path: "/api/graphql" });

export const config = {
  api: {
    bodyParser: false
  }
};

export default handler;
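With the dev server running (npm run dev), we can sanity-check the endpoint; here is a minimal sketch, assuming the default port 3000 and the /api/graphql route created above:

fetch("http://localhost:3000/api/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: "{ hello }" }),
})
  .then((res) => res.json())
  .then(console.log); // { data: { hello: "Hello!" } }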

Adding CORS support

If we want to enable or limit cross-origin requests using CORS, we can add the micro-cors package:

import Cors from "micro-cors";

const cors = Cors({
  allowMethods: ["POST", "OPTIONS"]
});

export default cors(handler);

In this case, cross-origin HTTP methods are limited to POST and OPTIONS. It changes the default export so that the handler is passed to the cors function.

Dynamic data with Postgres and Knex

Hard-coded data can only take us so far… now it is time to load data from an existing database. For this, the required packages need to be installed:

yarn add knex pg

Now create a knexfile.js file to configure Knex so it can connect to our database. An ENV variable is used to provide the database connection string (see the article on setting up secrets for how to manage it). The local ENV variable looks like:

PG_CONNECTION_STRING="postgres://[email protected]:5432/next-graphql"

// knexfile.js
module.exports = {
  development: {
    client: "postgresql",
    connection: process.env.PG_CONNECTION_STRING,
    migrations: {
      tableName: "knex_migrations"
    }
  },

  production: {
    client: "postgresql",
    connection: process.env.PG_CONNECTION_STRING,
    migrations: {
      tableName: "knex_migrations"
    }
  }
};

Now we can create migrations to set up our tables. Empty migration files can be generated with the Knex CLI, for example yarn run knex migrate:make create_artists (the migration name here is just an example):

exports.up = function(knex) {
  return knex.schema.createTable("artists", function(table) {
    table.increments("id");
    table.string("name", 255).notNullable();
    table.string("url", 255).notNullable();
  });
};

exports.down = function(knex) {
  return knex.schema.dropTable("artists");
};

exports.up = function(knex) {
  return knex.schema.createTable("albums", function(table) {
    table.increments("id");
    table.integer("artist_id").notNullable();
    table.string("name", 255).notNullable();
    table.string("year").notNullable();

    table.index("artist_id");
    table.index("name");
  });
};

exports.down = function(knex) {
  return knex.schema.dropTable("albums");
};

With the tables in place, run the following insert statements in Postico (or psql) to set up a few dummy records:

INSERT INTO artists("name", "url") VALUES('Comeback Kid', 'http://comeback-kid.com/');
INSERT INTO albums("artist_id", "name", "year") VALUES(1, 'Turn It Around', '2003');
INSERT INTO albums("artist_id", "name", "year") VALUES(1, 'Wake the Dead', '2005');

The last step is creation of a connection to our DB within the graphql.js file.

import knex from "knex";

const db = knex({
  client: "pg",
  connection: process.env.PG_CONNECTION_STRING
});

New resolvers and definitions

Now remove the hello query and resolver, replacing them with definitions for loading albums and artists from the database:

const typeDefs = gql`
  type Query {
    albums(first: Int = 25, skip: Int = 0): [Album!]!
  }

  type Artist {
    id: ID!
    name: String!
    url: String!
    albums(first: Int = 25, skip: Int = 0): [Album!]!
  }

  type Album {
    id: ID!
    name: String!
    year: String!
    artist: Artist!
  }
`;

const resolvers = {
  Query: {
    albums: (_parent, args, _context) => {
      return db
        .select("*")
        .from("albums")
        .orderBy("year", "asc")
        .limit(Math.min(args.first, 50))
        .offset(args.skip);
    }
  },

  Album: {
    id: (album, _args, _context) => album.id,
    artist: (album, _args, _context) => {
      return db
        .select("*")
        .from("artists")
        .where({ id: album.artist_id })
        .first();
    }
  },

  Artist: {
    id: (artist, _args, _context) => artist.id,
    albums: (artist, args, _context) => {
      return db
        .select("*")
        .from("albums")
        .where({ artist_id: artist.id })
        .orderBy("year", "asc")
        .limit(Math.min(args.first, 50))
        .offset(args.skip);
    }
  }
};

Notice that not every single field needs a resolver defined: if a resolver would simply read an attribute off the object, it can be omitted. That means the id resolvers above could also be removed, as shown below.
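For example, relying on the default resolver, the Album entry could be trimmed to the following (a sketch equivalent to the version above):

Album: {
  artist: (album, _args, _context) => {
    return db
      .select("*")
      .from("artists")
      .where({ id: album.artist_id })
      .first();
  }
},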

DataLoader and Avoiding N+1 Queries

There is a hidden problem with the resolvers above: an additional SQL query has to be run for every album in order to load its artist, the classic N+1 problem. There is a great article on this problem that explains how DataLoader helps resolve it.

The first step is to define a loader. A loader is used to collect IDs and load them in a single batch.

import DataLoader from "dataloader";

const loader = {
  artist: new DataLoader(ids =>
    db
      .table("artists")
      .whereIn("id", ids)
      .select()
      .then(rows => ids.map(id => rows.find(row => row.id === id)))
  )
};

The loader is passed to our GraphQL resolvers through the context, see below:

const apolloServer = new ApolloServer({
  typeDefs,
  resolvers,
  context: () => {
    return { loader };
  }
});

This allows us to update the artist resolver to use the DataLoader:

const resolvers = {
  //...
  Album: {
    id: (album, _args, _context) => album.id,
    artist: (album, _args, { loader }) => {
      return loader.artist.load(album.artist_id);
    }
  }
  //...
};

With this in place, the end result is a single database query that loads all the artists at once: the N+1 issue is resolved.

Conclusion

In this article, we were able to create a GraphQL server with CORS support, loading data from Postgres, and stomping out N+1 performance issues using DataLoader. Not bad for a day's work! The next step might involve adding mutations along with some authentication to our app, enabling users to create and modify data with the correct permissions. Next.js is no longer just for the frontend (as you can see). It has first-class support for server endpoints and is the perfect place to put your GraphQL API.


GraphQL, TypeScript and PostgreSQL API

Introduction

GraphQL and TypeScript are one of the most popular stacks these days. I used vanilla JavaScript in one of my recent projects, and I have used TypeScript many times, but I had never used this stack, so I followed a tutorial that helped me a lot, and I thought I would write a guide for others too. Before starting, let us see why:


Why GraphQL, TypeScript and PostgreSQL?

GraphQL provides the description of our API. It helps us understand what clients need, and it helps when dealing with large amounts of data, since you can fetch everything you need by running only one query.

TypeScript is a superset of JavaScript. When JavaScript code grows, it becomes messier to reuse and maintain and harder to keep correct, so TypeScript can be used instead.

PostgreSQL was chosen based on personal preference and because it is open source; you can view the following link for more details.

https://www.compose.com/articles/what-postgresql-has-over-other-open-source-sql-databases/

Preconditions

  1. yarn or npm can be used
  2. Node.js v10 or higher
  3. PostgreSQL 12
  4. basic TypeScript knowledge

Folder structure

The project is structured in the following way:

graphql_api/
    ...
    dist/
        bundle.js
    src/
        database/
            knexfile.ts
            config.ts
            migrations/
            models/
                User.ts
                Pet.ts
        __generated__/
        schema/
            resolvers/
                user.ts
                pet.ts
                index.ts
            graphql/
                schema.ts
            index.ts
        index.ts

Dependencies

  • Apollo Server: an open source GraphQL server maintained by the community. It works with Node.js and HTTP frameworks.
  • Objection: Sequelize could also be used, but Objection.js is preferred here because it is an ORM that embraces SQL.

Development

  • Webpack: webpack is used to compile JavaScript modules. Node.js does not accept files like .gql or .graphql, which is why we need it. Install the main dependencies:

yarn add graphql apollo-server-express express body-parser objection pg knex

and some dev dependencies:

yarn add -D typescript @types/graphql @types/express @types/node graphql-tag concurrently nodemon ts-node webpack webpack-cli webpack-node-externals

Configuration

Create a tsconfig.json (for example with npx tsc --init) with a configuration like:

{
  "compilerOptions": {
    "target": "es5",                            /* Specify ECMAScript target version. */
    "module": "commonjs",                       /* Specify module code generation. */
    "outDir": "dist",                           /* Redirect output structure to the directory. */
    "rootDir": "src",                           /* Specify the root directory of input files. */
    "strict": true,                             /* Enable all strict type-checking options. */
    "moduleResolution": "node",                 /* Specify module resolution strategy. */
    "skipLibCheck": true,                       /* Skip type checking of declaration files. */
    "forceConsistentCasingInFileNames": true    /* Disallow inconsistently-cased references to the same file. */
  },
  "files": ["./index.d.ts"]
}

Webpack

const path = require('path');
const {CheckerPlugin} = require('awesome-typescript-loader');
var nodeExternals = require('webpack-node-externals');

module.exports = {
  mode: 'production',
  entry: './src/index.ts',
  target:'node',
  externals: [nodeExternals(),{ knex: 'commonjs knex' }],
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'
  },
  resolve: {
    extensions: [ ".mjs",'.js', '.ts','.(graphql|gql)'],
    modules: [
        
        'src',
    ]
},
  module:{
      rules:[
        {
            test: /\.(graphql|gql)$/,
            exclude: /node_modules/,
            loader: 'graphql-tag/loader'
        },
        {
            test: /\.ts$/,
            exclude: /node_modules/,
            loaders: 'awesome-typescript-loader'
        }
      ]
  },
  plugins:[
    new CheckerPlugin(),
  ]
  
};

Hello World example

Add the following script to the package.json file:

"scripts":{
     "dev": "concurrently \" nodemon ./dist/bundle.js \" \" webpack --watch\" "
}

index.ts

import express, { Application } from 'express';
import {  ApolloServer , Config } from 'apollo-server-express';


const app: Application  = express();

const schema = `
    type User{
        name: String
    }
    type Query {
        user:User
    }
`
const config : Config = {
    typeDefs:schema,
    resolvers : {
        Query:{
            user:(parent,args,ctx)=>{
                return { name:"WOnder"}
            }
        }
    },
    introspection: true,//these lines are required to use the gui 
    playground: true,//   of playground

}

const server : ApolloServer = new ApolloServer(config);

server.applyMiddleware({
    app,
    path: '/graphql'
  });

app.listen(3000,()=>{
    console.log("We are running on http://localhost:3000/graphql")
})

Server config

We will use makeExecutableSchema from graphql-tools. It allows us to generate a GraphQLSchema and to join the types and resolvers from a large number of files.

src/index.ts

...
const config : Config = {
    schema:schema,// schema definition from schema/index.ts
    introspection: true,//these lines are required to use  
    playground: true,//     playground

}

const server : ApolloServer = new ApolloServer(config);

server.applyMiddleware({
    app,
    path: '/graphql'
  });
...

schema/index.ts

import { makeExecutableSchema} from 'graphql-tools';
import schema from './graphql/schema.gql';
import {user,pet} from './resolvers';

const resolvers=[user,pet];

export default makeExecutableSchema({typeDefs:schema, resolvers: resolvers as any});

Database

Let's look at the database design: a registry of users and their pets.

Migration file

To create the database tables in Postgres we use Knex migration files.

require('ts-node/register');

module.exports = {
  development:{
    client: 'pg',
    connection: {
        database: "my_db",
        user: "username",
        password: "password"
      },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations',
      directory: 'migrations'
    },
    timezone: 'UTC'
  },
  testing:{
    client: 'pg',
    connection: {
        database: "my_db",
        user: "username",
        password: "password"
      },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations',
      directory: 'migrations'
    },
    timezone: 'UTC'
  },
  production:{
    client: 'pg',
    connection: {
        database: "my_db",
        user: "username",
        password: "password"
      },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations',
      directory: 'migrations'
    },
    timezone: 'UTC'
  }
};

Create the first migration file by running:

npx knex --knexfile ./src/database/knexfile.ts migrate:make -x ts initial

The migration file looks like this:

import * as Knex from "knex";


export async function up(knex: Knex): Promise<any> {
    return knex.schema.createTable('users',(table:Knex.CreateTableBuilder)=>{
        table.increments('id');
        table.string('full_name',36);
        table.integer('country_code');
        table.timestamps(true,true);

    })
    .createTable('pets',(table:Knex.CreateTableBuilder)=>{
        table.increments('id');
        table.string('name');
        table.integer('owner_id').references("users.id").onDelete("CASCADE");
        table.string('specie');
        table.timestamps(true,true);
    })
}


export async function down(knex: Knex): Promise<any> {
    // Revert the migration by dropping the tables in reverse order
    return knex.schema.dropTable("pets").dropTable("users");
}

Then run the migrations:

npx knex --knexfile ./src/database/knexfile.ts migrate:latest

Now there are two tables in the database, and we need a model for each table to execute queries. Both live in src/database/models — first Pet.ts, then User.ts:

import {Model} from 'objection';
import {Species,Maybe} from '../../__generated__/generated-types';

import User from './User';

class Pet extends Model{
    static tableName = "pets";
    id! : number;
    name?: Maybe<string>;
    specie?: Maybe<Species>; 
    created_at?:string;
    owner_id!:number;
    owner?:User;

    static jsonSchema ={
        type:'object',
        required:['name'],

        properties:{
            id:{type:'integer'},
            name:{type:'string', min:1, max:255},
            specie:{type:'string',min:1, max:255},
            created_at:{type:'string',min:1, max:255}
        }
    };

    static relationMappings=()=>({
        owner:{
            relation:Model.BelongsToOneRelation,
            modelClass:User,
            join: {
                from: 'pets.owner_id',
                to: 'users.id',
              }
        }
    });

    
};

export default Pet;


import {Model} from 'objection';
import {Maybe} from '../../__generated__/generated-types';
import Pet from './Pet';



class User extends Model{
    static tableName = "users";
    id! : number;
    full_name!: Maybe<string>;
    country_code! : Maybe<string>;
    created_at?:string;
    pets?:Pet[];

    static jsonSchema = {
        type:'object',
        required:['full_name'],

        properties:{
            id: { type:'integer'},
            full_name:{type :'string', min:1, max :255},
            country_code:{type :'string', min:1, max :255},
            created_at:{type :'string', min:1, max :255}
        }
    }

    static relationMappings =()=>({
        pets: {
            relation: Model.HasManyRelation,
           modelClass: Pet,
            join: {
              from: 'users.id',
              to: 'pets.owner_id'
            }
          }
    })
}

export default User;

Now we instantiate Knex and provide the instance to Objection:

import Knex from 'knex';
import { Model } from 'objection';
import dbconfig from './database/config';

const db = Knex(dbconfig["development"]);

Model.knex(db);

SCHEMA

enum Species{
    BIRDS,
    FISH,
    MAMMALS,
    REPTILES
}

type User {
    id: Int!
    full_name: String
    country_code: String
    created_at:String
    pets:[Pet]
}

type Pet {
    id: Int!
    name: String
    owner_id: Int!
    specie: Species
    created_at:String
    owner:User
}

input createUserInput{
    full_name: String!
    country_code: String!
}

input createPetInput{
    name: String!
    owner_id: Int!
    specie: Species!
}

input updateUserInput{
    id:Int!
    full_name: String
    country_code: String
}


input updatePetInput{
    id:Int!
    name: String!
}

type Query{
    pets:[Pet]
    users:[User]
    user(id:Int!):User
    pet(id:Int!):Pet
}

type Mutation{
    createPet(pet:createPetInput!):Pet
    createUser(user:createUserInput!):User
    deletePet(id:Int!):String
    deleteUser(id:Int!):String
    updatePet(pet:updatePetInput!):Pet
    updateUser(user:updateUserInput!):User
}

generating types

The packages below are required for better type safety in the resolvers:

yarn add -D @graphql-codegen/cli @graphql-codegen/typescript @graphql-codegen/typescript-resolvers @graphql-codegen/typescript-operations

Create the config file used to generate the types:

/codegen.yml
overwrite: true
schema: "http://localhost:3000/graphql"
documents: null
generates:
  src/__generated__/generated-types.ts:
    config:
      mappers:
        User:'./src/database/User.ts'
        UpdateUserInput:'./src/database/User.ts'
        Pet:'./src/database/Pet.ts'
    plugins:
      - "typescript"
      - "typescript-resolvers"

Add the below script to package.json:

...
"generate:types": "graphql-codegen --config codegen.yml"
...

When the server is up, run:

yarn run generate:types

To learn more about generating types from GraphQL, reading here is highly suggested.

resolvers

schema/resolvers/pet.ts:

import {Pet,User} from '../../database/models';
import {Resolvers} from '../../__generated__/generated-types';
import {UserInputError} from 'apollo-server-express';


const resolvers : Resolvers = {
    Query:{
        pet:async (parent,args,ctx)=>{
            const pet:Pet= await Pet.query().findById(args.id);

             return pet;          
        },
        pets: async (parent,args,ctx)=>{
            const pets:Pet[]= await Pet.query();

            return pets;

        }
    },
    Pet:{
        owner:async(parent,args,ctx)=>{
            const owner : User = await Pet.relatedQuery("owner").for(parent.id).first();

            return owner;
        }
    },
    Mutation:{
        createPet:async (parent,args,ctx)=>{
            let pet: Pet;
            try {
                 pet  = await Pet.query().insert({...args.pet});
               
            } catch (error) {
                throw new UserInputError("Bad user input fields required",{
                    invalidArgs: Object.keys(args),
                  });
                
            }
            return pet;
        },
        updatePet:async (parent,{pet:{id,...data}},ctx)=>{
            const pet : Pet = await Pet.query()
                                    .patchAndFetchById(id,data);

            return pet;
        },
        deletePet:async (parent,args,ctx)=>{
            const pet = await Pet.query().deleteById(args.id);
            return "Successfully deleted"
        },
    }
}


export default resolvers;
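
// schema/resolvers/user.ts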
import { Resolvers} from '../../__generated__/generated-types';
import {User,Pet} from '../../database/models';
import {UserInputError} from 'apollo-server-express';

interface assertion {
    [key: string]:string | number ;
}

type StringIndexed<T> = T & assertion;

const resolvers : Resolvers ={
    Query:{
        users: async (parent,args,ctx)=>{
            const users : User[] = await User.query();
            return users;
        },
        user:async (parent,args,ctx)=>{
            const user: User = await User.query().findById(args.id);

           return user;
        },
    },
    User:{
        pets:async (parent,args,ctx)=>{
            const pets : Pet[] = await User.relatedQuery("pets").for(parent.id);

            return pets;
        }
        
    },
    Mutation:{
        createUser:async (parent,args,ctx)=>{
            let user : User;
            try {
                user = await User.query().insert({...args.user});
            } catch (error) {
                console.log(error);
               throw new UserInputError('Email Invalido', {
                   invalidArgs: Object.keys(args),
                 });
            }
            return user;
        },
        updateUser:async (parent,{user:{id,...data}},ctx)=>{

            let user : User = await User.query().patchAndFetchById(id,data);

            return user;

        },
        deleteUser:async (parent,args,ctx)=>{
            const deleted = await User.query().deleteById(args.id);
            return "Succesfull deleted";
        },

    }
}


export default resolvers;

These resolvers let us execute all the operations defined in the schema above.
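For example, running a mutation like the following in the GraphQL Playground (the field values are made up) creates and returns a user:

mutation {
  createUser(user: { full_name: "Ada Lovelace", country_code: "44" }) {
    id
    full_name
    country_code
  }
}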

BONUS

At this point, two errors can come up.

It's not bad to have errors, but I prefer not to have them. The first error is resolved by splitting knexfile.ts and putting the required Knex configuration in a separate file (src/database/config.ts):

const default_config = {
    client: 'pg',
    connection: {
        database: "db",
        user: "user",
        password: "password"
      },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations',
      directory: 'migrations'
    },
    timezone: 'UTC'
  }
  interface KnexConfig {
    [key: string]: object;
  };
  const config : KnexConfig = {
    development:{
      ...default_config
    },
    testing:{
      ...default_config
    },
    production:{
      ...default_config
    }
  };

  export default config;
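
// src/database/knexfile.ts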
require('ts-node/register');
import config from './config';


module.exports= config["development"]

The second error was resolved by changing how the schema is imported, with help from this useful post. Now we should have our own working GraphQL API.

CONCLUSION

Yay! Now we have a GraphQL API. We have learnt how to generate TypeScript types from GraphQL and how to solve the issues along the way. I hope this tutorial helped you; I'll be posting more soon. Give suggestions in the comment box. Thank you.


React.js – The Best and Most Underrated Design Pattern

<Modal showCloseButton>
	<title>Modal title</title>
	<contents>Modal body text goes here.</contents>
	<dismissButton onClick={close}>Close</dismissButton>
	<actionButton onClick={save}>Save</actionButton>
</Modal> 

Sooner or later we will want to pass props to, and control the behavior of, the child elements of a component like this Modal. For example, the Modal above is made up of several distinct elements:

  • A title section at the top.
  • A close (cross) button at the top right corner.
  • A content area for some text.
  • A button to save the changes, "Save changes".
  • A dismiss button to close it, "Close".

Now, if we want to reuse the Modal properly, we must be able to modify it and its elements. This means users should get control over things such as the content that is displayed, the dispatched events, the style of the content, and every other element of the Modal. A naïve solution is to accept a dedicated prop for each of these concerns:

<Modal
  showCloseButton
  showDismissButton
  showActionButton
  title="Modal title"
  contents="Modal body text goes here."
  dismissButtonText="Close"
  actionButtonText="Save changes"
  handleDismiss={close}
  handleAction={save}
/>

The problem with this approach is that it spams the props mechanism, which makes the whole component less readable and more bloated. On the one hand it is less attractive, and on the other it limits the props users can pass to child elements, preventing them from having complete control over those elements. However, you can address this by providing generic props objects, where each props object represents a different element:

<Modal
  showCloseButton
  title="Modal title"
  contents="Modal body text goes here."
  dismissButtonProps={{
    text: 'Close',
    handler: close,
  }}
  actionButtonProps={{
    text: 'Save changes',
    handler: save,
  }}
/>

This solution works, but it does not solve the spam issue, and it gives up the syntactic sugar that JSX provides: you are obligated to use JSON-style attributes instead of HTML-style attribute assignments (attr="value").

Bootstrap to the rescue

Bootstrap takes a very shrewd approach: instead of defining props all over the place, it gives us the ability to compose the Modal's children directly. We can achieve the intended functionality by using the dedicated components Bootstrap provides:

<Modal.Dialog>
  <Modal.Header closeButton>
    <Modal.Title>Modal title</Modal.Title>
  </Modal.Header>

  <Modal.Body>
    <p>Modal body text goes here.</p>
  </Modal.Body>

  <Modal.Footer>
    <Button variant="secondary" onClick={close}>
      Close
    </Button>
    <Button variant="primary" onClick={save}>
      Save changes
    </Button>
  </Modal.Footer>
</Modal.Dialog>

As you can see, things are getting better, but we can take it one level further.

Although this is very declarative and clear, and we could stop here, we are still obligated to compose the entire modal every time we use it. That means we cannot rely on the Modal's children to fill in just the missing pieces, the way we could when part of the logic was implemented for us. Normally we do not want to write the Modal's content from scratch every time.

Moreover, we would like to be able to reuse the component as a template in future cases. However, there is no filter or restriction on the children input: you can pass whatever you want. Normally, we would prefer that the user only uses a few allowed elements so they do not mess things up, and if that is the case we need the right approach to enforce it.

Introducing the design pattern that contains everything

Now, if we look at our progress and gather what we have learned so far, the new pattern should have the following attributes:

  • Has complete control over child elements using props.children.
  • Has a template already in place for the user.
  • No spamming of the mechanism of props.
  • Has restrictions on the input.

Well, this sounds good and promising, so let's take a look at an example, using the Bootstrap Modal component as an anchor:

const ModalFromTheFuture = ({ showCloseButton, children }) => {
  const childProps = useChildProps(children, [
    'title',
    'contents',
    'dismissButton',
    'actionButton',
  ]);

  return (
    <Modal.Dialog>
      <Modal.Header closeButton={showCloseButton}>
        {childProps.title && <Modal.Title {...childProps.title} />}
      </Modal.Header>

      <Modal.Body>
        {childProps.contents && <p {...childProps.contents} />}
      </Modal.Body>

      <Modal.Footer>
        {childProps.actionButton && <Button {...childProps.actionButton} variant="secondary" />}
        {childProps.dismissButton && <Button {...childProps.dismissButton} variant="primary" />}
      </Modal.Footer>
    </Modal.Dialog>
  );
};

You can clearly see that the new modal component uses a hook called useChildProps(). This hook flattens nested props by going through props.children. Furthermore, to make sure only the right element names are used, it validates them against a provided whitelist.

const useChildProps = (children, whitelist) => {
  return useMemo(
    () =>
      [].concat(children).reduce((childProps, child) => {
        // Reject any child element whose tag name is not whitelisted
        if (whitelist && !whitelist.includes(child.type)) {
          throw Error(`element <${child.type}> is not supported`);
        }

        // Flatten the child's props into a map keyed by tag name
        childProps[child.type] = child.props;

        return childProps;
      }, {}),
    [children]
  );
};

<ModalFromTheFuture showCloseButton>
  <title>Modal title</title>
  <contents>Modal body text goes here.</contents>
  <dismissButton onClick={close}>Close</dismissButton>
  <actionButton onClick={save}>Save changes</actionButton>
</ModalFromTheFuture>

This might cause some confusion with native HTML tag names, but the same could be said about any other React component we or others use. And given the changing trends, such as component-based UIs (Angular, Vue, React) and web components, new tag names are not rare any more. So you should give the new design a try and not be afraid of getting your hands on it.

How can TypeScript be used with Node and Express?

At the start of my career as a web developer, I noticed that strongly typed languages were dominant in the market. I was impressed with the new features arriving in JavaScript and Python: the idea of not declaring variable types made me feel more productive, and I liked my work. The first time I heard about TypeScript, the idea seemed like taking a step back to the old days.


What changed my mind?

My strategy was this: for individual projects I used plain JavaScript, but when working in a team or on a large scale I preferred TypeScript. In this article I will try to explain the reason.


TYPESCRIPT

If you don't have any knowledge of TypeScript, I recommend reading the following overview:

https://www.tutorialspoint.com/typescript/typescript_overview.htm

and view the following: https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html

Comparison of Productivity and Maintenance

TypeScript is an object-oriented language used for application development. The extra project setup is repaid in the maintenance of large-scale projects. Let's see why:

Type safety = fewer errors

When we define types in our code, we allow the IDE to find errors and bugs in our use of functions and classes that we would otherwise only see at runtime.

Example:

function add(a: number, b: number): number {
  return a + b;
}

let mySum: number;
mySum = add(1, "two");

let myText: String;
myText = add(1, 2);

Writing this in Visual Studio Code, two errors are reported:

On mySum = add(1, "two"): an error, because a string argument is passed to a function that only accepts numbers.

On myText = add(1, 2): an error, because the result, a number, is assigned to a string variable.

Without TypeScript these two errors would have gone unnoticed and shown up as bugs at runtime.

Project Modules exposed by IDE

There can be hundreds of classes scattered across different files. When we declare types, the IDE finds the files they originate from and relates the objects to them.


Programmers coming from Java and C# often avoid moving to JavaScript because they find it difficult to maintain; TypeScript helps overcome this obstacle.

Setting up express project with Typescript

Let's see the steps taken to set up an Express.js project:

npm init

Install the TypeScript package:

npm install typescript -s

Typescript node package

TypeScript can't be run by the Node.js engine, which only runs JavaScript. The node typescript package is used as a transpiler, turning .ts files into .js scripts. Babel can also be used to transpile TypeScript; however, the market standard is to use Microsoft's package for this purpose.

The tsc script is put inside package.json:

"scripts": {
  "tsc": "tsc"
},

This lets us call the TypeScript compiler from the command line. The command below is used:

npm run tsc -- --init

This initializes TypeScript by creating the tsconfig.json file. In this file we uncomment the outDir option and choose the location where the transpiled .js files should be delivered:
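The relevant part of tsconfig.json would then look roughly like this (build is assumed as the output folder here, matching the folder created later):

{
  "compilerOptions": {
    "outDir": "./build"
  }
}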


Express.js installation

npm install express -s

TypeScript doesn't know the Express class types, since the Express and TypeScript packages are independent. A special npm package is used so TypeScript can recognize the Express classes:

npm install @types/express -s

HELLO WORLD

Below is an example:

https://expressjs.com/pt-br/starter/hello-world.html

var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello World!');
});

app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});

Now create an app folder and inside it create the file app.ts with the following code:

// lib/app.ts
import express = require('express');

// Create a new express application instance
const app: express.Application = express();

app.get('/', function (req, res) {
  res.send('Hello World!');
});

app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});

COMPILATION OF FIRST APPLICATION

npm run tsc

This command creates the build folder and the transpiled .js file:


Running EXPRESS

node build/app.js

on port 3000 we can see our output:


Running TypeScript without transpiling

TypeScript can be run directly on Node by using the ts-node package. It is meant for development; for the final deployment, use the transpiled JavaScript code. ts-node comes included as a dependency of another package, ts-node-dev. After installing ts-node-dev, we can run a command that restarts the server whenever any of the original project files change.

npm install ts-node-dev -s

some more scripts are added in package.json:

"scripts": {
  "tsc": "tsc",
  "dev": "ts-node-dev --respawn --transpileOnly ./app/app.ts",
  "prod": "tsc && node ./build/app.js"
},

Start the dev environment:

npm run dev

Run the production build:

npm run prod