5 Ways to Instantly Improve Your Angular Codebase

Angular is not easy, and building apps that are easy to read and maintain is no doubt an art. This article shares 5 ways to improve the quality of your Angular codebase, covering everything from naming your files to more complex topics such as state management with Redux. Learn how you can use all these tips to improve the way you code your Angular apps.

Let’s Begin!

1. Follow the Rules

People choose Angular over other frameworks for its rules. The Angular framework is clear about how things are to be done, which means it comes with certain rules of its own that should be followed to create a uniform codebase across an organization.

This approach is especially useful when people move between teams or companies: newcomers gel into the team quickly because the code already feels familiar.

In other words, you need to follow Angular design guidelines to get the most out of its framework. This will not only add quality to your code but will also make your life a lot easier.

The official Angular Style Guide collects a set of rules you may already be familiar with.

“We love to do things our way! We don’t want to follow someone else’s rules!”

If you don’t want to follow Angular’s rules, you should not choose it as your front-end framework; you won’t be happy working with it. Plenty of other frameworks are available that may better suit your expectations.

Naming the Files 

File naming is one example of the Angular rules you have to follow. Files in Angular follow a very particular scheme, also known as the naming convention. Every file containing an Angular structure, such as a component, a pipe, or a module, is named like this:

[name].[structure].[file-extension]

So, if you want to create a component to display customers, name it “customer.” The structure is “component,” and the file extension is “.ts”, “.css”, or “.html”:

customer.component.ts

The Angular CLI takes care of all this for you: the ng generate command creates the structure, and the resulting files follow the naming convention. Check this tutorial to learn more about the Angular CLI.
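The pattern itself is simple enough to express in one line. Here is a hypothetical helper (not part of Angular or its CLI) that illustrates the [name].[structure].[file-extension] scheme:

```typescript
// Hypothetical helper illustrating the [name].[structure].[file-extension]
// convention; the Angular CLI produces the same names via `ng generate`.
function angularFileName(name: string, structure: string, ext: string): string {
  return `${name.toLowerCase()}.${structure}.${ext}`;
}

const fileName = angularFileName('Customer', 'component', 'ts');
// fileName === "customer.component.ts"
```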

2. Group Code into Modules

Placing everything into the app module is common among developers, but it messes everything up. Avoid it and use modules instead.

Modules help organize your code into small chunks, which makes it easier to read and easier to find errors when troubleshooting. Beyond the cosmetic advantage, you also improve the user experience, because the client only downloads the parts of the app it actually needs.

Read a guide on modules if you are unfamiliar with them. However, don’t structure your modules any way you like; that would only make things worse. Luckily, Angular defines some conventions to help you structure your apps into modules.

Feature Modules

Feature modules are one of the categories of modules available in the Angular framework. As the name suggests, each one encapsulates a single feature. These modules are created in a separate folder bearing the feature’s name.

For instance, the feature module for the feature “feature” lives in a directory named feature, and the module file follows the naming convention shared above: feature.module.ts.

Why do you need feature modules?

They structure our code in a way that is easy to understand and read, and they mark clear boundaries between features. This helps prevent the confusion and potential bugs that otherwise arise from overlapping responsibilities.

Another benefit of feature modules is lazy loading, a technique that downloads only the required modules to a client’s device; the other modules are not downloaded at all.

For instance, in the case of the administrative section of a blog, it is unwise to serve that code to every user visiting the site.

Instead, the admin code is separated into its own feature module and loaded lazily. A user who visits the site only downloads the code for the blog section when visiting the blog; the admin JavaScript is only loaded when they visit the admin section.
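A lazy-loaded route definition can be sketched like this. The shape mirrors Angular’s `Routes` API, but the module loaders below are hypothetical stand-ins for the real `() => import('./admin/admin.module').then(m => m.AdminModule)` calls, so the sketch stays self-contained:

```typescript
// Sketch of lazy-loaded routes (assumed names; the loaders stand in for
// real dynamic `import()` calls the Angular router would invoke).
interface LazyRoute {
  path: string;
  loadChildren: () => Promise<{ name: string }>;
}

const routes: LazyRoute[] = [
  // Downloaded when a user visits /blog.
  { path: 'blog', loadChildren: () => Promise.resolve({ name: 'BlogModule' }) },
  // The admin bundle is only fetched when a user navigates to /admin.
  { path: 'admin', loadChildren: () => Promise.resolve({ name: 'AdminModule' }) },
];
```

Until `loadChildren` is actually invoked by navigation, no admin code is fetched; that is the whole point of lazy loading.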

Core and Shared Modules

Feature modules encapsulate everything into a separate module, so their contents cannot be used in other parts of the application without importing the whole module. In some situations, however, that doesn’t make much sense.

Going back to the blog example: suppose we had to import the whole admin module just to use a simple utility directive. That would be quite confusing and would also cancel out the benefits of lazy loading. For these cases, core and shared modules are used.

Shared Modules

  • Shared modules hold the pieces of your application that need to be used across several areas (features) of your application.
  • If a component is going to be reused in several features, it belongs in a shared module.
  • Services and pipes are also commonly declared in shared modules.
  • Shared modules provide a way to share common pieces to fill out feature-module “sections”.

A text-formatting module is a good example of a shared module: it contains a bunch of pipes to format text in a specific manner.

This module is then used by all the feature modules without breaking the encapsulation of the other modules.
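The kind of pipe such a shared text-formatting module would declare can be sketched as follows. In Angular the class would carry a `@Pipe({ name: 'truncate' })` decorator; here it is shown as a plain class (with an assumed name) so the transform logic stands on its own:

```typescript
// Sketch of a text-formatting pipe a shared module might declare.
// `TruncatePipe` is an illustrative name, not from the article.
class TruncatePipe {
  transform(value: string, limit = 10): string {
    return value.length > limit ? value.slice(0, limit) + '…' : value;
  }
}

const truncated = new TruncatePipe().transform('Hello Angular', 5);
// truncated === "Hello…"
```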

Core Module

Feature and shared modules are not enough to cover our requirements. We also need a place for app-wide services that are instantiated only once. These are encapsulated in a CoreModule, which lives in a directory called “core.”

We declare all such app-wide singleton services in this module, and the CoreModule itself is imported once into the app module.

This keeps our app-module nice and clean.

However, the core module is not only for services. Everything that is used app-wide but does not fit into a shared module belongs in the core module.

The loading spinner shown at app start is a good example: it is not used anywhere else in the app, so creating an extra shared module for it would be unsuitable.

Providing Private Services in Components

Usually, services in Angular are provided globally, at the application level. That global scope is the right choice whenever the global-singleton pattern is required. For example, if your service is responsible for storing things, you need exactly one global instance; otherwise, every component would have its own separate cache due to Angular’s scoped dependency injection.

Other services do not need to be provided globally and are used by just one component. It’s better to provide such a service inside the component itself instead of globally, especially when the service is tightly linked to that component.

Otherwise, you would have to define the service in a module just to make it accessible everywhere it might be needed.

This ties services to their features (feature modules), which makes them easier to find and understand in the right context. It also preserves the benefits of lazy loading and reduces the risk of dead code.
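The effect of provider scope can be sketched without Angular at all. The class and component names below are illustrative; the component-level construction mimics what `providers: [CacheService]` on a component would do:

```typescript
// Plain-TypeScript sketch (no Angular dependency) of why provider scope
// matters: a globally provided service is one shared instance, while a
// component-level provider gives each component its own instance.
class CacheService {
  private data = new Map<string, string>();
  set(key: string, value: string) { this.data.set(key, value); }
  get(key: string) { return this.data.get(key); }
}

// "Provided globally": every consumer shares this one instance.
const globalCache = new CacheService();

// "Provided in the component": each component gets a fresh instance.
class CustomerComponent {
  constructor(public cache = new CacheService()) {}
}

const first = new CustomerComponent();
const second = new CustomerComponent();
first.cache.set('customer', 'Alice');
// second.cache.get('customer') is undefined: the caches are independent,
// whereas globalCache would be shared by both components.
```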

3. Don’t Use Logic in Your Components

Keeping logic outside your components is always a good idea, and it also increases the quality of your code.

Here is why you should keep your logic out of your components:

  • Testing a user interface and its components is much harder than testing pure logic. This is why your business logic should live in a separate service.
  • Having your business logic in a separate service helps you write effective tests quickly. Other components can also use that logic when it is exposed as a service, so you reuse more code and consequently write less of it; code that does not exist cannot lower code quality.
  • Last but not least, the code becomes easier to read when the logic lives in a separate file.
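A minimal sketch of that separation, with illustrative names (the service, component, and VAT calculation are assumptions, not from the article):

```typescript
// Business logic extracted into a plain service class that can be
// unit-tested without rendering any component.
class PriceService {
  totalWithVat(net: number, vatRate = 0.19): number {
    return Math.round(net * (1 + vatRate) * 100) / 100;
  }
}

// The component only formats; the calculation lives in the service.
class CheckoutComponent {
  constructor(private prices = new PriceService()) {}
  displayTotal(net: number): string {
    return `${this.prices.totalWithVat(net)} EUR`;
  }
}

const label = new CheckoutComponent().displayTotal(100);
// label === "119 EUR"
```

A test for `PriceService` now needs no TestBed, no DOM, and no component at all.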

State

As for state, a lot of challenges arise from each component having its own state. You quickly lose track of which component is in which state, which makes errors hard to fix and leads to bugs nobody wants, especially in large applications.

4. Make Sure Your Async Code is Correct

As discussed above, Angular is a framework with strict rules to achieve code consistency. The same applies to asynchronous code: the Angular team uses the RxJS library for all asynchronous tasks. The library makes use of the observer pattern.

Avoid Promises

RxJS overlaps in functionality with the standard JavaScript promise. Both are designed to handle asynchronous code, but RxJS is far more powerful: unlike a promise, an RxJS observable can resolve to more than one value. It represents a stream of values over time.
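The difference can be sketched without pulling in rxjs at all. The `fromValues` helper below is an assumption of mine, a toy stand-in for an observable, not rxjs API:

```typescript
// Minimal sketch (no rxjs dependency): an observable-like stream can emit
// many values over time, while a promise settles exactly once.
function fromValues<T>(...values: T[]) {
  return {
    subscribe(next: (value: T) => void) {
      values.forEach(next);
    },
  };
}

const received: number[] = [];
fromValues(1, 2, 3).subscribe(v => received.push(v)); // three emissions

const single = Promise.resolve(1); // a promise delivers one value, ever
```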

You can also pass only one result to such a stream, which creates the overlap with promises. That raises an obvious question:

What should we use? The simple promise, which lets us use TypeScript’s await operator? The more powerful RxJS observable? Or both?

Here is my opinion: I like the style of the await operator for promises, but in my view we should stick to the framework’s opinion, and that is to use RxJS everywhere.

Use rxjs everywhere.

We can see this by looking at Angular’s HTTP client: it returns RxJS observables, even though an HTTP call can never yield more than one response.

Mixing the two is not a good solution either: you end up with different, incompatible implementations within the same application. This is not something you want.

Using the Async Pipe

As stated above, RxJS observables are a little complicated, and using them incorrectly can lead to serious bugs.

The most common mistake is forgetting to unsubscribe from the observable. This not only causes memory leaks but also triggers unwanted calculations and changes in your application:

public result;

ngOnInit() {
  this.http.get('').subscribe(result => {
    this.result = result;
  });
}

But avoiding this mistake is easy in Angular: use the async pipe. It automatically unsubscribes from the observable once the component is destroyed.

public result$;

ngOnInit() {
  this.result$ = this.http.get('');
}

and bind the observable in the template using the async pipe:

<p>{{ result$ | async }}</p>

This way the code looks simple and clean.

5. Use a Central State Management (Such as Redux)

As your app becomes larger, code quality can decline sharply. Hundreds of components, each with its own state, not only become confusing but are also difficult to debug.

Centralized state management is the solution to all these problems. What is centralized state management? It means that all of our application state is stored in one single location instead of being dispersed all over the app. The overall state is controlled by one instance, which is the only one allowed to make changes to the state. This approach has many advantages:

  • You don’t have to search for state across the component tree; it is all in one place.
  • It’s easy to transfer the state between applications or persist it to disk, because it is just one object that does not have to be collected from several places.
  • Problems like component-to-component communication are resolved as well: components simply react to state changes.
  • Depending on which form of central state management you choose, you also get nice features like time-travel debugging (Redux/ngrx).
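The core idea can be sketched in a few lines of plain TypeScript, with no Redux or ngrx dependency (the `counter` state and action names are illustrative):

```typescript
// Minimal central-store sketch: all state lives in one object, and
// dispatching actions through the store is the only way to change it.
type State = { counter: number };
type Action = { type: 'increment' } | { type: 'reset' };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case 'increment':
      return { counter: state.counter + 1 };
    case 'reset':
      return { counter: 0 };
  }
}

class Store {
  private state: State = { counter: 0 };
  getState(): State { return this.state; }
  dispatch(action: Action) { this.state = reducer(this.state, action); }
}

const store = new Store();
store.dispatch({ type: 'increment' });
store.dispatch({ type: 'increment' });
// store.getState().counter === 2
```

Because every change flows through `dispatch`, the state is always in one place and every transition can be logged or replayed, which is exactly what enables time-travel debugging.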

Should Redux/ngrx be used?

Again, there are different opinions about this out there. Here is my point of view: I don’t think everyone should start rewriting their apps to include Redux. Even when starting from scratch, Redux is not needed in most cases.

It totally depends on the kind of application you want to build. Here are the different scenarios:

  • If you are building a large application with many components, developed by a large team, Redux is the best option.
  • For medium-sized applications, no larger than the average app on an app store and built by a team of around ten people, Redux should be avoided. It comes with a lot of boilerplate code that would unnecessarily complicate your app.
  • For small apps, it’s a clear no.

In these medium and small applications, Redux would overcomplicate the code with its hundreds of boilerplate files, and I am not in favor of boilerplate code at all.

But there is a library under development that promises zero boilerplate when working with Redux and ngrx. It’s called angular-ngrx-data and is worth checking out.

Conclusion

I hope my 5 recommendations on how to increase the quality of your Angular codebase will help you a lot.

Share this article with your friends and colleagues and help them become better Angular developers.

Good Luck!

Challenges and Best Practices of Docker Container Security

In recent years, massive adoption of Docker has made container security an important concern for firms that use containers in development and production. Containers are more complex than virtual machines and other deployment technologies, and the process of securing Docker containers is correspondingly complex.

We will take a look at Docker container security, explain why it is so complex, and discuss default environments and best practices for monitoring containers for better security.

The following is a guide to container security.

Challenges of Docker container security:

Before Docker, many organisations used virtual machines or bare-metal servers to host applications. From a security perspective, these technologies are quite simple: when hardening your deployment and monitoring for security-relevant events, you need to focus on just two layers. APIs, overlay networks, and complex software-defined storage configurations are not a major part of virtual-machine or bare-metal deployments, so you do not have to worry about them.

A typical Docker environment has many more moving parts, so its security is much more complicated. Those moving parts include:

  • You probably have multiple Docker container images, each hosting an individual microservice, and probably multiple instances of each image running at any time. Each of those images and instances needs to be secured and monitored.
  • The Docker daemon needs to be secured to keep the containers and their host safe.
  • The host server, which might be bare metal or a virtual machine.
  • If you use a service like ECS to host your containers, that is another layer to secure.
  • APIs and overlay networks that facilitate communication between containers.
  • Data volumes and other storage systems that exist externally from your containers.

So if you are thinking that learning to secure Docker is tough, you are right: Docker security is undoubtedly more complex than securing the deployment technologies that came before it.

Best practices of Docker container security:

Luckily, these challenges can be overcome. This article is not an exhaustive guide to Docker security; for that, use the official Docker documentation as a reference. Below are some best practices:

#1 Set resource quotas

One easy thing to configure in Docker is resource quotas. Resource quotas limit the amount of memory and CPU a container can consume.

This is helpful for many reasons. It keeps your Docker environment efficient and prevents one container from starving other processes of system resources. It also increases security: a compromised container cannot consume a large amount of space or resources to do harm.

Resource quotas are easily set with command-line flags; see the Docker documentation for details.

#2 Do not run as root

We all know the feeling: we are tired and don’t want to get entangled in permission problems just to get an application to work, so running as root is the easy way out of permission restrictions.

If you are a beginner, that is sometimes okay in a Docker testing environment, but there is no reason good enough to let a Docker container run with root permissions in production.

This is an easy piece of Docker security advice to follow, because Docker doesn’t run containers as root by default; in a default configuration you don’t have to change anything. Letting a container run as root is a temptation to resist, however convenient it may be in some situations.

If you use Kubernetes to orchestrate your containers, you can explicitly prevent containers from starting as root for added security, using the MustRunAsNonRoot directive in a pod security policy.

#3 Secure container registries

Container registries are part of what makes Docker so powerful: they make it easy to set up central repositories from which container images can be downloaded.

However, using container registries is a security risk if you do not evaluate their security constraints. You can use Docker Trusted Registry, which can be installed behind your firewall to mitigate the risk of malicious images.

The registry is then only accessible from behind the firewall, and you can limit who can upload images to and download images from it. Role-based access control lets you manage access explicitly. Leaving your registry open to others may seem generous, but it is only wise if it does not become a vector for viruses and other harmful content.

#4 Use of trusted and secure images

We should be sure that the container images we use come from a trusted source. This is obvious, but there are many platforms from which images can be downloaded, and not all of them are trusted or verified.

Consider not using public container registries, or stick to official, trusted repositories such as the ones on Docker Hub.

You can also use image-scanning tools to identify harmful content. Most enterprise-level container registries have embedded scanning tools, such as Clair.

#5 Identify the source of your code

Docker images usually contain a mix of original code and packages from upstream sources. Even when an image comes from a trusted registry, it can contain packages from untrusted sources, and those packages may in turn be built from code taken from multiple outside sources.

That is why analysis tools are important. By downloading the sources of your Docker images and scanning the origin of the code, you can find out whether any of it comes from unknown sources.

#6 Network and API security

As we have seen above, Docker containers depend on APIs and networks to communicate. It is important to make sure your APIs and network architecture are secure, and to monitor API and network activity for anything unusual.

As APIs and networks are resources used by Docker rather than parts of Docker itself, steps for securing them are beyond the scope of this article. But checking the security of these resources is important.

In Conclusion

Docker is complex, and there is no simple trick for maintaining container security. You have to think carefully about the steps needed to secure your Docker containers and strengthen your container environment at many levels. This is the only way to enjoy all the benefits of Docker containers without major security issues.

Using Jest for Unit Testing of Gatsby, Typescript and React Testing Library

Setting up Jest and React Testing Library for TDD with Gatsby is easy, but it gets tricky when you plan to use TypeScript in your tests.

First, I installed jest, babel-jest, and babel-preset-gatsby to ensure the babel presets used internally by Gatsby are present:

npm install --save-dev jest babel-jest babel-preset-gatsby identity-obj-proxy tslint-react @types/jest

Configure Jest to work with Gatsby

Because Gatsby has its own babel configuration, we have to tell Jest manually to use babel-jest. The Gatsby website says to create a jest.config.js file. Look at the code below:

jest.config.js

const path = require("path")

module.exports = {
  setupFilesAfterEnv: [
    path.resolve(__dirname, "./jest-configs/setup-test-env.js"),
  ],
  transform: {
    // "^.+\\.(tsx?|jsx?)$": "ts-jest",
    "\\.svg": "<rootDir>/jest-configs/__mocks__/svgTransform.js",
    "^.+\\.(tsx?|jsx?)$": "<rootDir>/jest-configs/jest-preprocess.js",
  },
  moduleNameMapper: {
    // "\\.svg": "<rootDir>/jest-configs/__mocks__/file-mocks.js",
    "\\.svg": "<rootDir>/jest-configs/__mocks__/svgTransform.js",
    "typeface-*": "identity-obj-proxy",
    ".+\\.(css|styl|less|sass|scss)$": "identity-obj-proxy",
    ".+\\.(jpg|jpeg|png|gif|eot|otf|webp|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$": "<rootDir>/jest-configs/__mocks__/file-mocks.js",
  },
  testPathIgnorePatterns: ["node_modules", ".cache", "public"],
  transformIgnorePatterns: ["node_modules/(?!(gatsby)/)", "\\.svg"],
  globals: {
    __PATH_PREFIX__: "",
  },
  testRegex: "(/__tests__/.*|\\.(test|spec))\\.(ts|tsx)$",
  moduleFileExtensions: ["ts", "tsx", "js"],
  collectCoverage: false,
  coverageReporters: ["lcov", "text", "html"],
}

svgTransform.js

module.exports = {
  process() {
    return 'module.exports = {};';
  },
  getCacheKey() {
    // The output is always the same.
    return 'svgTransform';
  },
};

The transform option tells Jest that all ts, tsx, js, or jsx files should be transformed using the jest-preprocess.js file.

jest-configs/jest-preprocess.js

const babelOptions = {
  presets: [
    "@babel/preset-react",
    "babel-preset-gatsby",
    "@babel/preset-typescript",
  ],
};

module.exports = require("babel-jest").createTransformer(babelOptions)

Some code also needs to go into setup-test-env.js.
The Jest configuration docs explain the setupFilesAfterEnv option:

A list of paths to modules that run some code to configure or set up the testing framework before each test.

jest-configs/setup-test-env.js

import “@testing-library/jest-dom/extend-expect”

That should leave Jest properly configured. Now I’ll install React Testing Library and jest-dom as dev dependencies with npm:

npm install --save-dev @testing-library/react @testing-library/jest-dom

Now run npx jest, and we are good to go!

Now I will write my first test and run it. I like TDD because it is fast: we write the test before the code, and the test should fail at the beginning. Read this up.
I’ll create a folder named __tests__ in the root folder of my project, then create a file named test.spec.tsx and paste this code into it:

__tests__/test.spec.tsx

import React from "react"
import { render } from "@testing-library/react"

// You have to write data-testid
const Title = () => <h1 data-testid="hero-title">Gatsby is awesome!</h1>

test("Displays the correct title", () => {
  const { getByTestId } = render(<Title />)
  // Assertion
  expect(getByTestId("hero-title")).toHaveTextContent("Gatsby is awesome!")
  // --> Test will pass
})

Run yarn install or npm install if you get errors like:

Cannot find module 'react' from 'test.spec.tsx'
    > 1 | import React from "react"

YAAYYY WE ACCOMPLISHED UNIT TESTING WITH TYPESCRIPT AND GATSBY AND JEST AND REACT TESTING LIBRARY

I am very happy with this. I am just starting out with TypeScript and React, so this was a great deal of learning for me. I’ll put up more posts about writing real code using TDD. Stay tuned!

Developing a GraphQL server in Next.js

Next.js is mostly thought of as a frontend React framework: it provides server-side rendering, a built-in routing system, and many performance-related features. Since Next.js also supports API routes, it provides the backend and frontend to React in the same package and setup.

In this article, we will learn how to use API routes to set up a GraphQL API inside a Next.js app. It starts with the basic setup, covers CORS, loads data from Postgres using the Knex package, and then improves performance with the DataLoader package to avoid costly N+1 queries.

You can view the source code here.

Setting up Next.js

To set up Next.js, run the command npx create-next-app. (If you need npx, npm i -g npx installs it globally on your system.)

There is also an example that sets up Next.js with a GraphQL API:

npx create-next-app --example api-routes-graphql

Adding an API route

With Next.js set up, we’re going to add an API (server) route to our app. This is as easy as creating a file called graphql.js within the pages/api folder. For now, its contents will be:

export default (_req, res) => {
  res.end("GraphQL!");
};

What we want to produce

Now we want to try loading data efficiently from our Postgres database:

{
  albums(first: 5) {
    id
    name
    year
    artist {
      id
      name
    }
  }
}

output:

{
  "data": {
    "albums": [
      {
        "id": "1",
        "name": "Turn It Around",
        "year": "2003",
        "artist": {
          "id": "1",
          "name": "Comeback Kid"
        }
      },
      {
        "id": "2",
        "name": "Wake the Dead",
        "year": "2005",
        "artist": {
          "id": "1",
          "name": "Comeback Kid"
        }
      }
    ]
  }
}

GraphQL Basic setup

There are four steps to set up GraphQL:

  1. Define the type definitions that describe the GraphQL schema.
  2. Create the resolvers, which are able to respond to a query or mutation.
  3. Create the Apollo server.
  4. Create the handler that wires everything into the Next.js API request/response lifecycle.

Having imported the gql function from apollo-server-micro, we can define the type definitions that describe the schema of our GraphQL server:

import { ApolloServer, gql } from "apollo-server-micro";

const typeDefs = gql`
  type Query {
    hello: String!
  }
`;

With our schema defined, we can now write the code that enables the server to answer queries and mutations. This is called a resolver: every field requires a function that produces its result, and a resolver function’s result must align with the defined types.

The arguments received by resolver functions are:

  • parent: ignored at the query level.
  • args: the arguments passed to the field in the query, accessible inside our resolver function.
  • context: global state, such as the authenticated user or a global DataLoader instance.

const resolvers = {
  Query: {
    hello: (_parent, _args, _context) => "Hello!"
  }
};

Passing typeDefs and resolvers to a new instance of ApolloServer gets us up and running:

const apolloServer = new ApolloServer({
  typeDefs,
  resolvers,
  context: () => {
    return {};
  }
});

From apolloServer we can access a handler, which takes care of the request and response lifecycle. One additional config needs to be exported: it stops the body of incoming HTTP requests from being parsed, which is required for GraphQL to work correctly:

const handler = apolloServer.createHandler({ path: "/api/hello" });

export const config = {
  api: {
    bodyParser: false
  }
};

export default handler;

Addition of CORS support

If we want to enable or limit cross-origin requests through CORS, we can add the micro-cors package:

import Cors from "micro-cors";

const cors = Cors({
  allowMethods: ["POST", "OPTIONS"]
});

export default cors(handler);

In the case above, the allowed cross-origin HTTP methods are limited to POST and OPTIONS, and the default export changes to the handler wrapped in the cors function.

Dynamic data with Postgres and Knex

Hardcoded data gets boring quickly, so now is the time to load data from an existing database. For this, the required packages need to be installed:

yarn add knex pg

Now create a knexfile.js file and configure Knex so it can connect to our database. An ENV variable supplies the database connection string; have a look at the article on setting up secrets. Locally, the ENV variable looks like this:

PG_CONNECTION_STRING="postgres://[email protected]:5432/next-graphql"

// knexfile.js
module.exports = {
  development: {
    client: "postgresql",
    connection: process.env.PG_CONNECTION_STRING,
    migrations: {
      tableName: "knex_migrations"
    }
  },

  production: {
    client: "postgresql",
    connection: process.env.PG_CONNECTION_STRING,
    migrations: {
      tableName: "knex_migrations"
    }
  }
};

Now we can create migrations to set up our tables. Empty migration files can be created with yarn run knex migrate:make <name>:

exports.up = function(knex) {
  return knex.schema.createTable("artists", function(table) {
    table.increments("id");
    table.string("name", 255).notNullable();
    table.string("url", 255).notNullable();
  });
};

exports.down = function(knex) {
  return knex.schema.dropTable("artists");
};

exports.up = function(knex) {
  return knex.schema.createTable("albums", function(table) {
    table.increments("id");
    table.integer("artist_id").notNullable();
    table.string("name", 255).notNullable();
    table.string("year").notNullable();

    table.index("artist_id");
    table.index("name");
  });
};

exports.down = function(knex) {
  return knex.schema.dropTable("albums");
};

With those tables in place, run the following insert statements (for example in Postico) to set up a few dummy records:

INSERT INTO artists("name", "url") VALUES('Comeback Kid', 'http://comeback-kid.com/');
INSERT INTO albums("artist_id", "name", "year") VALUES(1, 'Turn It Around', '2003');
INSERT INTO albums("artist_id", "name", "year") VALUES(1, 'Wake the Dead', '2005');

The last step is creation of a connection to our DB within the graphql.js file.

import knex from "knex";

const db = knex({
  client: "pg",
  connection: process.env.PG_CONNECTION_STRING
});

New resolvers and definitions

Now remove the hello query and resolver, and replace them with definitions for loading tables from the database:

const typeDefs = gql`
  type Query {
    albums(first: Int = 25, skip: Int = 0): [Album!]!
  }

  type Artist {
    id: ID!
    name: String!
    url: String!
    albums(first: Int = 25, skip: Int = 0): [Album!]!
  }

  type Album {
    id: ID!
    name: String!
    year: String!
    artist: Artist!
  }
`;

const resolvers = {
  Query: {
    albums: (_parent, args, _context) => {
      return db
        .select("*")
        .from("albums")
        .orderBy("year", "asc")
        .limit(Math.min(args.first, 50))
        .offset(args.skip);
    }
  },

  Album: {
    id: (album, _args, _context) => album.id,
    artist: (album, _args, _context) => {
      return db
        .select("*")
        .from("artists")
        .where({ id: album.artist_id })
        .first();
    }
  },

  Artist: {
    id: (artist, _args, _context) => artist.id,
    albums: (artist, args, _context) => {
      return db
        .select("*")
        .from("albums")
        .where({ artist_id: artist.id })
        .orderBy("year", "asc")
        .limit(Math.min(args.first, 50))
        .offset(args.skip);
    }
  }
};

Notice that we did not have to define a resolver for every single field: when a field has no resolver, GraphQL simply reads the attribute of the same name from the parent object. That also means the id resolvers above are redundant and could be removed.
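A minimal sketch of what that default behavior amounts to (this is illustrative, not the actual graphql-js implementation): when no resolver function is supplied, the field value is simply the same-named property of the parent object.

```typescript
// Illustrative sketch of GraphQL's default field resolution:
// read the same-named property off the parent object.
type AnyObject = Record<string, unknown>;

function defaultFieldResolver(parent: AnyObject, fieldName: string): unknown {
  const value = parent[fieldName];
  // graphql-js also invokes the property if it happens to be a function
  return typeof value === "function" ? (value as () => unknown)() : value;
}

const album = { id: 1, name: "Turn It Around", year: "2003" };
console.log(defaultFieldResolver(album, "year")); // prints: 2003
```

This is why `Album.id` and `Artist.id` above can be deleted without changing the API's behavior.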

DataLoader and Avoiding N+1 Queries

There is a hidden problem with the resolvers as written: when resolving a list of albums, a separate SQL query runs for each album's artist, so N albums trigger N additional queries. This is the classic N+1 query problem, and there is a great article on how DataLoader solves it.

The first step is to define a loader. A loader collects IDs and then loads all of them in a single batch:

import DataLoader from "dataloader";

const loader = {
  artist: new DataLoader(ids =>
    db
      .table("artists")
      .whereIn("id", ids)
      .select()
      .then(rows => ids.map(id => rows.find(row => row.id === id)))
  )
};
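The subtle part of the batch function above is the final `.then`: the database may return rows in any order (and may omit missing ids), but DataLoader requires exactly one result per input id, in input order. The re-ordering step in isolation (row values here are illustrative):

```typescript
// Re-align fetched rows with the input ids, as DataLoader requires:
// one slot per id, in the same order, undefined for missing rows.
interface ArtistRow { id: number; name: string }

function alignRows(ids: number[], rows: ArtistRow[]): (ArtistRow | undefined)[] {
  return ids.map(id => rows.find(row => row.id === id));
}

const fetched: ArtistRow[] = [
  { id: 2, name: "Comeback Kid" },
  { id: 1, name: "Misfits" },
];
console.log(alignRows([1, 2, 3], fetched).map(r => r && r.name));
// [ 'Misfits', 'Comeback Kid', undefined ]
```

Skipping this step is a common DataLoader bug: results silently end up attached to the wrong parent objects.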

The loader is passed to our GraphQL resolvers via the context:

const apolloServer = new ApolloServer({
  typeDefs,
  resolvers,
  context: () => {
    return { loader };
  }
});

This allows us to update the artist resolver to use the DataLoader:

const resolvers = {
  //...
  Album: {
    id: (album, _args, _context) => album.id,
    artist: (album, _args, { loader }) => {
      return loader.artist.load(album.artist_id);
    }
  }
  //...
};

With this change, the end result is a single database query that loads all artists at once. The N+1 issue is resolved.

Conclusion

In this article, we were able to create a GraphQL server with CORS support, loading data from Postgres, and stomping out N+1 performance issues using DataLoader. Not bad for a day’s work! The next step might involve adding mutations along with some authentication, enabling users to create and modify data with the correct permissions. As you can see, Next.js is no longer just for the frontend: it has first-class support for server endpoints and is the perfect place to put your GraphQL API.


GraphQL, TypeScript, and PostgreSQL API

Introduction

GraphQL and TypeScript form one of the most popular stacks these days. I had used vanilla JavaScript in a recent project, and I have used TypeScript many times, but I had never combined them like this. I followed a tutorial that helped me a lot, so I thought I would write a guide for others too. Before starting, let us see why these tools are worth using.


Why GraphQL, TypeScript, and PostgreSQL?

GraphQL provides a description of the data in our API. It helps us understand exactly what clients need, and it shines when dealing with large amounts of data, since a client can fetch everything it needs in a single query.

TypeScript is a superset of JavaScript. When a JavaScript codebase grows large and becomes messy to reuse or maintain, TypeScript's static types help keep it manageable.

PostgreSQL is a personal preference here, and it is open source. You can read the following link for more details:

https://www.compose.com/articles/what-postgresql-has-over-other-open-source-sql-databases/

Preconditions

  1. yarn (npm can also be used)
  2. Node.js v10 or later
  3. PostgreSQL 12
  4. Basic TypeScript knowledge

Folder structure

The project is structured in the following way:

graphql_api/
       ...
        dist/
          bundle.js
        src/
         database/
              knexfile.ts
              config.ts
              migrations/
              models/
                User.ts
                Pet.ts
          __generated__/
          schema/
              resolvers/
                  user.ts
                  pet.ts
                  index.ts

              graphql/
                  schema.ts
              index.ts
          index.ts 

Dependencies

  • Apollo Server: an open source GraphQL server maintained by the community. It works with Node.js and the popular HTTP frameworks.
  • Objection: Sequelize could also be used, but Objection.js is a good fit because it is an ORM that embraces SQL.

Development

  • Webpack: webpack bundles our JavaScript modules. Node.js cannot load files like .gql or .graphql on its own, which is why we use webpack with the appropriate loaders. Install the main dependencies:
yarn add graphql apollo-server-express express body-parser objection pg knex

and some dev dependencies:

yarn add -D typescript @types/graphql @types/express @types/node graphql-tag concurrently nodemon ts-node webpack webpack-cli webpack-node-externals

Configuration

Create a tsconfig.json:

{
  "compilerOptions": {
    "target": "es5",                          /* Specify ECMAScript target version. */
    "module": "commonjs",                     /* Specify module code generation. */
    "outDir": "dist",                         /* Redirect output structure to the directory. */
    "rootDir": "src",                         /* Specify the root directory of input files. */
    "strict": true,                           /* Enable all strict type-checking options. */
    "moduleResolution": "node",               /* Specify module resolution strategy. */
    "skipLibCheck": true,                     /* Skip type checking of declaration files. */
    "forceConsistentCasingInFileNames": true  /* Disallow inconsistently-cased references to the same file. */
  },
  "files": ["./index.d.ts"]
}

Webpack

const path = require('path');
const {CheckerPlugin} = require('awesome-typescript-loader');
var nodeExternals = require('webpack-node-externals');

module.exports = {
  mode: 'production',
  entry: './src/index.ts',
  target:'node',
  externals: [nodeExternals(),{ knex: 'commonjs knex' }],
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'
  },
  resolve: {
    extensions: [ ".mjs",'.js', '.ts','.(graphql|gql)'],
    modules: [
        
        'src',
    ]
},
  module:{
      rules:[
        {
            test: /\.(graphql|gql)$/,
            exclude: /node_modules/,
            loader: 'graphql-tag/loader'
        },
        {
            test: /\.ts$/,
            exclude: /node_modules/,
            loaders: 'awesome-typescript-loader'
        }
      ]
  },
  plugins:[
    new CheckerPlugin(),
  ]
  
};

Hello World example

Add the following script to the package.json file:

"scripts":{
     "dev": "concurrently \" nodemon ./dist/bundle.js \" \" webpack --watch\" "
}

index.ts

import express, { Application } from 'express';
import {  ApolloServer , Config } from 'apollo-server-express';


const app: Application  = express();

const schema = `
    type User{
        name: String
    }
    type Query {
        user:User
    }
`
const config : Config = {
    typeDefs:schema,
    resolvers : {
        Query:{
            user:(parent,args,ctx)=>{
                return { name: "Wonder" };
            }
        }
    },
    introspection: true, // these two lines are required
    playground: true,    // to use the GraphQL Playground GUI

}

const server : ApolloServer = new ApolloServer(config);

server.applyMiddleware({
    app,
    path: '/graphql'
  });

app.listen(3000,()=>{
    console.log("We are running on http://localhost:3000/graphql")
})

Server config

We will use makeExecutableSchema from graphql-tools. It allows us to generate a GraphQLSchema and to combine types and resolvers from any number of files.

src/index.ts

...
const config : Config = {
    schema:schema,// schema definition from schema/index.ts
    introspection: true, // these two lines are required
    playground: true,    // to use the GraphQL Playground GUI

}

const server : ApolloServer = new ApolloServer(config);

server.applyMiddleware({
    app,
    path: '/graphql'
  });
...

schema/index.ts

import { makeExecutableSchema} from 'graphql-tools';
import schema from './graphql/schema.gql';
import {user,pet} from './resolvers';

const resolvers=[user,pet];

export default makeExecutableSchema({typeDefs:schema, resolvers: resolvers as any});
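Passing `[user, pet]` works because makeExecutableSchema accepts an array of resolver maps and merges them per type. A sketch of what that merge amounts to (illustrative, not the actual graphql-tools implementation):

```typescript
// Illustrative per-type merge of several resolver maps into one,
// as happens when makeExecutableSchema receives an array of resolvers.
type FieldResolvers = Record<string, unknown>;
type ResolverMap = Record<string, FieldResolvers>;

function mergeResolvers(maps: ResolverMap[]): ResolverMap {
  const merged: ResolverMap = {};
  for (const map of maps) {
    for (const [typeName, fields] of Object.entries(map)) {
      // later maps can add fields to a type defined earlier
      merged[typeName] = { ...(merged[typeName] ?? {}), ...fields };
    }
  }
  return merged;
}

const userResolvers = { Query: { users: () => [] } };
const petResolvers = { Query: { pets: () => [] }, Pet: { owner: () => null } };
console.log(Object.keys(mergeResolvers([userResolvers, petResolvers]).Query));
// [ 'users', 'pets' ]
```

This is what lets us keep user.ts and pet.ts as separate files while still exposing a single Query type.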

Database

Let's look at the database schema: a registry of users and their pets.

Migration file

To create the tables in Postgres we use Knex migration files. First, the knexfile:

require('ts-node/register');

module.exports = {
  development:{
    client: 'pg',
    connection: {
        database: "my_db",
        user: "username",
        password: "password"
      },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations',
      directory: 'migrations'
    },
    timezone: 'UTC'
  },
  testing:{
    client: 'pg',
    connection: {
        database: "my_db",
        user: "username",
        password: "password"
      },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations',
      directory: 'migrations'
    },
    timezone: 'UTC'
  },
  production:{
    client: 'pg',
    connection: {
        database: "my_db",
        user: "username",
        password: "password"
      },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations',
      directory: 'migrations'
    },
    timezone: 'UTC'
  }
};

Now create the first migration file:

npx knex --knexfile ./src/database/knexfile.ts migrate:make -x ts initial

The migration file looks like this:

import * as Knex from "knex";


export async function up(knex: Knex): Promise<any> {
    return knex.schema.createTable('users',(table:Knex.CreateTableBuilder)=>{
        table.increments('id');
        table.string('full_name',36);
        table.integer('country_code');
        table.timestamps(true,true);

    })
    .createTable('pets',(table:Knex.CreateTableBuilder)=>{
        table.increments('id');
        table.string('name');
        table.integer('owner_id').references("users.id").onDelete("CASCADE");
        table.string('specie');
        table.timestamps(true,true);
    })
}


export async function down(knex: Knex): Promise<any> {
    return knex.schema.dropTable('pets').dropTable('users');
}

Run the migration:

npx knex --knexfile ./src/database/knexfile.ts migrate:latest

Now there are two tables in the database, and we need a model for each table in order to run queries. In src/database/models, Pet.ts looks like this:

import {Model} from 'objection';
import {Species,Maybe} from '../../__generated__/generated-types';

import User from './User';

class Pet extends Model{
    static tableName = "pets";
    id! : number;
    name?: Maybe<string>;
    specie?: Maybe<Species>; 
    created_at?:string;
    owner_id!:number;
    owner?:User;

    static jsonSchema ={
        type:'object',
        required:['name'],

        properties:{
            id:{type:'integer'},
            name:{type:'string', min:1, max:255},
            specie:{type:'string',min:1, max:255},
            created_at:{type:'string',min:1, max:255}
        }
    };

    static relationMappings=()=>({
        owner:{
            relation:Model.BelongsToOneRelation,
            modelClass:User,
            join: {
                from: 'pets.owner_id',
                to: 'users.id',
              }
        }
    });

    
};

export default Pet;


User.ts:

import {Model} from 'objection';
import {Maybe} from '../../__generated__/generated-types';
import Pet from './Pet';



class User extends Model{
    static tableName = "users";
    id! : number;
    full_name!: Maybe<string>;
    country_code! : Maybe<string>;
    created_at?:string;
    pets?:Pet[];

    static jsonSchema = {
        type:'object',
        required:['full_name'],

        properties:{
            id: { type:'integer'},
            full_name:{type :'string', min:1, max :255},
            country_code:{type :'string', min:1, max :255},
            created_at:{type :'string', min:1, max :255}
        }
    }

    static relationMappings =()=>({
        pets: {
            relation: Model.HasManyRelation,
           modelClass: Pet,
            join: {
              from: 'users.id',
              to: 'pets.owner_id'
            }
          }
    })
}

export default User;

Now we instantiate Knex and hand the instance to Objection:

import Knex from 'knex';
import { Model } from 'objection';
import dbconfig from './database/config';

const db = Knex(dbconfig["development"]);

Model.knex(db);

SCHEMA

enum Species{
    BIRDS,
    FISH,
    MAMMALS,
    REPTILES
}

type User {
    id: Int!
    full_name: String
    country_code: String
    created_at:String
    pets:[Pet]
}

type Pet {
    id: Int!
    name: String
    owner_id: Int!
    specie: Species
    created_at:String
    owner:User
}

input createUserInput{
    full_name: String!
    country_code: String!
}

input createPetInput{
    name: String!
    owner_id: Int!
    specie: Species!
}

input updateUserInput{
    id:Int!
    full_name: String
    country_code: String
}


input updatePetInput{
    id:Int!
    name: String!
}

type Query{
    pets:[Pet]
    users:[User]
    user(id:Int!):User
    pet(id:Int!):Pet
}

type Mutation{
    createPet(pet:createPetInput!):Pet
    createUser(user:createUserInput!):User
    deletePet(id:Int!):String
    deleteUser(id:Int!):String
    updatePet(pet:updatePetInput!):Pet
    updateUser(user:updateUserInput!):User
}

Generating types

The packages below are required for better type safety in the resolvers:

yarn add -D @graphql-codegen/cli @graphql-codegen/typescript @graphql-codegen/typescript-resolvers @graphql-codegen/typescript-operations

Create the config file used to generate the types:

/codegen.yml
overwrite: true
schema: "http://localhost:3000/graphql"
documents: null
generates:
  src/__generated__/generated-types.ts:
    config:
      mappers:
        User:'./src/database/User.ts'
        UpdateUserInput:'./src/database/User.ts'
        Pet:'./src/database/Pet.ts'
    plugins:
      - "typescript"
      - "typescript-resolvers"

Add the script below to package.json:

...
"generate:types": "graphql-codegen --config codegen.yml"
...

When the server is up, run:

yarn run generate:types

Generating types from GraphQL in this way is highly recommended; the GraphQL Code Generator documentation covers it in depth.

Resolvers

schema/resolvers/pet.ts:

import {Pet,User} from '../../database/models';
import {Resolvers} from '../../__generated__/generated-types';
import {UserInputError} from 'apollo-server-express';


const resolvers : Resolvers = {
    Query:{
        pet:async (parent,args,ctx)=>{
            const pet:Pet= await Pet.query().findById(args.id);

             return pet;          
        },
        pets: async (parent,args,ctx)=>{
            const pets:Pet[]= await Pet.query();

            return pets;

        }
    },
    Pet:{
        owner:async(parent,args,ctx)=>{
            const owner : User = await Pet.relatedQuery("owner").for(parent.id).first();

            return owner;
        }
    },
    Mutation:{
        createPet:async (parent,args,ctx)=>{
            let pet: Pet;
            try {
                 pet  = await Pet.query().insert({...args.pet});
               
            } catch (error) {
                throw new UserInputError("Bad user input fields required",{
                    invalidArgs: Object.keys(args),
                  });
                
            }
            return pet;
        },
        updatePet:async (parent,{pet:{id,...data}},ctx)=>{
            const pet : Pet = await Pet.query()
                                    .patchAndFetchById(id,data);

            return pet;
        },
        deletePet:async (parent,args,ctx)=>{
            const pet = await Pet.query().deleteById(args.id);
            return "Successfully deleted"
        },
    }
}


export default resolvers;
schema/resolvers/user.ts:

import { Resolvers} from '../../__generated__/generated-types';
import {User,Pet} from '../../database/models';
import {UserInputError} from 'apollo-server-express';

interface assertion {
    [key: string]:string | number ;
}

type StringIndexed<T> = T & assertion;

const resolvers : Resolvers ={
    Query:{
        users: async (parent,args,ctx)=>{
            const users : User[] = await User.query();
            return users;
        },
        user:async (parent,args,ctx)=>{
            const user : User = await User.query().findById(args.id);

           return user;
        },
    },
    User:{
        pets:async (parent,args,ctx)=>{
            const pets : Pet[] = await User.relatedQuery("pets").for(parent.id);

            return pets;
        }
        
    },
    Mutation:{
        createUser:async (parent,args,ctx)=>{
            let user : User;
            try {
                user = await User.query().insert({...args.user});
            } catch (error) {
                console.log(error);
               throw new UserInputError('Email Invalido', {
                   invalidArgs: Object.keys(args),
                 });
            }
            return user;
        },
        updateUser:async (parent,{user:{id,...data}},ctx)=>{

            let user : User = await User.query().patchAndFetchById(id,data);

            return user;

        },
        deleteUser:async (parent,args,ctx)=>{
            const deleted = await User.query().deleteById(args.id);
            return "Successfully deleted";
        },

    }
}


export default resolvers;

This lets us execute all of the operations defined in the schema.
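Both updatePet and updateUser rely on the same destructuring trick: the input's id is split off and the remaining fields become the patch passed to patchAndFetchById. The pattern in isolation (the names here are illustrative):

```typescript
// Separate the id from the remaining patch fields, as the resolver
// signatures do with ({pet: {id, ...data}}) and ({user: {id, ...data}}).
interface UpdateInput { id: number; [field: string]: unknown }

function splitPatch({ id, ...data }: UpdateInput) {
  // `data` contains every input field except `id`
  return { id, data };
}

console.log(splitPatch({ id: 7, full_name: "Ada" }));
// { id: 7, data: { full_name: 'Ada' } }
```

This keeps the primary key out of the patch object, so the update can never overwrite a row's id.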

BONUS

Two errors show up at this point. Errors are not the end of the world, but I prefer not to have them. The first one is resolved by splitting knexfile.ts: put the configuration knex needs in a separate file, src/database/config.ts:

const default_config = {
    client: 'pg',
    connection: {
        database: "db",
        user: "user",
        password: "password"
      },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations',
      directory: 'migrations'
    },
    timezone: 'UTC'
  }
  interface KnexConfig {
    [key: string]: object;
  };
  const config : KnexConfig = {
    development:{
      ...default_config
    },
    testing:{
      ...default_config
    },
    production:{
      ...default_config
    }
  };

  export default config;

knexfile.ts then simply re-exports the development configuration:

require('ts-node/register');
import config from './config';

module.exports = config["development"];

The second error was resolved by fixing the schema import, with help from this useful post. Now we should have our own working GraphQL API.

CONCLUSION

Yay! Now we have a GraphQL API. We have learned how to generate TypeScript types from GraphQL and how to solve the issues along the way. I hope this tutorial helped you; I'll be posting more soon. Leave your suggestions in the comment box. Thank you!


How to Use TypeScript with Node and Express

At the start of my career as a web developer, I noticed that strongly typed languages dominated the market, yet I was impressed by the new features arriving in JavaScript and Python. The idea of not having to declare variable types made me more productive, and I enjoyed my work. So the first time I heard about TypeScript, it sounded like a step back to the old days.


What changed my mind?

My strategy was this: for individual projects I used plain JavaScript, but when working in a team or at a larger scale I preferred TypeScript. In this article I will try to explain why.


TYPESCRIPT

If you don't have any knowledge of TypeScript yet, I recommend reading the following overview:

https://www.tutorialspoint.com/typescript/typescript_overview.htm

and view the following: https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html

Comparison of Productivity and Maintenance

TypeScript is an object-oriented language designed for application development at scale. The extra project setup is repaid by lower maintenance costs on large projects. Let's see why:

Type safety = fewer errors

When we define types in our code, we allow the IDE to detect errors and bugs in our use of functions and classes that we would otherwise only see at runtime.

Example:

function add(a: number, b: number):
number { return a + b; }
let mySum: number;
mySum = add(1, "two");
let myText: String;
myText = add(1, 2);

I used Visual Studio Code, and it flags two errors:

on line 4: an error, because a string argument is passed to a function that only accepts numbers

on line 6: an error, because the result, a number, is assigned to a string

Without TypeScript, these two mistakes would have gone unnoticed and surfaced as bugs at runtime.
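For comparison, here is the same snippet with both errors fixed (the explicit String conversion is one possible fix); it compiles and runs cleanly:

```typescript
function add(a: number, b: number): number {
  return a + b;
}

const mySum: number = add(1, 2);          // OK: both arguments are numbers
const myText: string = String(add(1, 2)); // OK: the number is converted explicitly
console.log(mySum, myText); // prints: 3 3
```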

Project Modules exposed by IDE

In a real project there can be hundreds of classes scattered across different files. When we declare types, the IDE can locate the defining files and relate each object back to them.


Many programmers avoid moving from Java or C# to JavaScript because large JavaScript codebases are hard to maintain; TypeScript helps overcome this obstacle.

Setting up an Express project with TypeScript

Let's walk through the steps to set up an Express.js project with TypeScript:

npm init

Install the typescript package:

npm install typescript -s

Typescript node package

TypeScript cannot run directly on the Node.js engine, which only understands JavaScript. The node typescript package works as a transpiler from .ts files to .js scripts. Babel can also transpile TypeScript, but the market standard is to use Microsoft's own package.

Put the tsc script inside package.json:

"scripts": {
  "tsc": "tsc"
},

This lets us call the TypeScript compiler from the command line. Run:

npm run tsc -- --init

This initializes TypeScript by creating the tsconfig.json file. In this file we uncomment the outDir option and choose a location for the transpiled .js files:


Express.js installation

npm install express -s

TypeScript and Express are independent, so TypeScript doesn't know the types of the Express classes. A special npm package teaches TypeScript about them:

npm install @types/express -s

HELLO WORLD

Below is an example:

https://expressjs.com/pt-br/starter/hello-world.html

var express = require('express');
var app = express();
app.get('/', function (req, res) {
  res.send('Hello World!');
});
app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});

Now create a folder named app, and inside it a file app.ts with the following code:

// app/app.ts
import express = require('express');

// Create a new express application instance
const app: express.Application = express();

app.get('/', function (req, res) {
  res.send('Hello World!');
});

app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});

COMPILATION OF FIRST APPLICATION

npm run tsc

This command creates the build folder and the transpiled .js file:


Running EXPRESS

node build/app.js

On port 3000 we can see the output:


Running TypeScript without transpiling

TypeScript can also run directly on Node using the ts-node package, which is meant for development; for final deployment, use the transpiled JavaScript. ts-node is included as a dependency of another package, ts-node-dev. After installing ts-node-dev, it restarts the server whenever one of the original project files changes.

npm install ts-node-dev -s

Add some more scripts to package.json:

"scripts": {
  "tsc": "tsc",
  "dev": "ts-node-dev --respawn --transpileOnly ./app/app.ts",
  "prod": "tsc && node ./build/app.js"
},

Start the dev environment:

npm run dev

Run the production server:

npm run prod