In recent years, the massive adoption of Docker has made container security an important concern for firms that use containers in development and production. Containers are complex compared to virtual machines and other commonly deployed technologies, and the process of securing Docker containers is correspondingly complex.
This article takes a look at Docker container security, explains why it is so complicated, and discusses sensible default environments and practices for monitoring containers for security. What follows is a guide to container security.
Before Docker, many organisations used virtual machines or bare-metal servers to host applications. From a security perspective, these technologies are quite simple: when hardening your deployment and monitoring for security-relevant events, you need to focus on just a couple of layers. APIs, overlay networks, and complex software-defined storage configurations are not a major part of virtual machine or bare-metal deployments, so you do not have to worry about them.
A typical Docker environment has many more moving parts, so its security is much more complicated. Those moving parts include:
Multiple Docker container images, each one probably hosting an individual microservice, and probably multiple instances of each image running at a time; each of these images and instances needs proper security and monitoring.
The Docker daemon, which needs to be secured to keep the containers it hosts, and the host itself, safe.
The host server, which might be bare metal or a virtual machine.
A hosting service such as ECS, if you use one to host your containers, which is another layer to secure.
The overlay networks and APIs that facilitate communication between containers.
Data volumes or other storage systems that exist externally from your containers.
So if you are thinking that learning to secure Docker sounds tough, you are right: Docker security is undoubtedly much more complex than the security of the technologies it replaces.
Best practices for Docker container security:
Luckily, these challenges can be overcome. This article is not an exhaustive guide to Docker security (for that, you can use the official Docker documentation as a reference), but below are some best practices:
#1 Set resource quotas
One easy thing to configure in Docker is resource quotas. Resource quotas let you limit the amount of memory and CPU a container can consume.

This is helpful for many reasons. It keeps your Docker environment efficient and prevents one container from starving other containers of system resources. It also increases security, by preventing a compromised container from consuming a large amount of resources in order to carry out harmful activity.
Resource quotas are easily set using command-line flags; see the Docker documentation for details.
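For example (a sketch; the image name my-image and the exact limits are placeholders), the --memory and --cpus flags of docker run cap what a container may consume:

```shell
# Cap the container at 256 MB of RAM and one CPU core
docker run -d --memory="256m" --cpus="1.0" my-image
```

A container that exceeds its memory limit is killed rather than being allowed to exhaust the host.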
#2 Don't run as root
We all know the feeling: you are tired and don't want to get entangled in permission problems just to get an application working properly, and running as root means you don't have to worry about permission restrictions.
If you are a beginner, running as root is sometimes acceptable in a Docker testing environment, but there is no reason good enough to let a Docker container run with root permissions in production.
This is an easy piece of Docker security advice to follow, because Docker doesn't run containers as root by default: in a default configuration, you don't have to make any changes to prevent running as root. Letting a container run as root is a temptation that needs to be resisted, however convenient it may be in some situations.
If you use Kubernetes to orchestrate your containers, you get an added layer of Docker security: you can explicitly prevent containers from starting as root by using the MustRunAsNonRoot directive in a pod security policy.
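A minimal pod security policy using that directive might look like the following sketch (the policy name is a placeholder, and the other rule fields, which the PodSecurityPolicy spec requires, are left permissive; note that PodSecurityPolicy has been deprecated in newer Kubernetes releases):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrict-root   # placeholder name
spec:
  runAsUser:
    rule: MustRunAsNonRoot   # refuse pods that would start as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```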
#3 Secure container registries
Container registries are part of what makes Docker so powerful: they make it easy to set up central repositories from which container images can be downloaded.
Using container registries is also a security risk if you do not evaluate their security constraints. You can use Docker Trusted Registry, which can be installed behind your firewall to mitigate the risk of breaches.

Even with the registry behind a firewall, you should restrict who can upload and download images from it: use role-based access control to grant access explicitly, rather than leaving the registry open to unknown users. It is nice to leave your registry open to others, but doing so is worthwhile only if it does not open the door to malicious images.
#4 Use trusted, secure images
Make sure the container images you use come from a trusted source. This may seem obvious, but there are many platforms from which images can be downloaded, and they might not be trusted or verified.
Consider not using public container registries at all, or use only official, trusted repositories, like the official images on Docker Hub.
You can also use image scanning tools, which help identify images with known vulnerabilities. Most upper-tier container registries have embedded scanning tools, such as Clair.
#5 Identify the source of your code
Docker images contain a mix of original code and packages from upstream sources. Even if an image is downloaded from a trusted registry, it can contain packages from untrusted sources, and those packages can in turn be made up of code taken from multiple outside sources.

That is why source-analysis tools are important. By downloading the sources of your Docker images and scanning the origin of the code, you can find out whether any of it comes from unknown sources.
#6 Network and API security
As we have seen above, Docker containers depend on APIs and networks to communicate. It is important to make sure that your APIs and network architecture are secure, and to monitor API and network activity for anything unusual.

Since APIs and networks are resources that Docker uses rather than parts of Docker itself, steps for securing them are beyond the scope of this article; it is nonetheless important to check the security of these resources.
In Conclusion
Docker is complex, and there is no simple trick for maintaining Docker container security. You have to think carefully about the steps needed to secure your Docker containers, and harden your container environment at multiple levels. Doing so is the only way to ensure that you can have all the benefits of Docker containers without major security issues.
Next.js is usually thought of as a frontend React framework: it provides server-side rendering, a built-in routing process, and many performance-related features. Since Next.js also supports API routes, it provides backend and frontend to React in the same package and setup.
In this article we will learn how to use API routes to set up a GraphQL API inside a Next.js app. We start with a basic setup and some concepts of CORS, then load data from Postgres using the Knex package, then improve performance using the DataLoader package, avoiding costly N+1 queries.
To set up Next.js, run the command npx create-next-app. If npx is missing, install it globally on your system with npm i -g npx.
An example can be used to setup Next.js with a GraphQL API:
npx create-next-app --example api-routes-graphql
Adding an API route
With Next.js set up, we're going to add an API (server) route to our app. This is as easy as creating a file called graphql.js within the pages/api folder. For now, its contents will do four things:
Define types that describe the GraphQL schema.
Create resolvers, which have the ability to respond to a query or mutation.
Create the Apollo server.
Create a handler, which wires things into the Next.js API request and response lifecycle.
Importing the gql function from apollo-server-micro, we can define our type definitions, which describe the schema of the GraphQL server:
import { ApolloServer, gql } from "apollo-server-micro";
const typeDefs = gql`
  type Query {
    hello: String!
  }
`;
With our schema defined, we can now write the code that enables the server to answer queries and mutations against it. This code is called a resolver: every field requires a function that produces its result, and the result a resolver function returns must align with the defined types.
Resolver functions receive the following arguments:
parent: the result of the parent resolver; it can be ignored at the query level.
args: the arguments passed to the field in the query, which give us access to them in our resolver function.
context: global state, such as the authenticated user or a global instance of DataLoader.
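Putting the argument list above to work, a resolver map for the hello field (shown here on its own, without the Apollo wiring; the greeting string is an arbitrary example) could look like:

```typescript
// A resolver map matching the typeDefs above: each field gets a function
// whose return value must align with the declared type (String!).
const resolvers = {
  Query: {
    // parent, args, and context are unused for this simple field
    hello: (_parent: any, _args: any, _context: any): string => "Hello world!",
  },
};
```

Apollo calls resolvers.Query.hello whenever a query asks for the hello field.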
Using the apolloServer instance we can access a handler, which handles the request and response lifecycle. One more config needs to be exported, which stops the body of incoming HTTP requests from being parsed; this is required for GraphQL to work correctly:
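A sketch of that wiring, based on the apollo-server-micro API (this is a fragment of the route file; typeDefs and resolvers are assumed to be defined above it, and the /api/graphql path assumes the file lives at pages/api/graphql.js):

```typescript
import { ApolloServer } from "apollo-server-micro";

// typeDefs and resolvers are assumed to be defined earlier in this file
const apolloServer = new ApolloServer({ typeDefs, resolvers });

// Stop Next.js from parsing the request body; GraphQL needs the raw stream
export const config = {
  api: { bodyParser: false },
};

// Expose the handler at the route matching this file
export default apolloServer.createHandler({ path: "/api/graphql" });
```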
If we want to allow or limit cross-origin requests using CORS, we can add the micro-cors package to enable this:
import Cors from "micro-cors";
const cors = Cors({
allowMethods: ["POST", "OPTIONS"]
});
export default cors(handler);
In the case above, cross-origin HTTP methods are limited to POST and OPTIONS, and the default export changes so that the handler is passed to the cors function.
Dynamic data with Postgres and Knex
Hardcoded data can only stay interesting for so long; now is the time to load data from an existing database. For this, the required packages need to be installed:
yarn add knex pg
Now create a knexfile.js file, configuring Knex so it can connect to our database. An ENV variable is used to provide the database connection string (have a look at this article on setting up secrets). The local ENV variable looks like:
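The article's snippet is not reproduced here; assuming a variable named PG_CONNECTION_STRING, a minimal knexfile.js might look like:

```javascript
// knexfile.js — minimal sketch; the variable name PG_CONNECTION_STRING
// and the connection string format below are assumptions
module.exports = {
  client: "pg",
  connection: process.env.PG_CONNECTION_STRING,
};
```

Locally the variable would be set to something like postgres://user:password@localhost:5432/mydb.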
Notice that we don't need to define a resolver for every single field: the default resolver simply reads an attribute off the object, so defining a resolver for such a field can be avoided. The id resolver, for instance, can be removed.
DataLoader and Avoiding N+1 Queries
There is a subtle problem with the resolvers we have used: an SQL query needs to be run for every object, adding an additional query per row returned. There is a great article on this N+1 problem, which shows how to resolve it.
The first step involves defining a loader. A loader collects IDs and loads them in a single batch.
import DataLoader from "dataloader";
const loader = {
artist: new DataLoader(ids =>
db
.table("artists")
.whereIn("id", ids)
.select()
.then(rows => ids.map(id => rows.find(row => row.id === id)))
)
};
The loader is then passed to our GraphQL resolvers through the context:
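The snippet is not reproduced in this chunk; the idea, sketched below with a hypothetical Album parent type whose rows carry an artist_id column, is that the loader arrives through the context (for example via context: () => ({ loader }) in the ApolloServer options) and the field resolver calls load instead of issuing its own query:

```typescript
// Sketch: batch artist lookups through the DataLoader from the context.
// The Album parent type and its artist_id column are assumptions.
const resolvers = {
  Album: {
    artist: (album: { artist_id: number }, _args: any, ctx: any) =>
      ctx.loader.artist.load(album.artist_id),
  },
};
```

Every artist field resolved during one request hands its ID to the same loader, which batches them into one whereIn query.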
With this in place, the end result is a single database query that loads all of the objects at once: the N+1 issue is resolved.
Conclusion
In this article, we were able to create a GraphQL server with CORS support, loading data from Postgres, and stomping out N+1 performance issues using DataLoader. Not bad for a day’s work! The next step might involve adding mutations along with some authentication to our app, enabling users to create and modify data with the correct permissions. Next.js is no longer just for the frontend (as you can see). It has first-class support for server endpoints and the perfect place to put your GraphQL API.
GraphQL and TypeScript are among the most popular stacks these days. I used vanilla JavaScript in one of my recent projects, and while I have used TypeScript many times, I had never combined the two. I followed a tutorial that helped me a lot, so I thought of writing a guide for others too. Before starting, let us see:
Why GraphQL, TypeScript, and PostgreSQL?
GraphQL provides a description of the data in our API. It helps the server understand exactly what clients need, and it helps us when dealing with large amounts of data, since a client can get all the data it needs by running only one query.

TypeScript is a superset of JavaScript. When JavaScript code grows and becomes messier to reuse or maintain, we can use TypeScript instead.

PostgreSQL is a personal preference, and it is open-source; you can view the following link for more details.
Webpack: Webpack can be used to compile JavaScript modules. Node.js does not accept files like .gql or .graphql, which is why we use Webpack. Install the following:
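The install list and config are not included in this chunk; assuming webpack together with the graphql-tag package, a typical rule for .gql/.graphql files is:

```javascript
// webpack.config.js fragment — assumes the graphql-tag package's loader
module.exports = {
  module: {
    rules: [
      {
        test: /\.(graphql|gql)$/,
        exclude: /node_modules/,
        loader: "graphql-tag/loader",
      },
    ],
  },
};
```

With this rule in place, `import schema from './graphql/schema.gql'` (as used later) resolves to a parsed GraphQL document.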
import express, { Application } from 'express';
import { ApolloServer , Config } from 'apollo-server-express';
const app: Application = express();
const schema = `
type User{
name: String
}
type Query {
user:User
}
`
const config : Config = {
typeDefs:schema,
resolvers : {
Query:{
user:(parent,args,ctx)=>{
return { name: "Wonder" }
}
}
},
introspection: true,//these lines are required to use the gui
playground: true,// of playground
}
const server : ApolloServer = new ApolloServer(config);
server.applyMiddleware({
app,
path: '/graphql'
});
app.listen(3000,()=>{
console.log("We are running on http://localhost:3000/graphql")
})
Server config
We will use makeExecutableSchema from graphql-tools. It allows us to generate a GraphQLSchema and to join the types and resolvers from a large number of files.
src/index.ts
...
const config : Config = {
schema:schema,// schema definition from schema/index.ts
introspection: true,//these lines are required to use
playground: true,// playground
}
const server : ApolloServer = new ApolloServer(config);
server.applyMiddleware({
app,
path: '/graphql'
});
...
schema/index.ts
import { makeExecutableSchema} from 'graphql-tools';
import schema from './graphql/schema.gql';
import {user,pet} from './resolvers';
const resolvers=[user,pet];
export default makeExecutableSchema({typeDefs:schema, resolvers: resolvers as any});
Database
Let’s see the database diagram including a registry of users and their pets.
Migration file
For the creation of the database tables in Postgres, we use Knex migration files:
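The migration file itself is not shown in this chunk; a sketch matching the users/pets diagram (the file name and column names are assumptions) could be:

```typescript
// migrations/create_users_and_pets.ts — hypothetical file name
export async function up(knex: any): Promise<void> {
  await knex.schema.createTable("users", (t: any) => {
    t.increments("id").primary();
    t.string("name");
    t.string("email").unique();
  });
  await knex.schema.createTable("pets", (t: any) => {
    t.increments("id").primary();
    t.string("name");
    t.integer("user_id").references("users.id"); // owner of the pet
  });
}

export async function down(knex: any): Promise<void> {
  await knex.schema.dropTable("pets");
  await knex.schema.dropTable("users");
}
```

Running `knex migrate:latest` applies the `up` function; `down` reverses it.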
import { Resolvers} from '../../__generated__/generated-types';
import {User,Pet} from '../../database/models';
import {UserInputError} from 'apollo-server-express';
interface assertion {
[key: string]:string | number ;
}
type StringIndexed<T> = T & assertion;
const resolvers : Resolvers ={
Query:{
users: async (parent,args,ctx)=>{
const users : User[] = await User.query();
return users;
},
user:async (parent,args,ctx)=>{
const user : User = await User.query().findById(args.id);
return user;
},
},
User:{
pets:async (parent,args,ctx)=>{
const pets : Pet[] = await User.relatedQuery("pets").for(parent.id);
return pets;
}
},
Mutation:{
createUser:async (parent,args,ctx)=>{
let user : User;
try {
user = await User.query().insert({...args.user});
} catch (error) {
console.log(error);
throw new UserInputError('Invalid email', {
invalidArgs: Object.keys(args),
});
}
return user;
},
updateUser:async (parent,{user:{id,...data}},ctx)=>{
let user : User = await User.query().patchAndFetchById(id,data);
return user;
},
deleteUser:async (parent,args,ctx)=>{
const deleted = await User.query().deleteById(args.id);
return "Successfully deleted";
},
}
}
export default resolvers;
This resolver map executes all the operations defined in the schema.
BONUS
Two errors can be seen when running the project. It's not terrible to have errors, but I prefer not to have them, so let's resolve both. The first error is resolved by splitting knexfile.ts: put the configuration required by Knex in a separate file.
require('ts-node/register');
import config from './config';
module.exports= config["development"]
The second error got resolved by fixing the schema import, with help from this useful post. Now we can get to work on our own GraphQL API.
CONCLUSION
Yay! Now we have a GraphQL API. We have learned how to generate TypeScript types from GraphQL and how to solve the issues along the way. I hope this tutorial helped you; I'll be posting more soon. Leave suggestions in the comment box. Thank you.
<Modal showCloseButton>
<title>Modal title</title>
<contents>Modal body text goes here.</contents>
<dismissButton onClick={close}>Close</dismissButton>
<actionButton onClick={save}>Save</actionButton>
</Modal>
There are times when we would like to pass props to control the behavior of child elements. For example, in the modal markup above, you can see there are several elements:
A title section at the top.
A close ("cross") button at the top right corner.
A content area for the body text.
An action button to save the changes ("Save changes").
A dismiss button to close the modal ("Close").
Now, if we want to reuse the Modal properly, we must make the modal and its elements customizable. This means users should get control over things such as the content that is displayed, the dispatched events, the style of the content, and every other element of the Modal. A naïve solution is to accept a dedicated prop for each customization point, like so:
<Modal
showCloseButton
showDismissButton
showActionButton
title="Modal title"
contents="Modal body text goes here."
dismissButtonText="Close"
actionButtonText="Save changes"
handleDismiss={close}
handleAction={save}
/>
The problem with this kind of approach is that it spams the props mechanism, which makes the whole component less readable and more bloated. It also limits the props that users can pass to child elements, preventing them from having complete control over those elements. However, you can solve part of this problem another way, by providing generic props objects, where every prop object represents a different element:
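For example (the prop names and their object shapes here are hypothetical):

```tsx
<Modal
  title={{ text: "Modal title" }}
  contents={{ text: "Modal body text goes here." }}
  dismissButton={{ text: "Close", onClick: close }}
  actionButton={{ text: "Save changes", onClick: save }}
/>
```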
This works, but it does not really solve the spam issue, and it abuses the syntactic sugar that JSX provides: you are obligated to pass JSON-style attribute objects instead of using HTML-style attribute assignments (attr="value").
Bootstrap to the rescue
React Bootstrap takes a very shrewd approach: instead of defining props all over the place, it gives us the ability to manipulate the Modal's children directly. We can achieve the intended functionality using the dedicated components Bootstrap provides:
<Modal.Dialog>
<Modal.Header closeButton>
<Modal.Title>Modal title</Modal.Title>
</Modal.Header>
<Modal.Body>
<p>Modal body text goes here.</p>
</Modal.Body>
<Modal.Footer>
<Button variant="secondary" onClick={close}>
Close
</Button>
<Button variant="primary" onClick={save}>
Save changes
</Button>
</Modal.Footer>
</Modal.Dialog>
As you can see, the progress is there, and things are getting better, but we can take it to the next level.
Although things are now declarative and clear, and we could stop here, we are still obligated to compose an entire modal every time. We cannot rely on the Modal itself to fill in missing pieces the way our earlier props-based implementation did; normally, we should not have to write the Modal's contents from scratch.

Moreover, a composed modal can serve as a template for future use cases. However, there is no filter or restriction on the children input: you can pass whatever you want. Normally we would like the user to use only a few permitted elements, so that things do not get messed up, and if that is the case, we must find the right approach to enforce it.
Introducing a design pattern that has it all

Summing up our progress so far, the new pattern should have the following attributes:
Has complete control over child elements using props.children.
Has a template already in place for the user.
No spamming of the mechanism of props.
Has restrictions on the input.
Well, this sounds good and promising, so let's take a look at an example, using the Bootstrap Modal component as an anchor:
Here the new Modal component uses a hook called useChildProps(). This hook flattens nested props by going through props.children, and, to make sure only the right element names are addressed, it validates them against a provided whitelist.
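The hook's implementation is not shown in this chunk. A framework-free sketch of the core idea (the function name and the element shape below are assumptions for illustration) looks like:

```typescript
// Sketch of the idea behind useChildProps: flatten a list of child
// elements into a { name: props } map, validating each element name
// against a whitelist of allowed names.
type ChildElement = { type: string; props: Record<string, any> };

function flattenChildProps(
  children: ChildElement[],
  whitelist: string[]
): Record<string, Record<string, any>> {
  const childProps: Record<string, Record<string, any>> = {};
  for (const child of children) {
    if (!whitelist.includes(child.type)) {
      throw new Error(`element <${child.type}> is not supported`);
    }
    childProps[child.type] = child.props;
  }
  return childProps;
}
```

In a real React hook, the iteration would go over props.children (e.g. via React.Children.forEach) and be memoized, but the flattening and whitelist validation are the heart of it.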
Admittedly, custom element names can cause confusion with native HTML tag names, but the same can be said about any other React component we or others use. And given the shift toward component-based UIs, such as Angular, Vue, React, and web components, new tag names are no longer rare. So go for the new design, and don't be afraid of getting your hands on it.
At the start of my career as a web developer, I noticed that strongly typed languages were dominant in the market, yet I was impressed with the new features arriving in JavaScript and Python. The idea of not declaring variable types made me more productive, and I liked my work. So the first time I heard about TypeScript, the idea seemed like taking steps back to the old days.
What changed my mind?

I settled on a strategy: for individual projects I use simpler tooling, like plain JavaScript, but when working in a team or on a large scale I prefer TypeScript. In this article I will try to explain why.
TYPESCRIPT
If you don't have any knowledge of TypeScript, I recommend reading the following overview:

TypeScript is an object-oriented language for application development. The extra project setup is repaid by easier maintenance of large-scale projects. Let's see why:
Type safety = fewer errors
When we define types in our code, we allow the IDE to find errors and bugs in our use of functions and classes that we would otherwise only see at runtime.
Example:
function add(a: number, b: number): number { return a + b; }

let mySum: number;
mySum = add(1, "two");
let myText: string;
myText = add(1, 2);
I wrote this in Visual Studio Code, and TypeScript flags two errors:
on line 4: an error because a string parameter is passed to a function that only accepts numbers;
on line 6: an error because the result, a number, is assigned to a string variable.
Without TypeScript, these two errors would have gone unnoticed, showing up as bugs at runtime.
Project modules exposed by the IDE
A project can have hundreds of classes scattered across different files. When we declare types, the IDE finds the origin files and relates the objects back to them.

Programmers coming from Java and C# often avoid transferring to JavaScript because large codebases are hard to maintain in it; TypeScript helps overcome this obstacle.
Setting up an Express project with TypeScript

Let's see the steps taken to set up an Express.js project:
npm init
install typescript package
npm install typescript -s
Typescript node package
TypeScript can't be run by the Node.js engine, which only runs JavaScript. The node typescript package is used as a transpiler from .ts files to .js scripts. Babel can also transpile TypeScript; however, the market standard is to use the Microsoft package for this purpose.

Put the tsc script inside package.json:
"scripts": { "tsc": "tsc" },
This lets us call the TypeScript compiler from the command line, using the command below:
npm run tsc -- --init
This initializes TypeScript by creating the tsconfig.json file. In this file we uncomment the outDir option and choose a location for the transpiled .js files to be delivered:
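The resulting tsconfig.json is not reproduced here; the relevant part, with outDir pointing at the build folder used later in the article (the target and module values are assumptions), looks like:

```json
{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "outDir": "./build"
  }
}
```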
Express.js installation
npm install express -s
Express and TypeScript are independent packages, so TypeScript does not know the Express class types. A special npm package, @types/express, can be installed (npm install @types/express -s) so TypeScript recognizes the Express classes. For reference, here is the classic Express hello world in plain JavaScript:
var express = require('express');
var app = express();
app.get('/', function (req, res) {
  res.send('Hello World!');
});
app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});
Now create a folder lib and, inside it, the file app.ts with the following code:
// lib/app.ts
import express = require('express');
// Create a new express application instance
const app: express.Application = express();
app.get('/', function (req, res) {
res.send('Hello World!');
});
app.listen(3000, function () {
console.log('Example app listening on port 3000!');
});
COMPILATION OF FIRST APPLICATION
npm run tsc
This command creates the build folder and the transpiled .js file:
Running EXPRESS
node build/app.js
On port 3000 we can see our output:
Running TypeScript without transpiling
TypeScript can be run directly on Node by using the ts-node package. This is meant for development; for final deployment or embedding, use the transpiled JavaScript version. ts-node is included as a dependency of another package, ts-node-dev. After installing ts-node-dev, we can run a command that restarts the server whenever an original project file changes.
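For example (assuming the entry file from earlier, lib/app.ts; --respawn is the flag that restarts the process on file changes):

```shell
npm install ts-node-dev -s
npx ts-node-dev --respawn lib/app.ts
```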