Remote Python Developer: Improve Your Python Development Skills

Models are a core concept of the Django framework. According to Django’s design philosophies for models, we should be as explicit as possible with the naming and functionality of our fields, and ensure that we’re including all relevant functionality related to our model in the model itself, rather than in the views or somewhere else. If you’ve worked with Ruby on Rails before, these design philosophies won’t seem new as both Rails and Django implement the Active Record pattern for their object-relational mapping (ORM) systems to handle stored data.

In this post we’ll look at some ways to leverage these philosophies, core Django features, and even some libraries to help make our models better.

getter/setter/deleter properties

As a feature of Python since version 2.2, a property’s usage looks like an attribute but is actually a method. While using a property on a model isn’t that advanced, we can use some underutilized features of the Python property to make our models more powerful.

If you’re using Django’s built-in authentication or have customized your authentication using AbstractBaseUser, you’re probably familiar with the last_login field defined on the User model, which is a saved timestamp of the user’s last login to your application. If we want to use last_login, but also have a field named last_seen saved to a cache more frequently, we could do so pretty easily.

First, we’ll make a Python property that finds a value in the cache, and if it can’t, it returns the value from the database.


Note: I’ve slimmed the model down a bit as there’s a separate tutorial on this blog about specifically customizing the built-in Django user model.

The property above checks our cache for the user’s last_seen value, and if it doesn’t find anything, it will return the user’s stored last_login value from the model. Referencing <instance>.last_seen now provides a much more customizable attribute on our model behind a very simple interface.

We can expand this to include custom behavior when a value is assigned to our property (some_user.last_seen = some_date_time), or when a value is deleted from the property (del some_user.last_seen).

Now, whenever a value is assigned to our last_seen property, we save it to the cache, and when a value is removed with del, we remove it from the cache. Using setter and deleter is described in the Python documentation but is rarely seen in the wild when looking at Django models. You may have a use case like this one, where you want to store something that doesn’t necessarily need to be persisted to a traditional database, or for performance reasons, shouldn’t be. A custom property like the one above is a great solution.

In a similar use case, the python-social-auth library, a tool for managing user authentication using third-party platforms like GitHub and Twitter, will create and update information in your database based on information from the platform the user logged in with. In some cases, the information returned won’t match the fields in our database. For example, the python-social-auth library passes a fullname keyword argument when creating the user. If our database instead used full_name as its attribute name, we would be in a pinch.

A simple way around this is by using the getter/setter pattern from above:
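A framework-free sketch of the pattern is below (in the real Django model, full_name would be a CharField; this stand-in class keeps only the aliasing property):

```python
class User:
    """Minimal stand-in for the Django model: full_name is the stored
    database field, fullname is the name python-social-auth assigns to."""

    def __init__(self):
        self.full_name = ""

    @property
    def fullname(self):
        return self.full_name

    @fullname.setter
    def fullname(self, value):
        # Intercept the assignment and store it on the real field.
        self.full_name = value
```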

Now, when python-social-auth saves a user’s fullname to our model (new_user.fullname = 'Some User'), we’ll intercept it and save it to our database field, full_name, instead.

through model relationships

Django’s many-to-many relationships are a great way of handling complex object relationships simply, but they don’t afford us the ability to add custom attributes to the intermediate models they create. By default, this simply includes an identifier and two foreign key references to join the objects together.

Using the Django ManyToManyField through parameter, we can create this intermediate model ourselves and add any additional fields we deem necessary.

If our application, for example, not only needed users to have memberships within groups, but wanted to track when that membership started, we could use a custom intermediate model to do so.


In the example above, we’re still using a ManyToManyField to handle the relationship between a user and a group, but by passing the Membership model using the through keyword argument, we can now add our joined custom attribute to the model to track when the group membership was started. This through model is a standard Django model; it just requires a primary key (we use UUIDs here) and two foreign keys to join the objects together.

Using the same three model pattern, we could create a simple subscription database for our site:

Here we’re able to track when a user first subscribed, when they updated their subscription, and if we added the code paths for it, when a user canceled their subscription to our application.

Using through models with the ManyToManyField is a great way to add more data to our intermediate models and provide a more thorough experience for our users without much added work.

Proxy models

Normally in Django, when you subclass a model (this doesn’t include abstract models) into a new class, the framework will create new database tables for that class and link them (via OneToOneField) to the parent database tables. Django calls this “multi-table inheritance” and it’s a great way to re-use existing model fields and structures while adding your own data to them. “Don’t repeat yourself,” as the Django design philosophies state.

Multi-table inheritance example:

This example would create both vehicles_vehicle and vehicles_airplane database tables, linked with foreign keys. This allows us to leverage the existing data that lives inside vehicles_vehicle while adding our own vehicle-specific attributes to each subclass (vehicles_airplane, in this case).

In some use cases, we may not need to store extra data at all. Instead, we could change some of the parent model’s behavior, maybe by adding a method, property, or model manager. This is where proxy models shine. Proxy models allow us to change the Python behavior of a model without changing the database.


Proxy models are declared just like normal models. In our example, we tell Django that Honda is a proxy model by setting the proxy attribute of the Honda Meta class to True. I’ve added a property and a method stub example, but you can see we’ve added a custom model manager to our Honda proxy model.

This ensures that whenever we request objects from the database using our Honda model, we get only Car instances back where model='Honda'. Proxy models make it easy to quickly add customization on top of existing models using the same data. If we were to delete, create, or update any Car instance using our Honda model or manager, it would be saved to the vehicles_car database table just as if we were using the parent (Car) class.

Wrap up

If you’re already comfortable working with Python classes, then you’ll feel right at home with Django’s models: inheritance, multiple inheritance, method overriding, and introspection are all part of how the Django object-relational mapper was designed.

Multi-table inheritance and manually defining intermediate tables for SQL joins aren’t necessarily basic concepts, but they are implemented simply with a bit of Django and Python know-how. Being able to leverage features of the language and framework alongside one another is one of the reasons Django is a popular web framework.

For further reading, check out Django’s documentation topic for models.

How To Structure Large Flask Applications Step by Step


There are many methods and conventions for structuring Python web applications. Although certain frameworks ship with tools (for scaffolding) to automate – and ease – the task (and the headaches), almost all solutions rely on packaging / modularizing applications as the code-base gets distributed [logically] across related files and folders.

The minimalist web application development framework Flask has its own: blueprints.

Here, we are going to see how to create an application directory and structure it to work with re-usable components created with Flask’s blueprints. These pieces greatly ease the maintenance and development of application components.


1. Flask: The Minimalist Application Development Framework

2. Our Choices In This Article

3. Preparing The System For Flask

  • Prepare The Operating System
  • Setting up Python, pip and virtualenv

4. Structuring The Application Directory

  • Creating Application Folder
  • Creating A Virtual Environment
  • Creating Application Files
  • Installing Flask

5. Working With Modules And Blueprints (Components)

  • Module Basics
  • Module Templates

6. Creating The Application

  • Edit using nano
  • Edit using nano

7. Creating A Module / Component

  • Step 1: Structuring The Module
  • Step 2: Define The Module Data Model(s)
  • Step 3: Define Module Forms
  • Step 4: Define Application Controllers (Views)
  • Step 5: Set Up The Application in “app/”
  • Step 6: Create The Templates
  • Step 7: See Your Module In Action

Flask: The Minimalist Application Development Framework

Flask is a minimalist (or micro) framework which refrains from imposing the way critical things are handled. Instead, Flask allows developers to use the tools they desire and are familiar with. For this purpose, it comes with its own extensions index, and a good number of tools already exist to handle pretty much everything from log-ins to logging.

It is not a strictly “conventional” framework and relies partially on configuration files, which frankly make many things easier when it comes to getting started and keeping things in check.

Our Choices In This Article

As we have just been over in the previous section, the Flask way of doing things involves using the tools you are most comfortable with. In our article, we will be using – perhaps – the most common (and sensible) of choices in terms of extensions and libraries (i.e. the database abstraction layer and forms handling). These choices will involve:

  • SQLAlchemy (via Flask-SQLAlchemy)
  • WTForms (via Flask-WTF)


Flask-SQLAlchemy: Adds SQLAlchemy support to Flask. Quick and easy.

This is an approved extension.


Flask-WTF offers simple integration with WTForms. This integration includes optional CSRF handling for greater security.

This is an approved extension.

Preparing The System For Flask

Before we begin structuring a large Flask application, let’s prepare our system and download (and install) the Flask distribution.

Note: We will be working on a freshly instantiated droplet running a recent version of an available operating system (e.g. Ubuntu 13). You are highly advised to test everything on a new system as well – especially if you are actively serving clients.

Prepare The Operating System

In order to have a stable server, we must have all relevant tools and libraries up-to-date and well maintained.

To ensure that we have the latest available versions of default applications, let’s begin with updates.

Run the following for Debian Based Systems (i.e. Ubuntu, Debian):

To get the necessary development tools, install “build-essential” using the following command:

Setting up Python, pip and virtualenv

On Ubuntu and Debian, a recent version of Python interpreter – which you can use – comes by default. It leaves us with only a limited number of additional packages to install:

  • python-dev (development tools)
  • pip (to manage packages)
  • virtualenv (to create isolated, virtual environments)

Note: Instructions given here are kept brief. To learn more, check out our how-to article on pip and virtualenv: Common Python Tools: Using virtualenv, Installing with Pip, and Managing Packages.


pip is a package manager which will help us to install the application packages that we need.

Run the following commands to install pip:


It is best to contain a Python application within its own environment together with all of its dependencies. An environment can be best described (in simple terms) as an isolated location (a directory) where everything resides. For this purpose, a tool called virtualenv is used.

Run the following to install virtualenv using pip:

Structuring The Application Directory

We will use the exemplary name of LargeApp as our application folder. Inside, we are going to have a virtual environment (i.e. env) alongside the application package (i.e. app) and some other files such as “” for running a test (development) server and “” for keeping the Flask configurations.

The structure – which is given as an example below – is highly extensible and is built to make use of all the helpful tools Flask and other libraries offer. Do not be afraid when you see it; we explain everything step by step as we construct it.

Target example structure:
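The structure listing itself was stripped from the article; the sketch below reconstructs it from the files and folders referenced throughout (the run.py and config.py names are assumptions, since the article’s original file names were lost):

```
LargeApp/
|-- run.py              # test-server script (name assumed)
|-- config.py           # Flask configuration file (name assumed)
|-- env/                # virtual environment
|-- app/                # the application package
    |-- __init__.py
    |-- mod_auth/       # example module (blueprint)
    |   |-- __init__.py
    |   |-- controllers.py
    |   |-- models.py
    |   |-- forms.py
    |-- templates/
    |   |-- 404.html
    |   |-- auth/
    |       |-- signin.html
    |-- static/
```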

Creating Application Folders

Let’s start with creating the main folders we need.

Run the following commands successively to perform the task:
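The commands were stripped here; the following sketch recreates the main folders (the LargeApp name comes from the article, the rest mirrors a conventional Flask layout):

```shell
# Create the application folder, the app package inside it, and the
# app package's templates and static folders
mkdir -p LargeApp/app/templates
mkdir -p LargeApp/app/static
```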

Our current structure:

Creating A Virtual Environment

Using a virtual environment brings with it a ton of benefits. You are highly advised to use a new virtual environment for each one of your applications. Keeping the virtualenv folder inside your application’s folder is a good way of keeping things in order and tidy.

Run the following to create a new virtual environment with pip installed.

Creating Application Files

In this step, we will form the basic application files before moving on to working with modules and blueprints.

Run the following to create basic application files:
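The commands were stripped; a sketch that creates the basic files follows (the run.py and config.py names are assumptions, as the article’s original names were lost):

```shell
# Create the entry-point script, the configuration file, and the file
# that turns the app folder into a Python package
mkdir -p LargeApp/app
touch LargeApp/run.py LargeApp/config.py LargeApp/app/__init__.py
```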

Our current structure:

Installing Flask And Application Dependencies

Once we have everything in place, to begin our development with Flask, let’s download and install it using pip.

Run the following to install Flask inside the virtual environment env.

Note: Here we are downloading and installing Flask without activating the virtual environment. However, given that we are using the pip from the environment itself, it achieves the same task. If you are working with an activated environment, you can just use pip instead.

And that’s it! We are now ready to build a larger Flask application modularized using blueprints.

Working With Modules And Blueprints (Components)

Module Basics

At this point, we have both our application structure set up and its dependencies downloaded and ready.

Our goal is to modularize (i.e. create re-usable components with Flask’s blueprints) all related modules that can be logically grouped.

An example of this can be an authentication system. Having all its views, controllers, models and helpers in one place, set up in a way that allows reuse, makes this kind of structuring a great way to maintain applications while increasing productivity.

Target example module (component) structure (inside /app):

# Our module example here is called *mod_auth*
# You can name them as you like as long as conventions are followed

    |-- mod_auth/
    |   |-- __init__.py
    |   |-- controllers.py
    |   |-- forms.py
    |   |-- models.py

Module Templates

To support modularizing to-the-max, we will structure the “templates” folder to follow the above convention and contain a new folder – with the same or a similar, related name as the module – to contain its template files.

Target example templates directory structure (inside LargeApp):
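The listing was stripped; the sketch below is reconstructed from the templates referenced later in this article (404.html in the error handler and auth/signin.html in the sign-in controller):

```
app/
|-- templates/
    |-- 404.html        # used by the sample 404 error handler
    |-- auth/
        |-- signin.html # rendered by the mod_auth sign-in controller
```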

Creating The Application

In this section, we will continue on the previous steps and start with actual coding of our application before moving onto creating our first modularized component (using blueprints): mod_auth for handling all authentication related procedures (i.e. signing-in, signing-up, etc).

Edit “” using nano

Place the contents:
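The file contents were stripped here. This is the test-server script, so a minimal sketch might look like the following (the file name and exact contents are assumptions; port 8080 matches the port mentioned in Step 7):

```python
# Entry point for the development server (the article's original file
# name was stripped; a name like run.py in the LargeApp root is assumed)
from app import app

# Serve on all interfaces at port 8080 - the port referenced later on
app.run(host="0.0.0.0", port=8080, debug=True)
```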

Save and exit using CTRL+X and confirm with Y.

Edit “” using nano

Place the contents:

# Statement for enabling the development environment
DEBUG = True

# Define the application directory
import os
BASE_DIR = os.path.abspath(os.path.dirname(__file__))  

# Define the database - we are working with
# SQLite for this example
SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(BASE_DIR, 'app.db')

# Application threads. A common general assumption is
# using 2 per available processor cores - to handle
# incoming requests using one and performing background
# operations using the other.
THREADS_PER_PAGE = 2

# Enable protection against *Cross-site Request Forgery (CSRF)*
CSRF_ENABLED     = True

# Use a secure, unique and absolutely secret key for
# signing the data.
CSRF_SESSION_KEY = "secret"

# Secret key for signing cookies
SECRET_KEY = "secret"

Save and exit using CTRL+X and confirm with Y.

Creating A Module / Component

This section is the first major step that defines the core of this article. Here, we will see how to use Flask’s blueprints to create a module (i.e. a component).

What’s brilliant about this is the portability and reusability it offers your code, combined with ease of maintenance – for which you will be thankful in the future, as it is often quite a bit of a struggle to come back and understand things as they were left.

Step 1: Structuring The Module

As we have set out to do, let us create our first module’s (mod_auth) directories and files to start working on them.
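The commands were stripped; a sketch that creates the module’s directories and files follows (the file names mirror the imports used later in this article: app.mod_auth.models, app.mod_auth.forms and app.mod_auth.controllers):

```shell
# Create the mod_auth package and its template folder
mkdir -p LargeApp/app/mod_auth
mkdir -p LargeApp/app/templates/auth

# Create the module's files
touch LargeApp/app/mod_auth/__init__.py
touch LargeApp/app/mod_auth/controllers.py
touch LargeApp/app/mod_auth/models.py
touch LargeApp/app/mod_auth/forms.py
```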

After these operations, this is how the folder structure should look:

Step 2: Define The Module Data Model(s)

Place the below self-explanatory – exemplary – contents:

# Import the database object (db) from the main application module
# We will define this inside /app/ in the next sections.
from app import db

# Define a base model for other database tables to inherit
class Base(db.Model):

    __abstract__  = True

    id            = db.Column(db.Integer, primary_key=True)
    date_created  = db.Column(db.DateTime,  default=db.func.current_timestamp())
    date_modified = db.Column(db.DateTime,  default=db.func.current_timestamp(),
                                            onupdate=db.func.current_timestamp())

# Define a User model
class User(Base):

    __tablename__ = 'auth_user'

    # User Name
    name    = db.Column(db.String(128),  nullable=False)

    # Identification Data: email & password
    email    = db.Column(db.String(128),  nullable=False,
                                          unique=True)
    password = db.Column(db.String(192),  nullable=False)

    # Authorisation Data: role & status
    role     = db.Column(db.SmallInteger, nullable=False)
    status   = db.Column(db.SmallInteger, nullable=False)

    # New instance instantiation procedure
    def __init__(self, name, email, password):
        self.name     = name
        self.email    = email
        self.password = password

    def __repr__(self):
        return '<User %r>' % (self.name)

Save and exit using CTRL+X and confirm with Y.

Step 3: Define Module Forms

Place the below self-explanatory – exemplary – contents:

Save and exit using CTRL+X and confirm with Y.

Step 4: Define Application Controllers (Views)

Place the below self-explanatory – exemplary – contents:

# Import flask dependencies
from flask import Blueprint, request, render_template, \
                  flash, g, session, redirect, url_for

# Import password / encryption helper tools
from werkzeug.security import check_password_hash, generate_password_hash

# Import the database object from the main app module
from app import db

# Import module forms
from app.mod_auth.forms import LoginForm

# Import module models (i.e. User)
from app.mod_auth.models import User

# Define the blueprint: 'auth', set its url prefix: app.url/auth
mod_auth = Blueprint('auth', __name__, url_prefix='/auth')

# Set the route and accepted methods
@mod_auth.route('/signin/', methods=['GET', 'POST'])
def signin():

    # If sign in form is submitted
    form = LoginForm(request.form)

    # Verify the sign in form
    if form.validate_on_submit():

        user = User.query.filter_by(email=form.email.data).first()

        if user and check_password_hash(user.password, form.password.data):

            session['user_id'] = user.id

            flash('Welcome %s' % user.name)

            return redirect(url_for('auth.home'))

        flash('Wrong email or password', 'error-message')

    return render_template("auth/signin.html", form=form)

Save and exit using CTRL+X and confirm with Y.

Step 5: Set Up The Application in “app/”

Place the contents:

# Import flask and template operators
from flask import Flask, render_template

# Import SQLAlchemy
from flask_sqlalchemy import SQLAlchemy

# Define the WSGI application object
app = Flask(__name__)

# Configurations
app.config.from_object('config')

# Define the database object which is imported
# by modules and controllers
db = SQLAlchemy(app)

# Sample HTTP error handling
@app.errorhandler(404)
def not_found(error):
    return render_template('404.html'), 404

# Import a module / component using its blueprint handler variable (mod_auth)
from app.mod_auth.controllers import mod_auth as auth_module

# Register blueprint(s)
app.register_blueprint(auth_module)
# app.register_blueprint(xyz_module)
# ..

# Build the database:
# This will create the database file using SQLAlchemy
db.create_all()
Save and exit using CTRL+X and confirm with Y.

Step 6: Create The Templates

Place the contents:

{% macro render_field(field, placeholder=None) %}
{% if field.errors %}
<div class="form-group has-error">
{% elif field.flags.error %}
<div class="form-group has-error">
{% else %}
<div class="form-group">
{% endif %}
    {% set css_class = 'form-control ' + kwargs.pop('class', '') %}
    {{ field(class=css_class, placeholder=placeholder, **kwargs) }}
</div>
{% endmacro %}

    <legend>Sign in</legend>
    {% with errors = get_flashed_messages(category_filter=["error"]) %}
    {% if errors %}
    {% for error in errors %}
    {{ error }}<br>
    {% endfor %}
    {% endif %}
    {% endwith %}

    {% if form.errors %}
    {% for field, error in form.errors.items() %}
    {% for e in error %}
    {{ e }}<br>
    {% endfor %}
    {% endfor %}
    {% endif %}
    <form method="POST" action="." accept-charset="UTF-8" role="form">
      {{ form.csrf_token }}
      {{ render_field(form.email, placeholder="Your Email Address",
                                  autofocus="") }}
      {{ render_field(form.password, placeholder="Password") }}
        <input type="checkbox" name="remember" value="1"> Remember Me
      <a role="button" href="">Forgot your password?</a><span class="clearfix"></span>
      <button type="submit" name="submit">Sign in</button>
    </form>

Save and exit using CTRL+X and confirm with Y.

Step 7: See Your Module In Action

After having created our first module, it is time to see everything in action.

Run a development server using the test-server script we created at the beginning.

This will initiate a development (i.e. testing) server hosted at port 8080.

Visit the module by going to the URL: http://[your droplet's IP]:8080/auth/signin/

Although you will not be able to log in, you can see the module in action by entering some exemplary data or by testing its validators.

Challenges and Best Practices of Docker Container Security

In recent years, the massive adoption of Docker has made security an important consideration for firms that use containers for development and production. Containers are complex compared to virtual machines or other deployment technologies, and the process of securing Docker containers is also complex.

We will take a look at Docker container security and explain the reasons behind its complexity. We will discuss default environments for better security and practices for monitoring containers.

Following is the complete guide for container security:

Challenges faced in Docker container security:

Many organisations used virtual machines or bare-metal servers to host applications before Docker. These technologies are quite simple from a security perspective: when hardening your deployment and monitoring for security-relevant events, you need to focus on just two layers. Since APIs, overlay networks, and complex software-defined storage configurations are not a major part of virtual machine or bare-metal deployments, you do not have to worry about them.

A typical Docker environment has many more moving parts, and hence its security is much more complicated. Those moving parts include:

  • You probably have multiple Docker container images, with each container hosting an individual microservice. Multiple instances of each image are also likely running at a time. Each of those images and instances needs to be secured and monitored properly.
  • The Docker daemon needs to be secured to keep the containers and their host safe.
  • The host server might be bare metal or a virtual machine.
  • If you use a service like ECS to host your containers, that is another layer to secure.
  • APIs and overlay networks facilitate communication between containers.
  • Data volumes are storage systems that exist externally from your containers.

If you are thinking that learning to secure Docker is tough, you are right: Docker security is undoubtedly much more complex than the security of simpler hosting technologies.

Best practices of Docker container security:

Luckily, these challenges can be overcome. This article is not an exhaustive guide to Docker security, but you can use the official Docker documentation as a reference. Below are some best practices:

#1 Set resource quotas

One easy thing to configure in Docker is resource quotas. Resource quotas let us limit the amount of memory and CPU resources that a container can consume.

This is helpful for many reasons. It helps to keep the Docker environment efficient and prevents one container from starving other containers of system resources. It also increases security by preventing a container from consuming excessive space or resources, limiting the damage that any harmful activity inside it can do.

Resource quotas are easily set by use of command-line flags; see the Docker documentation for details.
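For example, a container can be capped with flags on docker run (the image name below is a placeholder):

```shell
# Limit the container to 512 MB of memory and one CPU
docker run --memory=512m --cpus=1.0 example/my-service:latest
```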

#2 Do not run as root

We all know the feeling: we are tired and don’t want to get entangled in permission problems just to get an application working properly, so running as root seems like the only option left to avoid issues with permission restrictions.

If you are a beginner, running as root is sometimes acceptable in a Docker testing environment, but there is no reason good enough to let a Docker container run with root permissions in production.

Docker doesn’t run containers as root by default, so this is an easy security practice to follow: in a default configuration, you don’t have to make any changes to prevent running as root. Letting a container run as root is a temptation that needs to be resisted, even though it is more convenient in some situations.

If you use Kubernetes to orchestrate your containers, you can explicitly prevent containers from starting as root for added Docker security, using the MustRunAsNonRoot directive in a pod security policy.
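A sketch of such a policy follows (the policy name is illustrative; note that the PodSecurityPolicy API was deprecated in later Kubernetes releases in favor of its successors):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrict-root        # illustrative name
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot   # pods may not start containers as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```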

#3 Secure container registries

Container registries are part of what makes Docker so powerful: they make it easy to set up central repositories from which container images can be downloaded.

Using container registries is a security risk, however, if one does not evaluate their security constraints. Using Docker Trusted Registry, which can be installed behind your firewall, helps reduce the risk of pulling compromised images.

When the registry sits behind a firewall, we can limit who can upload images to and download images from our registry. Role-based access control lets us explicitly decide which users can access what. It is nice to leave our registry open to others, but that is only useful if it also keeps out malicious images and users.

#4 Use trusted and secure images

We should be sure that the container images we use come from a trusted source. This is obvious, but there are many platforms from which images can be downloaded, and they might not be trusted or verified.

One should consider not using public container registries, or should try to use official trusted repositories, like the ones on Docker Hub.

One can use image-scanning tools to help identify harmful contents. Most upper-level container platforms have embedded scanning tools, and standalone scanners like Clair are also available.

#5 Identify the source of your code

Docker images contain some original code alongside packages from upstream sources. Even when an image is downloaded from a trusted registry, it can include packages from untrusted sources, and those packages can be built from code taken from multiple outside origins.

That is why source-analysis tools are important. By downloading the sources of your Docker images and scanning the origin of the code, you can find out whether any of it comes from unknown sources.

#6 Network and API security

As we have seen above, Docker containers depend on APIs and networks for communication. It is important to make sure that your APIs and network architecture are secure, and to monitor API and network activity for anything unusual.

Because APIs and networks are not a part of Docker itself, but are resources that Docker uses, steps for securing them are beyond the scope of this article. It is still important to check the security of both.

In Conclusion

Docker is a complex system, and there is no simple trick for maintaining Docker container security. You have to think carefully about the steps to secure your Docker containers and strengthen your container environment at many levels. This is the only way to ensure that you can have all the benefits of Docker containers without major security issues.