Default python version to python3 in Ubuntu

I wanted to set the default Python version to 3 instead of 2, which is the default on Ubuntu 18.04. I first tried adding an alias to .bashrc like:

alias python=python3

The disadvantage of editing the .bashrc file is that the alias will not work when running commands with sudo. I am running my Ubuntu image via Docker as root, so the alias is disregarded entirely.

A good and easy way around this is to run the command

update-alternatives --install /usr/bin/python python /usr/bin/python3 10

This registers /usr/bin/python3 as an alternative for python with a priority of 10.
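If you also register python2 as an alternative, update-alternatives can manage both and let you switch between them later. A sketch (the priority values here are arbitrary; in automatic mode the highest priority wins):

```shell
# Register both interpreters as alternatives for "python";
# in automatic mode the highest priority (10) is selected.
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 5
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10

# Show what "python" currently points to
update-alternatives --display python

# Interactively pick another alternative later if needed
sudo update-alternatives --config python
```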

So, if you are running this in Docker, just add

RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 10

to your Dockerfile.


App Engine vs Cloud Functions

Both Cloud Functions (CFs) and Google App Engine (GAE) are designed for building a “microservice architecture” in a “serverless” environment.

Google says that Cloud Functions is basically for SERVERLESS FUNCTIONS & EVENTS whereas App Engine is for SERVERLESS HTTP APPLICATIONS. However, when I read this short description I am still confused: if I am running an SPA, what prevents me from using just CFs for my server-side code? When exactly would I use GAE instead of CFs?

I did a little investigation on this and here are my findings.

A slightly longer description from Google:

Cloud Functions

An event-driven compute platform to easily connect and extend Google and third-party cloud services and build applications that scale from zero to planet scale.

Use Cases

  • Asynchronous backend processing
  • Simple APIs (like one or two functions, not RESTful stuff)
  • Rapid prototyping and API stitching

App Engine standard environment

A fully managed serverless application platform for web and API backends. Use popular development languages without worrying about infrastructure management.

Use Cases

  • Web applications
  • APIs, like mobile and SPA backends


I found this answer on StackOverflow, which I am reproducing here with a few of my edits.

When creating relatively complex applications, CFs have several disadvantages compared to GAE.

  • Limited to Node.js, Python, and Go. GAE also supports .NET, Ruby, PHP, and Java.
  • CFs are designed for lightweight, standalone pieces of functionality; attempting to build complex applications from such components quickly becomes “awkward”. Yes, the inter-relationship context for every individual request must be restored on GAE just as well, but GAE benefits from more convenient means of doing that which aren’t available on CFs, for example user session management, as discussed in other comments.
  • GAE apps have an app context that survives across individual requests; CFs don’t have that. Such a context makes access to certain Google services more efficient/performant (or even possible at all) for GAE apps, but not for CFs. For example Memcache.
  • The availability of the app context for GAE apps can support more efficient/performant client libraries for other services which can’t operate on CFs. For example, accessing the Datastore using the ndb client library (only available for standard environment GAE Python apps) can be more efficient/performant than using the generic Datastore client library.
  • GAE can be more cost-effective as it’s “wholesale” priced (based on instance hours, regardless of how many requests a particular instance serves) compared to the “retail” pricing of CFs (where each invocation is charged separately).
  • Response times are typically shorter for GAE apps than for CFs, since the app instance handling the request is usually already running, and thus:
    • the GAE app context doesn’t need to be loaded/restored since it’s already available, while CFs need to load/restore it
    • the handling code is (most of the time) already loaded, while CFs’ code may still need to be loaded. Not too sure about this one, though; I guess it depends on the underlying implementation.

Note that nothing prevents us from mixing both notions: an App Engine application can launch jobs through Cloud Functions.
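As a sketch of that mixing, an Express handler running on GAE could hand work off to a Cloud Function over HTTP. The URL and payload below are hypothetical placeholders:

```javascript
// Hypothetical trigger URL of a deployed Cloud Function -- substitute your own.
const CF_URL = 'https://REGION-PROJECT.cloudfunctions.net/processJob';

// Build the fetch() options for posting a JSON job payload to the function.
function buildJobRequest(payload) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  };
}

// In an Express route on GAE you might then fire off the job:
// app.post('/launch', async (req, res) => {
//   await fetch(CF_URL, buildJobRequest({ job: 'resize', id: req.body.id }));
//   res.send('job launched');
// });
```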

Summary

Use Cloud Functions (CFs) for “tasks” and use Google App Engine (GAE) for “full applications”.


Essential Docker Compose Commands

Launch in background

docker-compose up -d

If you want to rebuild the Docker images, you can use the --build flag after the up command. This is essentially the same as if you wrote:

# docker build .
# docker run myimage

docker-compose up --build

Stop containers

docker-compose down

List running containers

docker-compose ps
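For context, all of these commands operate on a docker-compose.yml file in the current directory. A minimal sketch for a Node.js app like the one built later in this post (the service name and port values are illustrative):

```yaml
# Minimal docker-compose.yml sketch; the service name "web" is illustrative
version: '3'
services:
  web:
    build: .
    ports:
      - "8081:8081"   # host:container
```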

Tagging docker images

Normally with “docker build .” you get an image ID that you can run with “docker run IMAGEID”, but if you want a friendlier name you can tag the image like this:

docker build -t YOURDOCKERUSERNAME/PROJECT:latest .

After that you can refer to the image with the tag instead of the ID, like this:

docker run -p 8081:8081 YOURDOCKERUSERNAME/PROJECT

Create, Run and Delete Container from Dockerfile

First, let’s make a simple “hello world” that runs a Node.js web server inside the container.

STEP 1

Create a folder and put the following files in it:

Dockerfile

# Specify a base image
FROM node:alpine
WORKDIR /app
# Install some dependencies
COPY ./package.json ./
RUN npm install
COPY ./ ./
# Default command
CMD ["npm", "start"]
package.json
{
  "dependencies": {
    "express": "*"
  },
  "scripts": {
    "start": "node index.js"
  }
}
index.js
const express = require('express');
const app = express();
app.get('/', (req, res) => res.send('Hello World!'));
app.listen(8081, () => {
  console.log('Listening on port 8081');
});
This creates a simple web server that listens on port 8081 and responds with “Hello World!”.
STEP 2
Build Docker Image and Run it
docker build .
This will create an image from the Dockerfile to your computer.
Tip: You can have multiple configurations, for example a separate configuration for local development. Use the -f flag to point to it, like this: docker build -f Dockerfile.dev .
The previous command created an image for you and printed its image ID. It looks something like this on the console:
Successfully built 6bf0f35fae69
Now, take this image ID and run it like this:
docker run 6bf0f35fae69
The container is now running, but since what we created is a web server, the host has no idea how to access it. We need to do some port mapping.
Stop the container with CTRL+C.
Then run the same command, but with port mapping:
docker run -it -p 8081:8081 6bf0f35fae69
In the -p parameter, ports are mapped as host:container.
STEP 3
View and delete container
docker ps -a
docker rm CONTAINERID
To remove all containers
docker rm $(docker ps -a -q)

Essential Docker Commands

Containers

Use docker container my_command

create — Create a container from an image.
start — Start an existing container.
run — Create a new container and start it.
ps — List running containers.
inspect — See lots of info about a container.
logs — Print logs.
stop — Gracefully stop a running container.
kill — Stop the main process in a container abruptly.
rm — Delete a stopped container.

Images

Check existing Docker images on your system: docker images

Use docker image my_command

build — Build an image.
push — Push an image to a remote registry.
ls — List images.
history — See intermediate image info.
inspect — See lots of info about an image, including the layers.
rm — Delete an image.

Misc

docker version — List info about your Docker Client and Server versions.
docker login — Log in to a Docker registry.
docker system prune — Delete all unused containers, unused networks, and dangling images.