Docker
volumes
Mount host paths or named volumes, specified as sub-options to a service.
You can mount a host path as part of a definition for a single service; there is no need to define it in the top-level volumes key.
But if you want to reuse a volume across multiple services, define a named volume in the top-level volumes key. Use named volumes with services, swarms, and stack files.
Note: The top-level volumes key defines a named volume and references it from each service’s volumes list. This replaces volumes_from in earlier versions of the Compose file format. See Use volumes and Volume Plugins for general information on volumes.
This example shows a named volume (mydata) being used by the web service, and a bind mount defined for a single service (first path under db service volumes). The db service also uses a named volume called dbdata (second path under db service volumes), but defines it using the old string format for mounting a named volume. Named volumes must be listed under the top-level volumes key, as shown.
version: "3.2"
services:
web:
image: nginx:alpine
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static
db:
image: postgres:latest
volumes:
- "/var/run/postgres/postgres.sock:/var/run/postgres/postgres.sock"
- "dbdata:/var/lib/postgresql/data"
volumes:
mydata:
dbdata:
Syntax: a short-form volume entry is simply a string; don't put a space after the colon, e.g. - "/var/run/postgres/postgres.sock:/var/run/postgres/postgres.sock"
or - "dbdata:/var/lib/postgresql/data"
Choosing between named volumes, bind mounts, and volume containers
It's best not to access files directly inside /var/lib/docker, where named volumes are stored. Those directories are meant to be managed by the Docker daemon and not to be messed with.
To access the data inside a volume, there are a number of options:
- use a bind-mounted directory (you considered that, but it didn't fit your use case).
- use a "service" container that uses the same volume and makes it accessible through that container, for example a container running ssh (to use scp) or a SAMBA container (such as svendowideit/samba); see the sketch after this list.
- use a volume-driver plugin. There are various plugins around that offer all kinds of options. For example, the local-persist plugin is a really simple plugin that lets you specify where Docker should store the volume data (so outside of /var/lib/docker).
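A minimal variant of the second option: instead of a long-running ssh or SAMBA container, a throwaway container can mount the same named volume and copy its contents out (the volume name dbdata is an assumption; with Compose the actual name is usually prefixed with the project name):
$ docker run --rm -v dbdata:/volume -v "$(pwd)":/backup alpine tar czf /backup/dbdata-backup.tar.gz -C /volume .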
[Dockerfile]
Build an image that uses the company proxy to download new components online via pip:
FROM python:2
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt --proxy=http://corporate.com:80
COPY . .
EXPOSE 4000
CMD ["python", "app.py"]
VOLUME [ "/AvailableAnalytics" ]
pipreqs /home/project/location generates the file requirements.txt:
Werkzeug==0.12.2
numpy==1.12.1
Flask==0.12.2
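With the Dockerfile and requirements.txt in place, the image referenced by the compose file below can be built along these lines (the analytics tag must match the image: value used below; alternatively, the proxy could be supplied via Docker's predefined HTTP_PROXY build argument instead of being hard-coded in the RUN line):
$ docker build -t analytics .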
docker-compose.yml
version: '3.2'
services:
  analytics:
    image: "analytics"
    ports:
      - "4000:4000"
    volumes:
      - analytics-data:/usr/src/app/AvailableAnalytics
volumes:
  analytics-data:
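Starting the stack and checking the endpoint (a sketch; this assumes get_rest_information() in app.py below makes Flask listen on port 4000, which the Dockerfile exposes and the compose file maps):
$ docker-compose up -d
$ curl http://localhost:4000/
# should answer with 'Analytics' (see app.py below)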
app.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return 'Analytics'

if __name__ == "__main__":
    # get_rest_information() is defined elsewhere in the project
    (flask_hostname, flask_port, flask_debug) = get_rest_information()
    app.run(
        host=flask_hostname,
        port=flask_port,
        debug=flask_debug)
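get_rest_information() is not shown here; a minimal sketch of what it might do, assuming it reads the Flask settings from environment variables (the variable names and defaults below are assumptions):
import os

def get_rest_information():
    # Assumed helper: pull host/port/debug from the environment,
    # defaulting to the port exposed in the Dockerfile (4000).
    hostname = os.environ.get('FLASK_HOSTNAME', '0.0.0.0')
    port = int(os.environ.get('FLASK_PORT', '4000'))
    debug = os.environ.get('FLASK_DEBUG', 'false').lower() == 'true'
    return hostname, port, debug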
env
environment
Add environment variables. You can use either an array or a dictionary. Any boolean values (true, false, yes, no) need to be enclosed in quotes to ensure they are not converted to True or False by the YAML parser.
Environment variables with only a key are resolved to their values on the machine Compose is running on, which can be helpful for secret or host-specific values.
environment:
  RACK_ENV: development
  SHOW: 'true'
  SESSION_SECRET:

environment:
  - RACK_ENV=development
  - SHOW=true
  - SESSION_SECRET
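For example, SESSION_SECRET is listed without a value above, so Compose picks it up from the shell it runs in (a sketch; the value is illustrative):
$ export SESSION_SECRET=changeme
$ docker-compose up -d
# the container now sees SESSION_SECRET=changeme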
Note: If your service specifies a build option, variables defined in environment are not automatically visible during the build. Use the args sub-option of build to define build-time environment variables.
From https://docs.docker.com/compose/compose-file/#environment
Setting Variables in Dockerfiles
So, how do you set stuff so it’s available to future running containers? ENV. Take a look at this Dockerfile snippet:
(Dockerfile snippet)
# or: ENV foo=/bar
ENV foo /bar
# or: ADD . ${foo}
ADD . $foo
# which translates to: ADD . /bar
Or declare the environment variable from a build argument, as below:
(Dockerfile snippet)
ARG some_variable_name
ENV env_var_name=$some_variable_name
ARG DB_APP=postgres_app
ENV DB_APP=$DB_APP
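A sketch of how the ARG value ends up as a runtime variable, assuming the snippet above is built into an image tagged dbdemo (tag and value are illustrative):
$ docker build -t dbdemo --build-arg DB_APP=postgres_app .
$ docker run --rm dbdemo env | grep DB_APP
# prints DB_APP=postgres_app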
Passing Environment Variables From the Host Into a Container (overwrite?)
If you don't specify the value of the environment variable on the command line, but just the name, its value is taken from the host environment:
$ docker run -e env_var_name alpine env
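To set an explicit value instead (which takes precedence over whatever the host environment has), pass name=value:
$ docker run -e env_var_name=some_value alpine env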
Or pass a new environment variable to the container via docker-compose:
(docker-compose file snippet)
version: '3.2'
services:
  app:
    image: "app"
    ports:
      - "3000:3000"
    environment:
      - 'DB_APP=postgres://postgres:password@10.0.2.15:5432/app'
From https://vsupalov.com/docker-env-vars/
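Once the service is up, the variable can be checked inside the container (the service name app comes from the compose snippet above):
$ docker-compose exec app env | grep DB_APP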
ARGS
Add build arguments, which are environment variables accessible only during the build process.
First, specify the arguments in your Dockerfile:
ARG buildno
ARG password
RUN echo "Build number: $buildno"
RUN script-requiring-password.sh "$password"
$ docker build --build-arg some_variable_name=a_value .
From https://vsupalov.com/docker-env-vars/
Then specify the arguments under the build key. You can pass either a mapping or a list:
build:
  context: .
  args:
    buildno: 1
    password: secret

build:
  context: .
  args:
    - buildno=1
    - password=secret
You can omit the value when specifying a build argument, in which case its value at build time is the value in the environment where Compose is running.
args:
  - buildno
  - password
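In that case the values are taken from the environment in which docker-compose runs, e.g. (a sketch; values are illustrative):
$ export buildno=2
$ export password=secret
$ docker-compose build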
Note: YAML boolean values (true, false, yes, no, on, off) must be enclosed in quotes, so that the parser interprets them as strings.
From https://docs.docker.com/compose/compose-file/#args
Save and load images. The following commands were executed:
docker save --output latestversion-1.0.0.tar dockerregistry/latestversion:1.0.0
Copy the image to a server and import it as follows:
docker load --input latestversion-1.0.0.tar
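The loaded image should then show up locally:
$ docker images dockerregistry/latestversion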