Columns: Response · Instruction · Prompt
Running `ln /dev/null /dev/raw1394` inside the Dockerfile won't help you because `/dev` is not part of the docker image. You can get around this by adding a volume mount. An example `Dockerfile` and `docker-compose.yml` would look like this:

```
[fedora@myhost ~]$ cat Dockerfile
FROM ubuntu:14.04
RUN apt-get update && \
    apt-get install -y \
      libdc1394-22-dev \
      libdc1394-22 \
      libdc1394-utils \
      python-opencv && \
    rm -rf /var/lib/apt/lists/*

[fedora@myhost ~]$ cat docker-compose.yml
version: '2'
services:
  opencv:
    build: .
    command: python -c "import cv2; print cv2.__version__"
    volumes:
      - /dev/null:/dev/raw1394

[fedora@myhost ~]$ sudo docker-compose up
Recreating fedora_opencv_1
Attaching to fedora_opencv_1
opencv_1 | 2.4.8
fedora_opencv_1 exited with code 0
```
I am running ubuntu 14.04 in a docker container and have opencv installed. Every time it runs I receive the following error, as described here: OpenCV: libdc1394 error: Failed to initialize libdc1394. The technique of linking /dev/null to the device file seems to work, but it is not persistent in the docker container, and even though I have `RUN ln /dev/null /dev/raw1394` in my docker file, if I run something like `docker-compose run bash` the error will persist in that session. What line can I add to my docker file that will get rid of this error message?
Open CV error failed to init raw1394 persisting in docker
Write a new entrypoint script.

```bash
#!/bin/bash
/bin/api "$@"
# perhaps
source /script.sh
# or exec
exec /script.sh
```

Copy this into your image on build.

```
COPY --chmod=755 entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
```

This will result in the arguments you are passing in your compose file being passed to /bin/api, and after that your script is executed. I am not sure if that would actually be useful. The API command looks like it's starting a long-running process, so your script might never really run.

You could also do something like this in your entrypoint:

```bash
#!/bin/bash
run_script(){
  sleep 5
  source /script.sh
}
run_script &
exec /bin/api "$@"
```

It would hide errors from that script though, so it's not really solid. This would run the function in the background, which would sleep 5 seconds to give the API time to start, and then run the script while the API is running.

It's hard to say, without knowing what your script is doing, what would be a good solution. The second suggestion definitely feels a bit hacky.
I want to run 2 different commands for my service from docker-compose:

```
bash script.sh
config /etc/config.yaml
```

Currently, my docker-compose looks like the below. I want the bash script to run after the config command.

docker-compose.yaml:

```yaml
services:
  API:
    build: .
    ports:
      - 8080:8080
    environment:
      - "USER=${USER}"
      - "PASSWORD=${PASSWORD}"
    volumes:
      - ./conf/config.yaml:/etc/api.yaml
    command: ["-config", "/etc/api.yaml"]
```
How to run 2 different commands from docker-compose command:
This is because tmux is not aware that your terminal supports UTF-8. Either:

- Add the `-u` flag to tmux (`tmux -u new` or `tmux -u attach`); or
- Make sure your system is in a UTF-8 locale and the environment variables are set correctly (`LANG`, `LC_ALL` and so on).
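As a minimal sketch of the second option, assuming the container image has a UTF-8 locale generated (`en_US.UTF-8` here is only a placeholder; use whatever UTF-8 locale your image actually provides):

```bash
# set a UTF-8 locale for this shell, then start tmux
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
tmux -u new   # -u forces UTF-8 handling regardless of what tmux detects
```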
I installed Parrot OS as a docker container and when I run tmux in it, the prompt string changes. It acts like an eyesore to an OCD person. I don't really know what the issue could be called exactly so I can't mention the term, sorry for that. In the image, there are two different containers of Parrot OS, with tmux running in the upper container. I think the issue is understandable from the image.
Docker tmux issue
Is FLASK_APP defined in the docker container? There's no ENV statement in the Dockerfile, and you didn't mention using docker's `-e` or `--env` command option. Your container won't inherit your environment variables from the hosting environment.

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.7.8
WORKDIR /tododocker
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
# Add this:
ENV FLASK_APP=App.py
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
```
I hope someone can help me with this. The following error comes up when I try to run `docker run todolist-docker`:

```
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Debug mode: off
Error: Could not locate a Flask application. You did not provide the "FLASK_APP" environment variable, and a "wsgi.py" or "app.py" module was not found in the current directory.
```

My folder directory is here:

```
Folder name: todoflaskappfinal
    __pycache__
    static
    templates
    venv
    App.py
    Dockerfile
    requirements.txt
    todo.db
    ToDoList.db
```

Within the todoflaskappfinal folder, I have a Dockerfile:

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.7.8
WORKDIR /tododocker
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
```

And within App.py, I've set up everything (I assume) correctly, obviously with more code between this:

```python
#Website Configuration
app = Flask(__name__)

if __name__ == "__main__":
    app.run(debug=True, port=5000, host='0.0.0.0')
```

I've set FLASK_APP as App.py, made the virtual environment with venv, etc. When I type `flask run` in the terminal it loads the website up correctly and displays it on 127.0.0.1. However, when I try to use the `docker build --tag todolist-docker` command and then `docker run todolist-docker`, the error message above appears. Can anyone see what I'm doing wrong?
Could not locate a Flask application when creating a Docker Image
Sinatra was binding to the wrong interface. Fixed by adding the `-o` switch:

```
CMD ruby hei.rb -p 4567 -o 0.0.0.0
```
Updating the post with all files required to recreate the setup. Still the same problem: not able to access the service running in the container.

```dockerfile
FROM python:3
RUN apt-get update
RUN apt-get install -y ruby rubygems
RUN gem install sinatra
WORKDIR /app
ADD . /app/
EXPOSE 4567
CMD ruby hei.rb -p 4567
```

hei.rb

```ruby
require 'sinatra'
get '/' do
  'Hello world!'
end
```

docker-compose.yml

```yaml
version: '2'
services:
  web:
    build: .
    ports:
      - "4567:4567"
```

I'm starting the party by running `docker-compose up --build`.

`docker ps` returns: `0.0.0.0:4567->4567/tcp`

Still, no response from port 4567. Testing with curl from the host machine:

```
$ curl 127.0.0.1:4567   # and 0.0.0.0:4567
```

localhost:4567 replies within the container:

```
$ docker-compose exec web curl localhost:4567
Hello world!%
```

What should I do to be able to access the Sinatra app running on port 4567?
App running in Docker container on port 4567 can't be accessed from the outside
Try changing your `start` script to:

```
webpack-dev-server --host 0.0.0.0 --port 3000
```

And your Dockerfile to:

```dockerfile
FROM node:lts-slim
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app/
EXPOSE 3000
CMD [ "npm", "start" ]
```

Note: I highly advise against running your containers as root. You should always downgrade your user with the `USER ...` instruction.

Security

According to this Snyk report, you are using a vulnerable base image in addition to running it as root. I highly recommend you use this image instead. Furthermore, you should run your image as a non-root user:

```dockerfile
FROM node:13.8.0-alpine
# don't run as root
RUN addgroup -S app_group && adduser -S -G app_group app_user
RUN mkdir -p /usr/src/app && chown app_user /usr/src/app
WORKDIR /usr/src/app
COPY --chown=app_user:app_group . /usr/src/app/
EXPOSE 3000
USER app_user
CMD [ "npm", "start" ]
```
I am trying to use docker-compose for a react-typescript application with webpack-dev-server. Below is my Dockerfile:

```dockerfile
FROM node:lts-slim
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
```

`"start": "webpack-dev-server --port 3000"` is the relevant package.json line.

docker-compose.yml:

```yaml
version: "3"
services:
  frontend:
    container_name: awesome_web
    build:
      context: ./client
      dockerfile: Dockerfile
    image: webpack
    ports:
      - "3000:3000"
    volumes:
      - ./client:/usr/src/app
```

I executed the command `docker-compose up --build`; based on the logs, the application compiled successfully. Output of `docker ps`:

```
CONTAINER ID  IMAGE    COMMAND                 CREATED         STATUS         PORTS                   NAMES
c88198ba996c  webpack  "docker-entrypoint.s…"  21 seconds ago  Up 20 seconds  0.0.0.0:3000->3000/tcp  awesome_web
```

But when I try to access localhost:3000 I get the error "This site can’t be reached". I am new to docker and following online blogs, but I am not able to figure out why I can't reach the site.
This site can’t be reached docker-compose
I'm not clear how you built the image successfully locally. But I recommend you take a look at the image `mcr.microsoft.com/windows/servercore` on Docker Hub, which also lists the available tags. There is no tag named `ltsc`; you need to choose one of the available tags, for example `mcr.microsoft.com/windows/servercore:ltsc2019`.

Update: I can reproduce the same issue you got; then I added the parameter `--platform windows` and it works perfectly. I recommend you give it a try. Hope it helps.
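A sketch of what the two fixes might look like together, assuming the cloud build is an ACR Tasks build driven by `az acr build` (registry and image names are placeholders):

```powershell
# Dockerfile: pin a real servercore tag instead of the non-existent "ltsc"
# FROM mcr.microsoft.com/windows/servercore:ltsc2019 AS build-stage1

# ACR Tasks build, explicitly targeting Windows
az acr build --registry myregistry --platform windows -t myapp:latest .
```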
This works perfectly when I build locally, but to avoid having to move images to the Azure container registry I am trying to build in the cloud.

The Dockerfile contains:

```dockerfile
FROM mcr.microsoft.com/windows/servercore:ltsc AS build-stage1
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
. . .
```

```
Step 5/37 : FROM mcr.microsoft.com/windows/servercore:ltsc AS build-stage1
manifest for mcr.microsoft.com/windows/servercore:ltsc not found: manifest unknown: manifest unknown
2019/10/01 14:32:28 Container failed during run: build. No retries remaining.
failed to run step ID: build: exit status 1
Run ID: ch1k failed after 7s. Error: failed during run, err: exit status 1
```

And yet the same base image works fine when I build from a locally hosted PowerShell.
Unable to use mcr.microsoft.com/windows/servercore:ltsc when building from azure cloudshell (powershell)
There are multiple options to achieve this, but the 2 most common ways are:

1. Create a directory on your host to mount the data
2. Create a docker volume to mount the data

1) Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`. Start your mongo container like this:

```
$ docker run --name some-mongo -v /my/own/datadir:/data/db -d mongo:tag
```

The `-v /my/own/datadir:/data/db` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/data/db` inside the container, where MongoDB by default will write its data files.

Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:

```
$ chcon -Rt svirt_sandbox_file_t /my/own/datadir
```

The source of this is the official documentation of the image.

2) Another possibility is to use a docker volume.

```
$ docker volume create my-volume
```

This will create a docker volume under `/var/lib/docker/volumes/my-volume`. Now you can start your container with:

```
docker run --name some-mongo -v my-volume:/data/db -d mongo:tag
```

All the data will be stored in `my-volume`, i.e. under `/var/lib/docker/volumes/my-volume`. So even when you delete your container and create a new mongo container linked with this volume, your data will be loaded into the new container.

You can also use the `--restart=always` option when you perform your initial `docker run` command. This means that your container will automatically restart after a reboot of your VM. When you've persisted your data too, there will be no difference between your DB before or after the reboot.
I have installed the official MongoDB docker image in a VM on AWS EC2, and the database already has data in it. If I stop the VM (to save expenses overnight), will I lose all the data contained in the database? How can I make it persistent in those scenarios?
Stop VM with MongoDB docker image without losing data
The solution appears to have been to restore the shell colors to default and restart all the relevant services. Because I am unsure as to what was preventing the default colors from fixing the problem, the solution may require an OS reboot.
(Windows Git-bash) When I use git bash as the terminal in an IntelliJ project, I have problems when I log into a docker container and use `ls`. Text gets highlighted light blue and the color doesn't go away until I exit. Any thoughts on how to correct this? I suspect this comes from IntelliJ's recoloring of the shell colors. Perhaps there is a way to remove the influence of the Darcula theme colors? Here is what the same looks like on a normal OS panel:
(Windows Git-bash) IntelliJ git bash shell color scheme messed up with Docker
`mysql` is officially supported by Docker. `mysql/mysql-server` is officially supported by MySQL. It is your choice which organization you want to rely on for updates, support, etc.
I am trying to install MySQL in Docker. When I searched for a MySQL image on Docker Hub, it gave me `mysql`, which says it is the "official" release. But when I went to the MySQL website, it says that `mysql/mysql-server` is the official image. I am a bit confused about which is official and which one I should use. Of course I *can* use either, but which one *should* I use if I am concerned about using the official image?
Which docker image is the official MySQL image?
> I don't see production.log printed out.

Where are you looking? On your host, or in the docker container that you are running? By default, the file will be created in the container filesystem.

If you want it directly visible from your host (which runs the docker daemon), you would need to mount a host directory as a data volume first:

```
docker run -d -P --name myapp -v /a/local/host/path:/path/to/log/in/container myimage
```

Then you would see `production.log` in `/a/local/host/path`.
In my production environment (`production.rb`) I have configured my log to be saved to a file:

```ruby
config.logger = Logger.new('log/production.log')
```

When I run locally (starting the server with `rails s -e production`), everything works fine. But when I run in the docker environment, I don't see `production.log` written out. Please help me with this problem.
Log doesn't save to file in docker environment
First, you need to build an executable for x86_64 (for that, use a builder image with GraalVM or Mandrel for x86_64):

```
./mvnw clean package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.builder-image=quay.io/quarkus/ubi-quarkus-graalvmce-builder-image:22.3.2-java17-amd64 -DskipTests
```

Second, you need to build your image for x86_64:

```
docker buildx build --platform linux/amd64 -f src/main/docker/Dockerfile.native -t your-image-amd64 .
```

Note: the build is slow.
I am trying to build a Quarkus 2.8.0 native docker container for the x86 platform from an Apple M1 MacBook and deploy it to a Linux amd64 Portainer. I was able to build the native image, and when checking the file with `file target/simple-app-1.0.0-SNAPSHOT-runner` the output is:

```
target/simple-app-1.0.0-SNAPSHOT-runner: Mach-O 64-bit executable x86_64
```

And then I am building the docker container using the `Dockerfile.native-micro` file and pushing to my local registry with this command:

```
docker buildx build -t local-registry/repo/simple-app:latest-x86_64 -f src/main/docker/Dockerfile.native-micro --push --platform=linux/amd64 .
```

The build process finished successfully without errors or warnings, and when I check the local registry the image is created. The problem appears when I try to deploy the container on my Linux amd64 server with Portainer. The container is unable to start, and the log output is:

```
standard_init_linux.go:219: exec user process caused: exec format error
```
Building quarkus native Linux/amd64 (x86_64) image from Apple M1 (Arm)
There is no inheritance-type setup for Dockerfiles like you are suggesting.

To implement a combined build you would need to find the common `FROM` ancestor of `standalone-firefox` and `standalone-chrome`, which is `selenium/node-base`, and create your own Dockerfile that reapplies all the build steps `selenium/standalone-chrome` applies. Then keep it in sync whenever Selenium update their builds.

Dockerfile hierarchy:

```
             selenium/node-base
             /                \
selenium/node-chrome     selenium/node-firefox
        |                        |
selenium/standalone-chrome  selenium/standalone-firefox
```

The problem is these builds have been designed to be separate, so there is significant overlap in the variables and settings that the images use, which you would also need to unpick in your custom build to control and run both Chrome and Firefox at the same time. You will probably end up having to do everything from scratch.

Selenium Grid

Running individual Selenium Grid nodes behind a grid hub is the standard way to do multi-browser testing from a single endpoint. You can run Firefox, Chrome or PhantomJS nodes in Docker, or connect standard nodes from anywhere else.

Poor man's grid

You can always run a container for Chrome and Firefox on separate ports and point the same test suite at a different port, if setting up a Grid is a lot of work for the simple case of running some tests against each browser.
I have the following Dockerfile which will build a Selenium server:

```dockerfile
FROM selenium/standalone-firefox:3.4.0-chromium
FROM selenium/standalone-chrome
USER root
ENV NODE_ENV test
RUN mkdir -p /usr/local/cdt-tests/csv-data
COPY ./csv-data /usr/local/cdt-tests/csv-data
USER seluser
```

Obviously the two FROM statements are incorrect. How can I create a Selenium server container that has both a Chrome driver and a Firefox driver for Selenium? As far as I can tell, the `selenium/standalone-firefox:3.4.0-chromium` image only works for Firefox.
Create Dockerfile that includes Firefox and Chrome drivers for Selenium
Your shell is `/bin/sh`, but `gvm` puts its initialization in `~/.bashrc` and expects `/bin/bash`. You need to source the `gvm` initialization script to run the commands from a non-interactive bash shell:

```dockerfile
RUN ["/bin/bash", "-c", ". /root/.gvm/scripts/gvm && gvm install go1.4 -B"]
RUN ["/bin/bash", "-c", ". /root/.gvm/scripts/gvm && gvm use go1.4"]
```

Or even better might be to put the commands you want to execute in a single bash script and add that to the image:

```bash
#!/bin/bash
set -e
source /root/.gvm/scripts/gvm
gvm install go1.4
gvm use go1.4
```
Problem: I'm attempting to create a Dockerfile that installs all the components to run Go, installs GVM (Go Version Management), and installs specific Go versions.

Error: When I try building the container with `docker build -t ##### .` I get this error:

```
/bin/sh: 1: gvm: not found
The command '/bin/sh -c gvm install go1.4 -B' returned a non-zero code: 127
```

GVM is installed here:

```
/root/.gvm/scripts/env/gvm
/root/.gvm/scripts/gvm
/root/.gvm/bin/gvm
```

What I tried: it's clearly able to install GVM but unable to use it. Why? I thought maybe I needed to refresh the `.bashrc` or the `.bash_profile`... but that didn't work, since they don't exist.

Dockerfile:

```dockerfile
FROM #####/#####

#Installing Golang dependencies
RUN apt-get -y install curl git mercurial make binutils bison gcc build-essential

#Installing Golang
RUN ["/bin/bash", "-c", "bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)"]

#gvm does not exist here... why?
RUN gvm install go1.4 -B
RUN gvm use go1.4
```

Question: Why does GVM not seem to be installed? How do I get rid of the error?
/bin/sh: 1: gvm: not found
What you need is to add a load-balancer service, which then forwards ports 80/443 of the host to the container (app/nginx/whatever). So navigate to your stack and click Add Service -> Load Balancer. Then you can choose which domain to trigger on (or catch all, which I would do for now) and then which target. There you select your app container and the port on which the container has its app / httpd server running, and that's basically it.
I need to set up subdomains for apps in docker containers, not on the internal Rancher network but for public use. I have a domain delegated to the Rancher server. There is a host property in almost all stacks from the catalog, but it doesn't work. I guess I need to delegate the domain using some Rancher DNS or set up nginx to proxy traffic to the Rancher server, but I can't find anything on how to do it.
Rancher external subdomains
Simple answer: it's not possible. YAML sequences (lists) are replaced, not merged, when you override a key, so when overriding you have to repeat all the list entries.
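A sketch of what that repetition looks like, using the volume paths from the question (each service would still need its own `image:` or `build:` key, omitted here):

```yaml
x-defaults: &my-defaults
  volumes:
    - /first:/volume
    - /second:/volume

services:
  my-service1:
    <<: *my-defaults
    volumes:              # this list replaces the anchor's list entirely, so repeat it all
      - /first:/volume
      - /second:/volume
      - /additional:/volume
  my-service2:
    <<: *my-defaults
    volumes:
      - /first:/volume
      - /second:/volume
      - /custom:/vol
```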
I want my docker-compose file to merge (reuse) volume definitions, as follows:

```yaml
x-defaults: &my-defaults
  volumes:
    - /first:/volume
    - /second:/volume

services:
  my-service1:
    <<: *my-defaults
    volumes:
      - /additional:/volume
  my-service2:
    <<: *my-defaults
    volumes:
      - /custom:/vol
```

Result: only `/additional:/volume` is mapped. Question: how can I achieve a real merge here?
How to inherit/merge volumes from yaml anchors in docker-compose?
TL;DR: I forgot to change/add the actuator Spring profile.

Long version: the problem was inside my Dockerfile:

```dockerfile
FROM openjdk:8-jdk-alpine
ADD target/app.jar /jar/
VOLUME /tmp
EXPOSE 8080
ENV SPRINGPROFILES=prod
CMD ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "-Dserver.port=8080", "-Dserver.address=0.0.0.0", "/jar/app.jar", "--spring.profiles.active=${SPRINGPROFILES}"]
```

I forgot to pass the SPRINGPROFILES variable (= prod,actuator); it did not recognize the variable. After I changed the Dockerfile to

```dockerfile
FROM openjdk:8-jdk-alpine
ADD target/app.jar /jar/
VOLUME /tmp
EXPOSE 8080
ENV SPRINGPROFILES=prod
ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/./urandom","-Dspring.profiles.active=${SPRINGPROFILES}","-jar", "-Dserver.port=8080", "-Dserver.address=0.0.0.0", "/jar/app.jar"]
```

and added the env variable to my docker-compose file, it worked:

```yaml
environment:
  - "SPRINGPROFILES=prod,actuator"
```
I have a Spring Boot microservice with custom Spring Boot actuators. When I run the jar directly I can access all of my actuators; when I run the same jar inside a Docker image I get a 404 error.

SecurityConfig:

```java
@Configuration
public class ActuatorSecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
                .requestMatcher(EndpointRequest.toAnyEndpoint())
                .anonymous()
                //.authorizeRequests()
                //.anyRequest().authenticated()
                .and()
                .httpBasic();
    }
}
```

Application.yaml:

```yaml
spring:
  profiles: actuator

management.endpoints:
  web.exposure.include: "*"
  health.show-details: always
```

This is the "boilerplate code" of my actuators:

```java
@Component
@RestControllerEndpoint(id = "acturatorName")
public class acturatorNameActurator {

    @GetMapping(value = "/foo", produces = "application/json")
    public String bar(){
        return "{\"status\":\"started\"}";
    }
    ...
}
```

None of my custom actuators that work when I just run the jar work inside Docker. `/actuator/info` does work, for example, but `/actuator/metrics` doesn't. What can I do to fix this? Thanks in advance.

Edit: is maybe my security configuration wrong? Does Spring maybe block the request because the container is in another (Docker) network? But then I would get something different than 404, right? Spring is bound to IP 0.0.0.0, port 8080, and I can access my REST endpoints normally.
Use Spring Boot Actuator in Docker?
You can find various 32-bit docker images on Docker Hub, for example https://hub.docker.com/r/32bit/ubuntu/

You can learn how to build one yourself with http://mwhiteley.com/linux-containers/2013/08/31/docker-on-i386.html

About the hashes, you should read https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/
I am looking for a lightweight docker image with 32-bit Ubuntu. Where can I find it? When I run `docker build .` in my directory with a Dockerfile, I get output like:

```
Sending build context to Docker daemon 10.21 MB
Step 1 : FROM ubuntu
 ---> 104bec311bcd
Step 2 : ENV x /home/
 ---> Using cache
 ---> e38b022c91f8
Step 3 : COPY ./x /home
 ---> Using cache
 ---> c4558c94236f
```

What do those "hashes" mean? I mean, for example, `c4558c94236f`. And what about when it comes to the cache?
Docker image of 32 bit Ubuntu
You didn't publish the port. 9042 is only the container's port. When you run the docker image you must remember this:

```
docker run -p 9042:9042 image-name
```

The first 9042 defines the port number that the outside world will connect to, and the second 9042 defines the container's port, which will be bound to that outer port 9042.
Hi, I am new to the world of Docker and Cassandra. I have a problem connecting to Cassandra in Docker from my computer. I run a Cassandra container and I see that the exposed IPs and ports are 192.168.99.100:9042 (first image). In Docker I can even see that "Test Cluster" is running, but when I want to connect to Cassandra with NoSQL Manager for Cassandra there is always the error message "None of the hosts tried for query are available". Thank you.
Cassandra in Docker can't connect from outside
I don't know what a wheel or a docker is, but your error comes from a mismatch between the Python used to build the module and the one that is trying to run it.
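One way to avoid that mismatch is to build the wheels with the same interpreter that will run them. A sketch, assuming the container is based on the official `python:2.7` image (adjust the tag to whatever your Dockerfile actually uses):

```bash
# Build the wheels inside a throwaway container that matches the runtime image,
# mounting the project directory so the wheelhouse lands back on the host.
docker run --rm -v "$PWD":/src -w /src python:2.7 \
    pip wheel --wheel-dir wheelhouse -r requirements.txt
```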
I am trying to install some python requirements from a local package directory containing wheel archives. I am installing the requirements inside a Docker container.

The steps I'm following are:

```
$ pip install wheel
# wheel runs, outputs .whl files to wheelhouse directory
$ pip wheel --wheel-dir wheelhouse -r requirements.txt
```

Then, inside my Dockerfile:

```dockerfile
ADD requirements.txt /tmp/requirements.txt
ADD wheelhouse /tmp/wheelhouse
# install requirements. Leave file in /tmp for now - may be useful.
RUN pip install --use-wheel --no-index --find-link /tmp/wheelhouse/ -r /tmp/requirements.txt
```

This works, and all the requirements are installed correctly:

```
# 'app' is the name of my built docker image
$ docker run app pip list
...
psycopg2 (2.5.1)
...
```

However, if I actually try running something inside the container that *uses* psycopg2, then I get the following:

```
Error loading psycopg2 module: /usr/local/lib/python2.7/site-packages/psycopg2/_psycopg.so: undefined symbol: PyUnicodeUCS4_AsUTF8String
```

I presume that this is something to do with the way in which the wheels were built - I ran `pip wheel` on the container host machine (Ubuntu 12.04). How can I fix this? Using wheels significantly reduces the time taken to build the container image, so I don't want to revert to installing packages if I can help it.
Error loading psycopg2 module when installed from a 'wheel'
Effectively, there was a change in `$HOME`, and the change was merged after the Docker v1.0 release. I have built the Dockerfile you provided and it shows me `$HOME=/root` (I use Docker v1.5.0). Check Docker issue #2968 and the related commits for additional details.
I have found a strange difference between building docker images on my Ubuntu 14.04 host machine and the Docker Hub automated builds. This is my Dockerfile:

```dockerfile
FROM buildpack-deps:wheezy-scm
RUN echo $HOME
```

This is the output on my machine:

```
 ---> 2afbec25f6f6
Step 1 : RUN echo $HOME
 ---> Running in 6074455e13c0
/
 ---> 0cb1b6141f93
Removing intermediate container 6074455e13c0
Successfully built 0cb1b6141f93
```

And this one comes from Docker Hub:

```
 ---> 2afbec25f6f6
Step 1 : RUN echo $HOME
/root
 ---> 4c781d2d7d72
Successfully built 4c781d2d7d72
```

Note the different `HOME` directories: `/root` instead of `/`. Can anyone explain to me what is happening? This is my Docker version (I have installed the standard `docker.io` Ubuntu package):

```
$ docker version
Client version: 1.0.1
Client API version: 1.12
Go version (client): go1.2.1
Git commit (client): 990021a
Server version: 1.0.1
Server API version: 1.12
Go version (server): go1.2.1
Git commit (server): 990021a
```
Different home directory during Docker build in Docker hub
Based on your comment:

> the data persisted, but I still can't find the persisted data in my host ./data directory

and running this command:

```
docker run -i -t -v="data:/data" -p 5432:5432 my_image/postgresql:9.3
```

you appear to be confusing a named volume and a host volume. The named volume is used when you give the volume a name without a path, like `data`. The named volume stores the data using the docker driver (typically local) under a given name that you can reuse. It has the advantage of being listed in `docker volume ls`, and of being initialized with the content of the image at the mounted location.

If you include a full path, like `/home/username/data`, that would mount the directory from the docker host instead of using the named volume. The biggest disadvantage is that you don't get the directory initialized with the contents from the image, and you will likely encounter permission issues where the uid of the container process won't match the uid you use on your host.

For more details, see https://docs.docker.com/engine/tutorials/dockervolumes/
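A sketch of the difference, mounting at `/var/lib/postgresql` (one of the `VOLUME` paths declared in the question's Dockerfile; `pgdata` is a placeholder volume name and `./data` must already exist on the host):

```bash
# Named volume: data lives under docker's storage and shows up in `docker volume ls`
docker run -d -p 5432:5432 -v pgdata:/var/lib/postgresql my_image/postgresql:9.3

# Host volume: an absolute path mounts the host's ./data directory into the container
docker run -d -p 5432:5432 -v "$PWD/data":/var/lib/postgresql my_image/postgresql:9.3
```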
Here is my docker file:

```dockerfile
FROM ubuntu:14.04

RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list
RUN apt-get update && apt-get -y -q install python-software-properties software-properties-common \
    && apt-get -y -q install postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3

USER postgres
RUN /etc/init.d/postgresql start \
    && psql --command "CREATE USER pguser WITH SUPERUSER PASSWORD 'pguser';" \
    && createdb -O pguser pgdb

USER root
RUN echo "host all  all    0.0.0.0/0  md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf

EXPOSE 5432

RUN mkdir -p /var/run/postgresql && chown -R postgres /var/run/postgresql

VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]

USER postgres
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
```

Here is what I did...

I build the docker image:

```
docker build --rm=true -t my_image/postgresql:9.3
```

Then, I create a new directory called `data` in my current directory and run the following command:

```
docker run -i -t -v="data:/data" -p 5432:5432 my_image/postgresql:9.3
```

I open another terminal and enter the postgres shell by running:

```
psql -h my_docker_ip -p 5432 -U pguser -W pgdb
```

and I create a table:

```
pgdb=# create table test (test_id bigserial primary key);
```

I verify the table exists using `\dt` and exit the postgres shell. I terminate the docker process and rerun the following:

```
docker run -i -t -v="data:/data" -p 5432:5432 my_image/postgresql:9.3
```

I enter the postgres shell again and run `\dt`. I notice there are no tables, and in the `data` directory there are no files. I must be doing something wrong, since I am assuming that the table I created will persist. Can someone point out my mistake?
Docker volume does not persist data
Just starting the server is not enough. When the CMD exits, so does the container. Thus, if you start a service that's a daemon, you need to keep your process alive. This can be achieved by, for example, tailing the service log file. supervisord is another way to run processes and keep the CMD alive.

For example, you might do:

```
CMD /test.py && tail -F /var/log/zookeeper.log
```

Running from the command line you could do something similar:

```
docker run -t -i -v /root/test.py:/test.py zookeeper bash -c "python test.py && tail -F /var/log/zookeeper.log"
```
The docker container exits immediately after the python script execution:

```
docker run -t -i -v /root/test.py:/test.py zookeeper python test.py
```

(test.py starts the zookeeper service.) The command is successful but exits immediately without keeping the container running. I could NOT start the container with `docker start <container id>`. Manually running `python test.py` is successful inside the container, but not during `docker run ...`.
docker container exited immediately after python script execution
I was able to get it working the way I had hoped. Though I am still working out some details in the provisioning script, this is how I ended up getting the result I wanted from the docker side of things.

```dockerfile
FROM microsoft/aspnet:3.5-windowsservercore-10.0.14393.1198

#The shell line needs to be removed and any RUN commands need to be immediately preceded by 'powershell', e.g. RUN powershell ping google.com
#SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

COPY *.ps1 /Container/
COPY ["wwwroot", "/inetpub/wwwroot"]
COPY ["Admin", "/Program Files (x86)Admin"]
COPY ["Admin/web.config", "/Program Files (x86)/Admin/web_default.config"]

ENV myParm1 Hiiii
ENV myParm2 123

ENTRYPOINT ["powershell", "-NoProfile", "-Command", "C:\\Container\\Start-Admin.Docker.Cmd.ps1"]
CMD ["-parm1 $Env:myParm1 -parm2 $Env:myParm2"]
```

The docker run command looks like this:

```
docker run -d -p 8001:80 -e "myParm1=byeeeee" --name=container image:v1
```

I hope this helps someone else that is in my boat. Thanks for all the answers!
For the life of me, I cannot seem to get my provisioning script to execute when I run my container. Down the road, I will need to pass in arguments to the docker run command to replace 'Hiiii' and '123' for multiple container deployments.

This is my docker file:

```dockerfile
FROM microsoft/aspnet:3.5-windowsservercore-10.0.14393.1198

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

COPY *.ps1 /Container/
COPY ["wwwroot", "/inetpub/wwwroot"]
COPY ["Admin", "/Program Files (x86)Admin"]
COPY ["Admin/web.config", "/Program Files (x86)/Admin/web_default.config"]

#ENTRYPOINT ["powershell"]
CMD ["powershell.exe", -NoProfile, -File, C:\Container\Start-Admin.Docker.Cmd.ps1 -Parm1 'Hiiii' -parm2 '123']
```

I have also tried the shell form of CMD as follows:

```dockerfile
CMD powershell -NoProfile -File C:\Container\Start-Admin.Docker.Cmd.ps1 -Parm1 'Hiiii' -Parm2 '123'
```

These are the commands I am using:

```
docker image build -t image:v1 .
docker run --name container -p 8001:80 -d image:v1
```

After I create and run the container, I see that the script did not run, or failed. However, I can exec into powershell on the container and run the script manually, and it works fine and I see all the changes that I need:

```
docker exec --interactive --tty container powershell
C:\Container\Start-Admin.Docker.Cmd.ps1 -Parm1 'Hiiii' -Parm2 '123'
```

I am just at a loss as to what I am missing regarding CMD. Thanks!
Windows - Docker CMD does not execute
```
docker run --name "ContainerID" .....
```

See here for more details.
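A sketch of how a fixed container name makes the stop/replace cycle scriptable (`myapp` and `myimage` are placeholders):

```bash
# redeploy: stop and remove the old container by its fixed name, then start the new image
docker stop myapp || true
docker rm myapp || true
docker run -d --name myapp myimage:latest
```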
I have the following scenario: continuous delivery with Docker.

- Build the image from a GitHub repository
- Stop the currently running container
- Start the container again with the newer image

For stopping the container, I need the container ID. Can I assign a reusable tag to a container once I start it, so I know which container to stop once a newer image is ready for deployment?
Tag a docker container?
Switching to aufs storage seemed to resolve it. I used a base image box from Phusion, which seems to be optimized for docker.
Every time I try building an image with docker or fig (doesn't matter which one), I randomly get:

```
Cannot start container : Error getting container from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-262151-' on '/var/lib/docker/devicemapper/mnt/': no such file or directory
```

The weird thing is, if I re-run it, it usually won't produce the same error. Note that I am running docker inside vagrant (ubuntu-trusty-64).
Docker building fails randomly with Error mounting
The `-XX:MaxRAM` option affects nothing but the default heap size. Memory used by a Java process (from the OS perspective) includes not only the Java heap, but also many other things. See this answer for details.
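A quick way to see what `-XX:MaxRAM` actually influences, and a sketch of bounding the process as a whole (`my-java-image` and the sizes are placeholders):

```bash
# MaxRAM only feeds the heap-sizing heuristic; print the resulting default heap
java -XX:MaxRAM=150m -XX:+PrintFlagsFinal -version | grep MaxHeapSize

# To bound the whole process, limit the container and set an explicit heap cap
docker run -m 256m my-java-image \
    java -Xmx150m -XX:+UseSerialGC -jar application.jar
```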
This question already has answers here: Setting -XX:MaxRam (3 answers). Closed 3 years ago.

I run a java application with the following parameters:

```bash
#!/bin/bash
export JVM_OPTS="-XX:MaxRAM=150m"
export JVM_OPTS="$JVM_OPTS -XX:+UseSerialGC"

java $JVM_OPTS -jar application.jar
```

`htop` shows:

```
VIRT=475M  RES=238M  SHR=4880  MEM%=24.1
```

As I understand it, I need to look at the RES value. But in this case, it greatly exceeds `-XX:MaxRAM`. I expected that an OutOfMemoryException would happen in this case. What am I doing wrong? How do I limit the memory of a java application for a container? Am I incorrectly looking at the used process memory? I want to minimize the used RAM. OS: CentOS 7.
How JVM -XX:MaxRAM option can be correctly used? [duplicate]
You're getting a "no such file or directory" error, and it looks like that's the truth.

The Dockerfile sets:

```dockerfile
ARG DEPENDENCY=target/dependency
```

And then attempts a COPY operation:

```dockerfile
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
```

If you resolve `${DEPENDENCY}`, that COPY command looks like:

```dockerfile
COPY target/dependency/BOOT-INF/lib /app/lib
```

And there is no `target` directory in the repository. Maybe this is something you're supposed to create by following the tutorial? From that document:

> This Dockerfile has a DEPENDENCY parameter pointing to a directory where we have unpacked the fat jar. If we get that right, it already contains a BOOT-INF/lib directory with the dependency jars in it, and a BOOT-INF/classes directory with the application classes in it. Notice that we are using the application’s own main class hello.Application (this is faster than using the indirection provided by the fat jar launcher).
Building the docker image fails on a COPY step with "no such file or directory". I am using the hello world example from Spring, building from openjdk:8-jdk-alpine.

- `RUN echo ${PWD}` prints /
- `RUN ls` prints a set of normal directories (/usr, /var, etc.) but no project files are present

Why is docker not using the WORKING directory?

```dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]
```

The files to copy are prepared by gradle, and I can confirm that they are present:

```groovy
task unpack(type: Copy) {
    dependsOn bootJar
    from(zipTree(tasks.bootJar.outputs.files.singleFile))
    into("build/dependency")
}
```

I am running `docker build .`

The docker gradle task:

```groovy
docker {
    name "${project.group}/${bootJar.baseName}"
    copySpec.from(tasks.unpack.outputs).into("dependency")
    buildArgs(['DEPENDENCY': "dependency"])
}
```
Docker COPY no such file or directory
This is because you are telling the container to run the `apk` command when it starts; this completes and exits with a valid exit code 0.

To get it to run that `apk` command and still use the php container, you need to extend the php image with your own build to create your own 'custom image' of the base php image (kinda like extending a PHP class), and then run the `apk add` as part of the Dockerfile.

This is reasonably easy to do, and your Dockerfile would look something like:

```dockerfile
FROM php:7.3-fpm-alpine3.9
RUN apk --update add php7-mysqli
```

You can save this as `./php/Dockerfile`. Then update your `docker-compose.yml` file to say:

```yaml
...
php:
  build: ./php
  volumes:
  ...
```

removing the `command:` section.

This would then, upon `docker-compose up`, build your extended image with the `apk add` baked in as an extra layer, and continue running the standard php command that the original image provides.

Here is the documentation on the `build:` directive, as there are quite a few other cool things you can do with it, like specifying the Dockerfile if you don't want to put it into a subdirectory, and providing the `context:` should you wish to bake files into your new image: https://docs.docker.com/compose/compose-file/#build
I'm relatively new to docker and docker-compose, so I made this file:

```yaml
version: '3'
services:
  web:
    image: nginx:latest
    ports:
      - "8050:80"
    volumes:
      - ./code:/code
      - ./site.conf:/etc/nginx/conf.d/default.conf
    links:
      - php
  php:
    image: php:7.3-fpm-alpine3.9
    command: apk --update add php7-mysqli
    volumes:
      - ./code:/code
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: example
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
```

For some reason the line

```yaml
command: apk --update add php7-mysqli
```

stops the php container for no apparent reason; it just prints

```
dock_php_1 exited with code 0
```

Thus my web container also stops and the service doesn't work. What could be the core of the problem and how do I fix it?
PHP Docker container exits for no reason
Docker provides an isolation layer, and one of the major goals of Docker is to hide details of the host's hardware from containers. The easiest, most appropriate way to query low-level details of the host's hardware is from a root shell on the host, ignoring Docker entirely.

The actual mechanism of this is restricting Linux capabilities. capabilities(7) documents that you need `CAP_SYS_RAWIO` to access `/dev/mem`, so in principle you can launch your container with `--cap-add SYS_RAWIO`. You might need other capabilities and/or device access to make this actually work, because Docker is hiding the details of what you're trying to access as a design goal.
I am trying to run the dmidecode command in my docker container:

```
docker run --device /dev/mem:/dev/mem -it jin/ubu1604
```

However, it claims that there is no permission:

```
root@bd1062dfd8ab:/# dmidecode
# dmidecode 3.0
Scanning /dev/mem for entry point.
/dev/mem: Operation not permitted
root@bd1062dfd8ab:/# ls -l /dev
total 0
crw--w---- 1 root tty  136, 0 Jan  7 03:21 console
lrwxrwxrwx 1 root root     11 Jan  7 03:20 core -> /proc/kcore
lrwxrwxrwx 1 root root     13 Jan  7 03:20 fd -> /proc/self/fd
crw-rw-rw- 1 root root   1, 7 Jan  7 03:20 full
crw-r----- 1 root kmem   1, 1 Jan  7 03:20 mem
drwxrwxrwt 2 root root     40 Jan  7 03:20 mqueue
crw-rw-rw- 1 root root   1, 3 Jan  7 03:20 null
lrwxrwxrwx 1 root root      8 Jan  7 03:20 ptmx -> pts/ptmx
drwxr-xr-x 2 root root      0 Jan  7 03:20 pts
crw-rw-rw- 1 root root   1, 8 Jan  7 03:20 random
drwxrwxrwt 2 root root     40 Jan  7 03:20 shm
lrwxrwxrwx 1 root root     15 Jan  7 03:20 stderr -> /proc/self/fd/2
lrwxrwxrwx 1 root root     15 Jan  7 03:20 stdin -> /proc/self/fd/0
lrwxrwxrwx 1 root root     15 Jan  7 03:20 stdout -> /proc/self/fd/1
crw-rw-rw- 1 root root   5, 0 Jan  7 03:20 tty
crw-rw-rw- 1 root root   1, 9 Jan  7 03:20 urandom
crw-rw-rw- 1 root root   1, 5 Jan  7 03:20 zero
```

This confused me, since I was able to run `dmidecode -t system` on the host (ubuntu 14.04) fine. I even followed some advice and set the permission on the dmidecode executable:

```
setcap cap_sys_rawio+ep /usr/sbin/dmidecode
```

It still doesn't work. Any ideas?

UPDATE: Based on David Maze's answer, the command should be

```
run --device /dev/mem:/dev/mem --cap-add SYS_RAWIO -it my/ubu1604a
```

Do this only when you trust what runs in the container, for example if you are testing an installation procedure on a pristine OS.
Can't run dmidecode on docker container
You need:

- the nvidia driver
- a recent version of docker-ce (19.03 or newer)
- the nvidia container toolkit, also called "nvidia docker"

You generally do not need the CUDA toolkit or cuDNN installed on the base machine. Those can be in the container for use in the container. See here for specific install instructions.
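Once those three pieces are in place, a common smoke test is to run `nvidia-smi` from a CUDA base image; the exact image tag below is only an example and may need adjusting to whatever is currently published:

```bash
# should print the same GPU table that nvidia-smi prints on the host
docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
```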
Closed. This question does not meet Stack Overflow guidelines and is not currently accepting answers, as it does not appear to be about a specific programming problem. Closed 3 years ago.

I am setting up my environment for machine learning development and I thought of using Docker. Do NVIDIA CUDA and/or cuDNN need to be installed on my machine, or does it work just by them existing in the docker container? Thanks in advance for your answers!
Does NVIDIA Docker need CUDA installed? [closed]
This is a classic docker issue. The process you start must execute in the foreground, otherwise the container simply stops. So, to achieve that, the following can be used in your `startservice` script:

```bash
#!/usr/bin/sh
service httpd start

# Tail the log file
tail -f /var/log/httpd/access_log

# Alternatively, you can tail any file or even /dev/null
#tail -f /dev/null
```

Note that there are also other ways of fixing this. One way is to use supervisord, which keeps your processes alive. The supervisord approach is cleaner and less hackish than the `tail -f` approach, and I would personally prefer that alternative.

Another alternative is simply to not start httpd as a service, but instead provide the `-DFOREGROUND` parameter. This will make httpd stay attached to the shell (and not fork off to a background process):

```
/usr/sbin/httpd -DFOREGROUND
```

For more info on httpd in foreground mode, check this question.
OK, I have exhausted pretty much all threads and articles, but still can't get my apache webserver to run in standalone mode in a CentOS Docker container. Here is my simplified Dockerfile:

```dockerfile
# install apache
RUN yum -y install httpd

# start the webserver
ADD startservice /startservice
RUN chmod 775 /startservice

EXPOSE 80
CMD ["/startservice"]
```

My startservice script just has:

```bash
#!/usr/bin/sh
service httpd start
```

I can build fine, but can't seem to run the container in daemon/standalone mode. How do I do that? I am using this to run the container in standalone mode:

```
docker run -p 80:80 -d -t webserver
```

I have to log onto the container and start the service for the webserver to run:

```
docker run -p 80:80 -i -t webserver bash
service httpd start
```
running apache in docker
OK, so I'm using a Mac as a development machine; I should have mentioned that, for people struggling with the same problem. The Mac's docker host IP is always changing. Changing my .env postgres URL to `postgresql://username:[email protected]:5432/dbname` seems to do the trick.
I'm trying to connect to postgres running locally from a python script that's running in a container. I'm setting it up like this because in the future, once I deploy, I'll be using a managed database service that spins up postgres.

Ideally I'm able to define a postgres URL in the `.env`, docker-compose uses the env to set the environment variables, and the application reads the postgres URL. Is this the best approach, or should I have a postgres container? What is the best approach to achieve the flow I just described?

I tried passing POSTGRES_URL in the .env using localhost, the public IP and the local IP, e.g. `postgresql://username:password@localhost:5432/dbname`

```yaml
version: "3.7"
services:
  app:
    build:
      context: ./
      dockerfile: Dockerfile
    volumes:
      - "./:/usr/src/app"
    environment:
      - ENV=${ENV}
      - LOGS=${LOGS}
      - AWS_REGION=${AWS_REGION}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - POSTGRES_URL=${POSTGRES_URL}
```

Expected: connection to the local database is successful.

Actual: Is the server running on host <"ip"> and accepting TCP/IP connections on port 5432?
Can't connect python app container to local postgres
Well, you have to create a Dockerfile and build an image off of it. There are best practices regarding docker image creation that you need to apply, and there are also language-specific best practices. Just to give you some ideas about the process:

```dockerfile
FROM python:3.7.1-alpine3.8                 # base image
ADD . /myapp                                # add project files
WORKDIR /myapp
RUN apk add dep1 dep2                       # put your dependency packages here
RUN pip-3.7 install -r requirements.txt     # install pip packages
RUN pip-3.7 install .
CMD myapp -h
```

Now build the image and push it to some public registry:

```
sudo docker build -t /myapp:0.1 .
```

Users can then just pull the image and use it:

```
sudo docker run -it myapp:0.1 myapp.py
```
I have created a Python command line application that is available through PyPI / pip install. The application has native dependencies. To make the installation less painful for Windows users, I would like to create a Dockerised version of this command line application. What are the steps to easily convert a `setup.py` with an entry point and a `requirements.txt` into a containerised command line application? Is there any tooling around this, or should I just write a `Dockerfile` by hand?
Containerising Python command line application
Your Angular frontend is making requests to the Spring backend from outside the frontend container: it makes them from inside your browser. That's why the backend also needs to be exposed.

Second, you don't need `links`. The linking will be done automatically, since both services are in the same network. Here is an updated config that uses networks instead:

```yaml
version: "3"
services:

  ### DATABASE ###
  db:
    image: postgres:latest
    environment:
      - POSTGRES_PASSWORD=envpass
      - POSTGRES_USER=envuser
      - POSTGRES_DB=database
    # Only add the ports here, if you want to access the database using an external client.
    # ports:
    #   - "5433:5432"
    networks:
      - backend

  ### BACKEND ###
  backend:
    image: angularback
    ports:
      - "8082:8080"
    depends_on:
      - db
    networks:
      - backend
      - frontend

  ### FRONTEND ###
  frontend:
    image: angularfront
    ports:
      - "8084:80"
    depends_on:
      - backend
    networks:
      - frontend

networks:
  backend:
  frontend:
```

When not running in production, I'd also recommend binding the ports directly to the host interface (127.0.0.1), to prevent others on your network from accessing the port on your machine, like this:

```yaml
ports:
  - "127.0.0.1:8084:80"
```
This is just a question about theory; my app is running perfectly.

So, I have 3 services running with docker-compose: a postgres database, a Spring Boot backend and an Angular frontend. From what I know, docker containers can access ports of other docker containers without the need to expose them, so there is no need to expose nor bind the ports, because they are all containers and can access each other with the default bridge network mode (that's what I learned, I don't know if this is right). I only need to expose the port of the frontend container so I can access it from my localhost.

The thing is: I can access the database from the backend (backend -> database) without exposing any ports, but for the frontend (frontend -> backend), using Angular with nginx, it only works with the backend port exposed. Why?

docker-compose.yml:

```yaml
version: "3"
services:

  ### DATABASE ###
  db:
    image: postgres:latest
    container_name: mydb
    network_mode: bridge
    environment:
      - POSTGRES_PASSWORD=envpass
      - POSTGRES_USER=envuser
      - POSTGRES_DB=database
    # It works without exposing
    # expose:
    #   - 5432
    # ports:
    #   - 5433:5432

  ### BACKEND ###
  backend:
    image: angularback
    container_name: backend
    network_mode: bridge
    expose:
      - 8080
    ports:
      - 8082:8080
    depends_on:
      - db
    links:
      - db

  ### FRONTEND ###
  frontend:
    image: angularfront
    container_name: frontend
    network_mode: bridge
    expose:
      - 80
    ports:
      - 8084:80
    depends_on:
      - backend
    links:
      - backend
```
Why a angular container needs a exposed port to connect?
The output from `ldd` suggests that `chromedriver` is built against glibc (the GNU standard C library), which isn't compatible with vanilla Alpine, which uses musl libc. To fix this, try installing the Alpine-compatible version of chromedriver, available in the Alpine repositories, using `apk add chromium-chromedriver`: https://pkgs.alpinelinux.org/package/v3.9/community/x86_64/chromium-chromedriver
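Applied to the Dockerfile in the question, a sketch would be to drop the downloaded chromedriver entirely and rely on the Alpine packages (the `/usr/bin/chromedriver` path is, as far as I know, where the Alpine package installs the binary):

```dockerfile
# replace the manually downloaded glibc chromedriver with Alpine's own musl build
RUN apk --update --no-cache add chromium chromium-chromedriver
# chromedriver should then be at /usr/bin/chromedriver, matching the chromium package
```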
I'm trying to run Selenium for Python on Docker Alpine Linux and getting the "Message: 'chromedriver' executable needs to be in PATH" error, because it thinks the file doesn't exist. I have tried everything I can find in other answers, but it still won't launch.

Here's what I tried so far:

- Added the folder to PATH and PYTHONPATH.
- Tried specifying the path to chromedriver when I get the driver
- Tried specifying the path to chromium when I get the driver
- Made sure chromium-browser launches with similar flags
- chmod +x on chromedriver
- chmod 777 on chromedriver

See error.

Update: I'm adding these packages in the Dockerfile:

```dockerfile
RUN apk --update --no-cache add\
    alpine-sdk\
    autoconf\
    automake\
    bash\
    binutils-gold\
    build-base\
    curl\
    dumb-init\
    g++\
    gcc\
    gcompat\
    git\
    gnupg\
    gzip\
    jpeg\
    jpeg-dev\
    libc6-compat\
    libffi\
    libffi-dev\
    libpng\
    libpng-dev\
    libstdc++\
    libtool\
    linux-headers\
    make\
    mysql\
    mysql-client\
    mysql-dev\
    mesa-gles\
    nasm\
    nodejs\
    nss\
    openjdk8-jre\
    openssh-client\
    paxctl\
    python3\
    python3-dev\
    sudo\
    tar\
    unzip\
    wget\
    chromium
```

And the shell script I'm getting chromedriver with:

```bash
#!/bin/bash
LATEST_CHROMEDRIVER=$(curl https://chromedriver.storage.googleapis.com/LATEST_RELEASE)
curl -L https://chromedriver.storage.googleapis.com/$LATEST_CHROMEDRIVER/chromedriver_linux64.zip >> chromedriver.zip
mv -f chromedriver.zip /usr/local/bin/chromedriver.zip
unzip /usr/local/bin/chromedriver.zip -d /usr/local/bin
chmod a+x /usr/local/bin/chromedriver
sudo ln -s /usr/local/bin/chromedriver /usr/bin/chromedriver
rm /usr/local/bin/chromedriver.zip
```
Selenium Python: No such file or directory: '/usr/local/bin/chromedriver' but it exists and is added to path
Docker Toolbox would be using VirtualBox. The answer you are referring to is likely using Docker for Windows with Hyper-V: see "Install Docker for Windows":

> Docker for Windows requires Microsoft Hyper-V to run. After Hyper-V is enabled, VirtualBox will no longer work, but any VirtualBox VM images will remain. VirtualBox VMs created with docker-machine (including the default one typically created during Toolbox install) will no longer start. These VMs cannot be used side-by-side with Docker for Windows. However, you can still use docker-machine to manage remote VMs.
I am looking to use files on my Windows computer in a Docker container. This is explained here. My question relates to how to get to the Docker settings dialogue. I am using Docker Toolbox on Windows 10. When I right-click on the Docker icon in the task bar, I get three options: Docker Quickstart Terminal; Unpin from taskbar; and Close the window. I am not getting a settings dialogue box. How can I see that option?
Mounting Windows drives to access from Docker
You can download the Java tar.gz, unpack it and set the environment variables. Below is a sample implementation in a Dockerfile:

```dockerfile
FROM python:3.6-slim

RUN apt-get update
RUN apt-get install -y apt-utils build-essential gcc

ENV JAVA_FOLDER java-se-8u41-ri
ENV JVM_ROOT /usr/lib/jvm
ENV JAVA_PKG_NAME openjdk-8u41-b04-linux-x64-14_jan_2020.tar.gz
ENV JAVA_TAR_GZ_URL https://download.java.net/openjdk/jdk8u41/ri/$JAVA_PKG_NAME

RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/* && \
    apt-get clean && \
    apt-get autoremove && \
    echo Downloading $JAVA_TAR_GZ_URL && \
    wget -q $JAVA_TAR_GZ_URL && \
    tar -xvf $JAVA_PKG_NAME && \
    rm $JAVA_PKG_NAME && \
    mkdir -p /usr/lib/jvm && \
    mv ./$JAVA_FOLDER $JVM_ROOT && \
    update-alternatives --install /usr/bin/java java $JVM_ROOT/$JAVA_FOLDER/bin/java 1 && \
    update-alternatives --install /usr/bin/javac javac $JVM_ROOT/$JAVA_FOLDER/bin/javac 1 && \
    java -version
```
Could someone help me? I'm starting from the following Dockerfile:

```dockerfile
FROM python:3.6-slim
RUN apt-get update
RUN apt-get install -y apt-utils build-essential gcc
```

And I would like to add OpenJDK 8. Thanks.
Dockerfile from python:3.6-slim add jdk8
As it turns out, gitlab-runner was working as expected. What is quite confusing, though, is that it does some manipulation of the image it boots up: the entrypoint is overridden, and the folder the repository is checked out into is mounted into the container, with the working directory pointing to it. So while it is possible to run your own images as containers, you have to keep in mind that you might need to change the folder before running any commands.
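A sketch of what that means for the job in the question (the `cd /` is only an illustration of moving away from the mounted checkout before inspecting the image's own filesystem):

```yaml
docker_execution_test:
  image: debian:9
  script:
    - pwd        # prints the checkout dir, e.g. /builds/project-0, not the image's workdir
    - cd /       # change directory explicitly before running image-specific commands
    - ls
```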
I created a minimal gitlab CI script to verify this error:

```yaml
docker_execution_test:
  image: debian:9
  script:
    - pwd
    - ls
```

The output I would expect is this:

```
db@theia:~/git/docker_test (master*)$ docker run -it --rm debian:9 pwd
/
db@theia:~/git/docker_test (master*)$ docker run -it --rm debian:9 ls
bin   dev  home  lib64  mnt  proc  run   srv  tmp  var
boot  etc  lib   media  opt  root  sbin  sys  usr
```

However, the output when executed through gitlab-runner is this:

```
db@theia:~/git/docker_test (master)$ gitlab-runner exec docker docker_execution_test
Runtime platform                     arch=amd64 os=darwin pid=49585 revision=3afdaba6 version=11.5.0
WARNING: You most probably have uncommitted changes.
WARNING: These changes will not be tested.
Running with gitlab-runner 11.5.0 (3afdaba6)
Using Docker executor with image debian:9 ...
Pulling docker image debian:9 ...
Using docker image sha256:4879790bd60d439cfe39c063660eef7af525d5f6f1cbb701a14c7cfc11cbfcf7 for debian:9 ...
Running on runner--project-0-concurrent-0 via theia.local...
Cloning repository...
Cloning into '/builds/project-0'...
done.
Checking out bb973ec4 as master...
Skipping Git submodules setup
$ pwd
/builds/project-0
$ ls
README.md
Job succeeded
```

What the job is listing is the content of the special gitlab container that's used throughout the build. Why is the container not created as expected? What am I missing here?
Gitlab runner not executing jobs docker image
Added a proxy option and now it works at localhost:3000:

```js
const gBrowsersync = function(done) {
  browsersync.init({
    open: false,
    proxy: "localhost"
  });
  done();
};
```

Not sure why I had to add localhost as a proxy though. If anyone can provide a brief explanation, I would appreciate it.
I'm trying to run browsersync in my docker container, but I only get the directory listing when I navigate to localhost:3000. I'm trying to run a WordPress instance, and I'm using Gulp as the task runner. localhost:3001 brings up the browsersync UI successfully, and viewing localhost (no port) brings up the homepage. Here are the relevant code snippets, I think.

Gulpfile BrowserSync settings:

```js
const gBrowsersync = function(done) {
  browsersync.init({
    open: false,
  });
  done();
};
```

Docker-compose:

```yaml
version: "3.7"

services:
  db:
    image: mysql:5.7
    container_name: db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - db_data:/var/lib/mysql
    networks:
      - back

  wordpress:
    build: .
    image: ws-wordpress
    container_name: wp
    depends_on:
      - db
    restart: always
    ports:
      - "80:80"
      - "3000:3000"
      - "3001:3001"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: password
    volumes:
      - ./wp-content/:/var/www/html/wp-content/
      - ./sw.js:/var/www/html/sw.js
      - ./manifest.json:/var/www/html/manifest.json
      - ./package.json:/var/www/html/package.json
      - ./gulpfile.babel.js:/var/www/html/gulpfile.babel.js
      - ./webpack.config.js:/var/www/html/webpack.config.js
    networks:
      - back

networks:
  back:

volumes:
  db_data:
```

Dockerfile:

```dockerfile
FROM wordpress
RUN apt-get update -y
RUN apt-get install gnupg -y
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
RUN apt-get install nodejs -y
RUN apt-get install nano -y
```
Docker and BrowserSync
... a python script for ...Just run it; don't package it into a Docker container. That's doubly true if its inputs and outputs are both local files, and it expects to do its thing and exit promptly: the filesystem isolation Docker provides works against you here.This is, of course, technically possible. Depending on how exactly the support program container is set up, the "command" at the end ofdocker runwill be visible to the Python script insys.argv, like any other command-line options. You can use adocker run -voption to publish parts of the host's filesystem into the container. So you might be able to run something likedocker run --rm -v $PWD/files:/data \ converter_image \ python convert.py /data/in.txt /data/out.pklwhere all of the/datapaths are in the container's private filesystem space.There are two big caveats:The host paths in thedocker run -voption are pathsspecifically on the physical host. If your HTTP service is also running in a container you need to know somehost-systempath you can write to that's also visible in your container filesystem.Running anydockercommand at all effectively requires root privileges. If any of the filenames or paths involved are dynamic,shell injection attacks can compromise your system. Be very careful with how you run this from a network-accessible script.
Suppose the following setup:Website written in php / laravelUser uploads a file (either text / doc / pdf)We have a docker container which contains a python script for converting text into a numpy array.I want to take this uploaded data and pass it to the python script.I can't find anything which explains how to pass dynamically generated inputs into a container.Can this be done by executing a shell script from inside the laravel app which contains the uploaded file as a variable specified in the dockerfile's ENTRYPOINT?Are there any other ways of doing this?
Pass argument to python script running in a docker container
Docker's standardnetworking configurationpicks a container subnet for you out of itschosen defaults. As long as it doesn't conflict with any interfaces on your host, Docker is okay with it.Then, Docker inserts an iptables MASQUERADE rule that allows containers to talk to the external world using the host's default interface.Kubernetes' 3 requirements are violated by the fact that subnets are chosen only based on addresses in use on the host, which forces the requirement to NAT all container traffic using the MASQUERADE rule.Consider the following 3-host Docker setup (a little contrived to highlight things):Host 1:eth0: 10.1.2.3docker0: 172.17.42.1/16container-A: 172.17.42.2Host 2:eth0: 10.1.2.4docker0: 172.17.42.1/16container-B: 172.17.42.2Host 3:eth0: 172.17.42.2docker0: 172.18.42.1Let's saycontainer-Bwants to access an HTTP service on port 80 ofcontainer-A. You can get docker to exposecontainer-A's port 80 somewhere onHost 1. Thencontainer-Bmight make a request to 10.1.2.3:43210. This will be received oncontainer-A's port 80, but will look like it came from some random port on 10.1.2.4 because of the NAT on the way out ofHost 2. This violates theall containers communicate without NATand thecontainer sees same IP as othersrequirements. Try to accesscontainer-A's service directly fromHost 2and you get yournodes can communicate with containers without NATviolation.Now if either of those containers want to talk toHost 3, they're SOL (just a general argument for being careful with the auto-assigned docker0 subnets).Kubernetes approach on GCE/AWS/Flannel/... is toassigneach host VM a subnet carved out of a flat private network. No subnets overlap with VM addresses or with each other. This lets containers and VMs communicate NATlessly.
I'm reading theKubernetes "Getting Started from Scratch" Guideand have arrived at the dreadedNetwork Section, where they state:Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies): * all containers can communicate with all other containers without NAT * all nodes can communicate with all containers (and vice-versa) without NAT * the IP that a container sees itself as is the same IP that others see it asMy first source of confusion is:How is thisdifferentthan the "standard" Docker model?How is Docker different w.r.t. those 3 Kubernetes requirements?The article then goes on to summarize how GCE achieves these requirements:For the Google Compute Engine cluster configuration scripts, we use advanced routing to assign each VM a subnet (default is /24 - 254 IPs). Any traffic bound for that subnet will be routed directly to the VM by the GCE network fabric. This is in addition to the "main" IP address assigned to the VM, which is NAT'ed for outbound internet access. A linux bridge (called cbr0) is configured to exist on that subnet, and is passed to docker's --bridge flag.My question here is:Which requirement(s) from the 3 above does this paragraph address? More importantly,howdoes it achieve the requirement(s)?I guess I just don't understand how 1-subnet-per-VM accomplishes: container-container communication, node-container communication, and static IP.And, as a bonus/stretch concern: Why doesn't Marathon suffer from the same networking concerns as what Kubernetes is addressing here?
Setting up the network for Kubernetes
Hostname is not used by docker's built in DNS service. It's a counterintuitive exception, but since hostnames can change outside of docker's control, it makes some sense. Docker's DNS will resolve:the container idcontainer nameany network aliases you define for the container on that networkThe easiest of these options is the last one which is automatically configured when running containers with a compose file. The service name itself is a network alias. This lets you scale and perform rolling updates without reconfiguring other containers.You need to be on a user created network, not something like the default bridge which has DNS disabled. This is done by default when running containers with a compose file.Avoid using links since they are deprecated. And I'd only recommend adding host entries for external static hosts that are not in any DNS, for container to container, or access to other hosts outside of docker, DNS is preferred.
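As a sketch (the service, network and alias names here are made up, not taken from the question), a compose file where one container reaches the other by service name or by an extra network alias on a user-created network:

version: "3"
services:
  db:
    image: postgres:12
    networks:
      backend:
        aliases:
          - database   # extra alias; the service name "db" resolves as well
  app:
    image: alpine:3.12
    command: ping -c 1 database
    networks:
      - backend
networks:
  backend: {}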
I've had db and server container, both running in the same network. Can ping db host by its container id with no problem. When I set a hostname for db container manually (-h myname), it had an effect ($ hostnamereturns set host), but I can't ping that hostname from another container in the same network. Container id still pingable. Although it works with no problem in docker compose. What am I missing?
Can't resolve set hostname from another docker container in same network
EXPOSEis there to allow inter-containers communication (within the same docker daemon), with thedocker run --linkoption.Port mapping is there to map EXPOSEd ports to the host, to allow client-to-container communication. So you need--publish.See also "Difference between “expose” and “publish” in docker".See also an example with "Advanced Usecase with Docker: Connecting Containers"Make sure though that the ip is the right one ($(docker-machine ip default)).If you are using a VM (meaning, you are not using docker directly on a Linux host, but on a Linux VM withVirtualBox), make sure the mapped ports 7474 and 8000 areport forwarded from the host to the VM.VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,7474,,7474" VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,8000,,8000"In the OP's case, this is using neo4j: see "Neo4j with Docker", based on theneo4j/neo4j/image andits Dockerfile:ENTRYPOINT ["/docker-entrypoint.sh"] CMD ["neo4j"]It is not meant to be used for installinganotherservice (like nodejs), where theCMD cd linkurious.js && npm startwould completely override theneo4jbase imageCMD(meaningneo4jwould never start).It is meant to be run on its own:# interactive with terminal docker run -i -t --rm --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j # as daemon running in the background docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4jAnd then used by another image, with a--link neo4j:neo4jdirective.
I'm building an image for github's Linkurious project, based on an image already in the hub for the neo4j database. the neo image automatically runs the server on port 7474 and my image runs on port 8000.when I run my image I publish both ports (could I do this with EXPOSE?):docker run -d --publish=7474:7474 --publish=8000:8000 linkuriousbut only my server seems to run. if I hithttp://[ip]:7474/I get nothing. is there something special I have to do to make sure they both run?* Edit I *here's my Dockerfile:FROM neo4j/neo4j:latest RUN apt-get -y update RUN apt-get install -y git RUN apt-get install -y npm RUN apt-get install -y nodejs-legacy RUN git clone git://github.com/Linkurious/linkurious.js.git RUN cd linkurious.js && npm install && npm run build CMD cd linkurious.js && npm start* Edit II *to perhaps help explain my quandary, I've askeda different question
Running 2 services
The followingdocker composefile will startup Drupal connected to another container running Mysqldb: image: mysql environment: - MYSQL_ROOT_PASSWORD=letmein - MYSQL_DATABASE=drupal - MYSQL_USER=drupal - MYSQL_PASSWORD=drupal volumes: - /var/lib/mysql web: image: drupal links: - db:mysql ports: - "8080:80" volumes: - /var/www/html/sites - /var/www/privateNote that the drupal container usesdocker links. This will create a /etc/hosts entry called "mysql". Use this instead of "localhost" when running the drupal install screens.NoteThe docker compose file syntax has changed since this answer was originally drafted.Here is the updated syntaxversion: '2' services: mysql: image: mysql environment: - MYSQL_ROOT_PASSWORD=letmein - MYSQL_DATABASE=drupal - MYSQL_USER=drupal - MYSQL_PASSWORD=drupal volumes: - /var/lib/mysql web: image: drupal depends_on: - mysql ports: - "8080:80" volumes: - /var/www/html/sites - /var/www/private
I'm fairly new to docker, and I've just been going through the CMS's to see how easy they are to configure. So far, Wordpress and Joomla check out.When I run the drupal container linked to mysql, I get to the drupal installation screen and where it says to connect the DB, and I use my root credentials and db host being 'localhost', and receive errors trying to connect. I've attached an image to show you the output.drupal-config-db-output-errorThe error I get :Failed to connect to your database server. The server reports the following message: SQLSTATE[HY000] [2002] No such file or directory.Any help on this would be great. I tried to see if I could access the physical the volume with the config files, but I couldn't find them using Kitematic.Thank you!
Docker : Drupal container linked to mysql container can't connect to mysql during Drupal installation
If you want to have that behavior in Kubernetes you can use ahostPathvolume.Essentially you specify it in your pod spec and then the volume is mounted on the node where your pod runs and then the file should be there in the node after the pod exits.apiVersion: v1 kind: Pod metadata: name: test-pd spec: containers: - image: image:cli name: test-container volumeMounts: - mountPath: /home/dir name: test-volume volumes: - name: test-volume hostPath: path: /out type: Directory
I have a docker image that uses a volume to write files:docker run --rm -v /home/dir:/out/ image:cli argswhen I try to run this inside a pod the container exit normally but no file is written.I don't get it.The container throw errors if it does not find the volume, for example if I run it without the-voption it throws:Unhandled Exception: System.IO.DirectoryNotFoundException: Could not find a part of the path '/out/file.txt'.But I don't have any error from the container. It finishes like it wrote files, but files do not exist.I'm quite new to Kubernetes but this is getting me crazy.Does kubernetes prevent files from being written? or am I missing something obvious?The whole Kubernetes context is managed by GCP composer-airflow, if it helps...docker -v: Docker version 17.03.2-ce, build f5ec1e2
Unable to write file from docker run inside a kubernetes pod
docker exec is mainly for debugging purposes. The primary use case of docker exec is debugging running containers; docker exec is basically for "exceptional" cases. When you want to execute a command (here a python program), it is best to run a container just for that command: alias dr='docker run -v /home/ganaraj/nndetect:/detect -w /detect/prediction -it --rm opecv3' That way, without having python installed on your host, you could use determined_rosalind simply by typing: dr ./prediction 1.png That would launch a transient container to run the python program, exit and be removed (--rm option).
I am attempting to run opencv through docker container. I have built the image and while running the container directlydocker run -v /home/ganaraj/nndetect:/detect -ti opecv3 bashand accessing the bash$>cd /detect/prediction $>prediction 1.jpg 0I do get the output I am expecting ( the final 0 ).But I would actually wish to run this as a command line program.I have tried bothdocker run -v /home/ganaraj/nndetect:/detect -ti opecv3 /detect/prediction/prediction 1.pngdocker run -v /home/ganaraj/nndetect:/detect -ti opecv3 /detect/prediction/prediction /detect/prediction/1.pngBut both of these dont provide me the output I am expecting from this.What would be the right way to do this, so that I can run this app easily like a command line tool ( through docker ) and get the output back ?I have also trieddocker run -v /home/ganaraj/nndetect:/detect -it -d opecv3 bin/bashand then :docker exec -it 3d618d63316c /detect/prediction/prediction /detect/prediction/1.pngbut still I get the same blank response.Client: Version: 1.8.3 API version: 1.20 Go version: go1.4.2 Git commit: f4bf5c7 Built: Mon Oct 12 05:37:18 UTC 2015 OS/Arch: linux/amd64 Server: Version: 1.8.3 API version: 1.20 Go version: go1.4.2 Git commit: f4bf5c7 Built: Mon Oct 12 05:37:18 UTC 2015 OS/Arch: linux/amd64
passing file to docker command
Check out the Git repository outside the Docker build process; ideally, put theDockerfilein the root directory of the repository itself.COPYthe contents of the repository into the image.There are two big problems with trying to dogit cloneinside a Dockerfile:If you have a private repository (which you often do) you need to get the credentials into Docker space to do the clone, and once you do, it's trivial for anyone to get them back out viadocker historyordocker run.docker buildwill remember that it's already run a step in a previous build cycle, and so it won't want to repeat thegit clonestep, even if the upstream repository has changed.It's also helpful for occasional testing to be able to build an image out of something that's not checked in (yet) and having thegit clonehard-coded in the Dockerfile keeps you from ever being able to do that.
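A rough sketch of what that looks like for the Dockerfile in the question, assuming the Dockerfile is moved to the repository root so the checked-out working tree is the build context (the git clone then happens on the host or CI machine, never inside the Dockerfile):

FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY . .                                   # the repository contents, instead of RUN git clone
RUN dotnet restore -nowarn:msb3202,nu1503
RUN dotnet publish -c Release -o /app

On the host you would then run something like git clone ssh://user@gitserver/volume1/git/project && cd project && docker build ., so no credentials ever enter the image or its history.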
I have Git and Docker on a remote Linux machine. The source code of my project is in a bare repo. I need a way of making the source code from this repo available to Docker during the build process.Below is what I have now (which is basically the default template in VS 2017 for a Docker ASP.NET Core project).Q: How do I make the code from a bare repo available? Is clone the best option here? My attempts probably fail because of auth-issues but since the repo is on the same machine I assume it should be possible to access it straight away without using ssh in this case? Can I make this path visible/accessible to the Docker process somehow?FROM microsoft/aspnetcore:2.0 AS base WORKDIR /app EXPOSE 80 FROM microsoft/aspnetcore-build:2.0 AS build WORKDIR /src RUN git clone ssh://user@gitserver/volume1/git/project // fails RUN git clone /volume1/git/project // fails COPY Test.sln ./ COPY Test/Test.csproj Test/ RUN dotnet restore -nowarn:msb3202,nu1503 COPY . . WORKDIR /src/Test RUN dotnet build -c Release -o /app FROM build AS publish RUN dotnet publish -c Release -o /app FROM base AS final WORKDIR /app COPY --from=publish /app . ENTRYPOINT ["dotnet", "Test.dll"]
How to get source code from git repo using DockerFile
The docker run command supports most Dockerfile instructions, among them VOLUME. The VOLUME instruction creates a mount point with the specified name and marks it as holding externally mounted volumes from the native host or other containers. The docker run command initializes the newly created volume with any data that exists at the specified location within the base image. Usually this is called an anonymous volume. Anonymous volumes have no specific source, so when the container is deleted you can instruct the Docker Engine daemon to remove them (for example with the --rm flag). When --volume is given a single argument (a container path with no source), Docker creates an anonymous volume and mounts it at that path inside the container. When the host directory of a bind-mounted volume doesn't exist, Docker will automatically create this directory on the host for you.
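To make the behaviour visible, here is a small sketch using the command from the question (the container name and the inspect format string are my own):

docker run -d --name web -v ${PWD}:/app -v /app/node_modules -p 4201:4200 example:dev
docker inspect -f '{{ json .Mounts }}' web   # the /app/node_modules entry is a volume whose name is a random 64-character hash
docker volume ls                             # that anonymous volume shows up here with no friendly name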
On this pagehttps://mherman.org/blog/dockerizing-an-angular-app/,At some point in this tutorial there is this command to launch the container:$ docker run -v ${PWD}:/app -v /app/node_modules -p 4201:4200 --rm example:dev.I don't understand the-v /app/node_modulespart. What is the purpose of-vwhen there are no source and destination split by a colon mark?I've been reading official documentation:https://docs.docker.com/engine/reference/commandline/run/#mount-volume--v---read-only;https://docs.docker.com/storage/bind-mounts/;andhttps://docs.docker.com/storage/volumes/.I don't see an example and explanation aboutdocker run -v [some absolute path directory by itself] [container image name].I understand the behaviour of:docker run -v [the source, an absolute path on the host to map to a destination within the container]:[the destination, an absolute path in the container] [container image name];anddocker run -v [the source, a docker volume name to map to a destination within the container]:[the destination, an absolute path in the container] [container image name].But I don't get what is the expected behaviour of that:docker run -v [some absolute path directory by itself] [container image name].What is-v [some absolute path directory by itself], like-v /app/node_modules; how does it articulate between the host and the container?
'docker run' using mount volume option '-v' with single directory as parameter (no source and destination split with colon mark (":"))
Try providing an absolute path, instead of a relative path:-v /home/projects/package.json:/user/src/app/package.json
I have the following command:docker run --privileged=true -it --rm \ -w /usr/src/app \ -v ./package.json:/usr/src/app/package.json \ -v .bowerrc:/usr/src/app/.bowerrc \ -v ./bower.json:/usr/src/app/bower.json \ -v ./build/npm.tmp/node_modules:/usr/src/app/build/npm.tmp/node_modules \ -v ./build/npm.tmp/bignibou-client/src/bower_components:/usr/src/app/build/npm.tmp/bignibou-client/src/bower_components \ digitallyseamless/nodejs-bower-grunt bashI end up withpackage.jsonbeing adirectoryin the docker containerinstead of a file.root@c706711a7ad4:/usr/src/app# cat package.json/ cat: package.json/: Is a directoryHow can I sort this problem? What am I getting wrong with the syntax?edit:Using the advice from @manojlds works fine:Changing to-v $(pwd)/package.json:/usr/src/app/package.json \sorts out the issue.
Docker volume command line option mistaking files for directories
If you look at the Dockerfile for jenkinsci/blueocean, for example,1.23.2. You can see that the "jenkins" user is uid 1000 and gid 1000. It is these IDs that have to match for volume access, not the username.Rather than granting uid/gid 1000 access to /var/run/docker.sock on the host, perhaps it would be better to run the container as the user/group that has permission. You can check that withid -u rootandid -g docker, then use that with yourdocker runcommand, for example (assuming root uid is 0),docker run -u 0 .... See thedoc pagefor more examples of how to use-u/--user. If you're running as the same uid as root in the container, you probably won't have a problem, but if if that is a different id, you may run into issues as other uids might be missing necessary configuration to be able to run the Jenkins stuff correctly.If youreallywant to go the route of changing /var/run/docker.sock, then the answer would be to create a group with gid 1000 and add root to that group I guess.
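A sketch of what that can look like on the host (looking the gid up via stat is my own shortcut for the id commands mentioned above; only the relevant volumes are shown):

HOST_UID=$(id -u root)                          # normally 0
SOCK_GID=$(stat -c '%g' /var/run/docker.sock)   # gid that owns the socket, i.e. the docker group
docker container run \
  --name jenkins-blueocean \
  --user "$HOST_UID:$SOCK_GID" \
  --volume jenkins-data:/var/jenkins_home \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  jenkinsci/blueocean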
This question is different from the following questions:Docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sockBecause they didn't consider jenkins to be installed as docker container, here I don't have jenkins user to give that user access to this file.And also from this onedocker.sock permission deniedBecause I don't know which user I got this error for, Here the userroothas access to this file but the error happened again.Here's my problem:I want to run dockerjenkinsci/blueoceanimage using following command on ubuntu:docker container run \ --name jenkins-blueocean \ --rm \ --detach \ --publish 8181:8080 \ --publish 50000:50000 \ --volume jenkins-data:/var/jenkins_home \ --volume jenkins-docker-certs:/certs/client:ro \ --volume /var/run/docker.sock:/var/run/docker.sock \ jenkinsci/blueoceanAfter running jenkins on dokcer container when I use agent as follows:agent { docker { image 'maven:3-alpine' } }I got following error:Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/create?fromImage=maven&tag=3-alpine: dial unix /var/run/docker.sock: connect: permission deniedHere when I use this command it will solve the problem:chmod 777 /var/run/docker.sockBut I don't want to permit all users to access this socket because of security vulnerabilities.I should also say that the current user is root and it has access to/var/run/docker.sockHere are some useful information:echo $USER root ls -ls /var/run/docker.sock srw-rw---- 1 root docker 0 Jul 24 14:56 /var/run/docker.sock groups root dockerWhich user should I permit access to this file? jenkins is run on container and there is no jenkins user on my system, How can I find out which user is trying to access this socket file/var/run/docker.sockand consequently I got this error?
How to find out which user is accessing /var/run/docker.sock that will cause permission denied error
I see a few things that could be the reason for your problem. There should be no = sign after the az container create options --registry-login-server, --registry-password and --registry-username (see https://learn.microsoft.com/en-us/cli/azure/container?view=azure-cli-latest#az_container_create-examples). The command should look like: az container create --resource-group FRONT-SELECT-NA2 --registry-login-server jfrogtraining-docker-dev.jfrog.io --registry-username svc-faselect --registry-password "..." --file ads-azure.yaml
from Azure we try to create container using the Azure Container Instances with prepared YAML. From the machine where we executeaz container createcommand we can login successfully to our private registry (e.g private.dev on JFrog Artifactory ) after entering passworddocker login private.dev -u svc-faselect Login succeededWe have YAML file for deploy, and trying to create container using theazcommand from the SAME server.az container create --resource-group FRONT-SELECT-NA2 --registry-login-server="private.dev" --registry-username=svc-faselect --registry-password="..." --file ads-azure.yaml An error response is received from the docker registry 'private.dev'. Please retry later.I have only one image in my YAML file. I am having real big problem to debug why this error is returned since Error response does not provide any useful information. Search among the similar network issues but without success:https://learn.microsoft.com/en-us/azure/container-registry/container-registry-troubleshoot-access
Azure Container - can not login to private registry "Error response received from the docker registry"
Thanks to all. My ports were reversed:> docker run -p 9191:80 my:apache
I'm a newbie at docker. I'm creating a Hello, World example. All I'm trying to do is bring up Apache in a docker and then view the default website from the host machine.DockerfileFROM centos:latest RUN yum install epel-release -y RUN yum install wget -y RUN yum install httpd -y EXPOSE 80 ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]And then I build it:> docker build .And then I tag it:docker tag 17283f566320 my:apacheAnd then I run it:> docker run -p 80:9191 my:apache AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this messageIt then runs....In another terminal window, I attempt to issue thecurlcommand to view the default web site.> curl -XGET http://0.0.0.0:9191 curl: (7) Failed to connect to 0.0.0.0 port 9191: Connection refused > curl -XGET http://localhost:9191 curl: (7) Failed to connect to localhost port 9191: Connection refused > curl -XGET http://127.0.0.1:9191 curl: (7) Failed to connect to 127.0.0.1 port 9191: Connection refusedor I try localhostJust to make sure that I got the port correct, I run this:> docker ps -l CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 5aed4063b1f6 my:apachep "/usr/sbin/httpd -D F" 43 seconds ago Up 42 seconds 80/tcp, 0.0.0.0:80->9191/tcp angry_hodgkin
docker: Says connection refused when attempting to connect to a published port
Managed to get it fixed up. The fix included adding a new user in the Dockerfile. The user automatically receives 1000:1000 as UID and GID, but that can be swapped for other values if desired. The rest of the Dockerfile then runs as that user via the USER instruction. All the directories the user works with need to be chown -R'd to it and chmod'd with 2775 -R (or another mode, but with a 2 or 4 in front so that new files inherit permissions from the parent folder). Also make sure that you expose and create all needed volumes, or else qBittorrent will not start: creating /Downloads/temp was essential here, because the process is not running as root and could not create it on its own. The Dockerfile is available here: https://github.com/TheCreatorzOne/qbittorrent/blob/master/Dockerfile The Ansible file is used in the PlexGuide Automation Project, so it is available to look at there.
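A rough sketch of that pattern (the base image, names and paths here are illustrative, not the actual linked Dockerfile):

FROM alpine:3.9
RUN addgroup -g 1000 qbittorrent && \
    adduser -D -u 1000 -G qbittorrent qbittorrent
RUN mkdir -p /Downloads/temp /config && \
    chown -R 1000:1000 /Downloads /config && \
    chmod -R 2775 /Downloads /config   # leading 2 = setgid, so new files inherit the group
USER qbittorrent
VOLUME ["/Downloads", "/config"]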
So, I'm trying to get into creating docker images and I managed to get one going. It was qBittorrent, everything went fine until it started downloading files. All of qBits' directories are owned by1000:1000but as soon as it starts downloading a file, my docker-host machine says that the file folder is owned byroot:root.How can I make sure that everything the container creates is owned by1000:1000?I need it to be owned by that because other Docker containers, such as Radarr, need to access the files to import them and right now I'm getting permissions errors.I've tried doing achown -randsetgidon the host machine but the files keep getting created and owned by root...I'm open to all suggestions :) Thanks!My Dockerfile:https://github.com/TheCreatorzOne/qbittorrent/blob/master/Dockerfile
Docker container creating directories owned by root, I need them owned by 1000:1000
For anyone seeing this in the future, I ended up solving this, and the answer is on Server Fault.
I've attempted to compile cURL with HTTP/2 support by followingthis tutorial. I'm usingDockerand my application is based off theofficial PHP Docker image, which uses Debian, although I've produced the same problems in an Ubuntu machine running inside aVagrantVM.There appears to be no problem at first. Indeed, runningcurl --versionshows everything I'd expect:curl 7.47.1 (x86_64-pc-linux-gnu) libcurl/7.47.1 OpenSSL/1.0.1k zlib/1.2.8 libidn/1.29 libssh2/1.4.3 nghttp2/1.7.1 librtmp/2.3 Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp Features: IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSocketsAlso, I can connect tohttps://nghttp2.orgjust fine:curl --http2 -I https://nghttp2.org HTTP/2.0 200 date:Mon, 15 Feb 2016 18:02:34 GMT content-type:text/html content-length:6680 last-modified:Thu, 11 Feb 2016 14:29:49 GMT etag:"56bc9add-1a18" link:; rel=preload; as=stylesheet accept-ranges:bytes x-backend-header-rtt:0.000581 server:nghttpx nghttp2/1.8.0-DEV via:1.1 nghttpx strict-transport-security:max-age=31536000 x-frame-options:SAMEORIGIN x-xss-protection:1; mode=block x-content-type-options:nosniffThe problems begin when trying to connect to Apple's recently re-launchedAPNS Provider APIwhich now runs on HTTP/2.I've installed curl via Homebrew on my Mac (using--with-nghttp2) and I can get the following (expected) response:curl -d 'Hello' --http2 https://api.push.apple.com/3/device/test {"reason":"Forbidden"}However, if I try to run the same command from within my Docker image, I get:curl -d 'Hello' --http2 https://api.push.apple.com/3/device/test ?@@?HTTP/2 client preface string missing or corrupt. Hex dump for received bytes: 504f5354202f332f6465766963652f746573742048545450I'm unsure why this problems seems to be specific to Apple's service, and what needs to be done to remedy the situation.Any help would begreatlyappreciated!
Connecting to Apple's APNS using cURL with HTTP\2 support via nghttp2
TheDownwardAPI provides the way to expose information to containers.Description from thedocs:This page shows how a Pod can use environment variables to expose information about itself to Containers running in the Pod. Environment variables can expose Pod fields and Container fields.Referthis exampleto see how information is exposed as the environment variables on the containers.apiVersion: v1 kind: Pod metadata: name: dapi-envars-fieldref spec: containers: - name: test-container image: k8s.gcr.io/busybox command: [ "sh", "-c"] args: - while true; do echo -en '\n'; printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE; printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT; sleep 10; done; env: - name: MY_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: MY_POD_SERVICE_ACCOUNT valueFrom: fieldRef: fieldPath: spec.serviceAccountName restartPolicy: NeverThe script running inside the container can then access this information from the environment variable.
Having a .NET Core API running inside a docker container on a Kubernetes cluster: how to tell the API the name of the pod.Is there anyway to forward or inject the information?This could be usefull to pin issues down to pods by enriching logs or connections.
Show Kubernetes pod name from docker container running a .Net Core API
As an implementation detail, Docker actually uses the Linux kernel filesystem mount facility whenever it mounts a volume. To mount a volume it has to be mounted on to a directory, so if the mount target doesn't already exist, it creates a new empty directory to be the mount point. If the mount point is itself inside a mounted volume, you'll see the empty directory get created, but the mount won't get echoed out.(If you're on a Linux host, try runningmountin a shell while the container is running.)That is:/container_root/appis a bind mount to/host_path/app; they are they same underlying files.mkdir /container_root/app/node_modulescreates/host_path/app/node_modulestoo.Mounting something else on/container_root/app/node_modulesdoesn't cause anything to be mounted on/host_path/app/node_modules....which leaves an empty/host_path/app/node_modulesdirectory.The first time you start a container, and only then, if you mount an empty volume into a container, the contents from the image get copied into the volume. You're telling Docker this directory contains criticaldatathat needs to be persisted for longer than the lifespan of the container. It is not a magic "don't use the host directory volume" knob, and if you do things like change yourpackage.jsonfile, Docker will not update the contents of this volume.
I have the following in mydocker-compose.ymlfile:volumes: - .:/var/www/app - my_modules:/var/www/app/node_modulesI do this because I don't havenode_moduleson my host and everything gets installed in the image and is located at/var/www/app/node_modules.I have two questions regarding this:An empty folder gets created in my host namednode_modules. If I add another volume (named or anonymous) to the list in my.ymlfile, it'll show up in my host directory in the same folder that contains my.ymlfile. Fromthisanswer, it seems to have to do with the fact that there's these two mappings going on simultaneously. However, why is the folder empty on my host? Shouldn't it either a) contain the files from the named volume or b) not show up at all on the host?How does Docker know to check the underlying/var/www/app/node_modulesfrom the image when initializing the volume rather than just saying "Oh,node_modulesdoesn't exist" (since I'm assuming the host bind mount happens before the named volume gets initialized, hence/var/www/appshould no longer have a folder namednode_modules. It seems like it even works when I create a samplenode_modulesfolder on my host and a new volume while keepingmy_modules:/var/www/app/node_modules—it seems to still use thenode_modulesfrom the image rather than from the host (which is not what I expected, although not unwanted).
Why does docker-compose create an empty host folder when mounting named volume?
If you use Testcontainers inside Bitbucket Pipelines, there can be some issues, like the one mentioned above. It can be fixed by putting the following into bitbucket-pipelines.yml. The key part is the environment variable TESTCONTAINERS_RYUK_DISABLED=true. The full pipeline might look like this: pipelines: default: - step: script: - export TESTCONTAINERS_RYUK_DISABLED=true - mvn clean install services: - docker definitions: services: docker: memory: 2048
I configured bitbucket-pipelines.yml and usedimage: gradle:6.3.0-jdk11. My project built on Java11 and Gradle 6.3. Everything was Ok till starting test cases. Because I used Testontainers to test the application. Bitbucket could not start up the Testcontainer. The error is:org.testcontainers.containers.ContainerLaunchException: Container startup failedHow can be fixed the issue?
Testcontainer issue with Bitbucket pipelines
As @tkausl and @JohnKugelman pointed out, files can be copied with the following command syntax: RUN cp <source_path_in_image> <destination_path_in_image>
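For example (the base image and paths here are purely illustrative), copying a file from one place inside the image to another during the build:

FROM alpine:3.12
RUN mkdir -p /opt/app/config
COPY settings.conf /opt/app/settings.conf        # host (build context) -> image
RUN cp /opt/app/settings.conf /opt/app/config/   # image path -> image path, at build time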
How can I copy a file from a path within the container to a different pathwithin that same containerin a Dockerfile during the docker-compose build process?COPY This copies a file from the host outside the container. I don't know how to havespecify a file inside the container to copy.
Copy file(s) within the container in Dockerfile
Unfortunately, it's not possible to specify an alternative name for the .env file. I've wanted to be able to swap that out too, but you can see in the source code that it's coded directly to .env. Is there a reason you want to split out substitution variables per service? It feels like you should be able to put the variables for all of your services into a single .env file.
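One way to do that, sketched here with made-up service and variable names, is to namespace the variables per service inside the single .env file:

# .env
WEB_IMAGE_TAG=1.2.3
DB_IMAGE_TAG=12-alpine

# docker-compose.yml
version: "3"
services:
  web:
    image: myapp:${WEB_IMAGE_TAG}
  db:
    image: postgres:${DB_IMAGE_TAG}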
I have a situation where it would be nice to have multiple .env files, one for each service in my docker-compose.yml. Is there any way to specify a different filename to use? Can this be done on the level of individual services?I attempted to use theenv_filetag, unfortunately, this sets variables for use in Dockerfile and at run-time. The .env file, on the other hand, sets variables to be expanded in docker-compose.yml.
Is it possible to use different .env for different services?
You should create a named container by running the following command:docker run --name dtest ubuntu /bin/bash -c "echo Starting;sleep 20;echo Stopping"Then create the following upstart script (pay attention to the-aflag) which will manage the lifecycle of this container as you expectstart on runlevel [2345] stop on runlevel [!2345] respawn script /usr/bin/docker start -a dtest end scriptI would also suggest to add the-rflag to the main docker daemon execution script, so that docker will not automatically restart your containers when the host is restarted (instead this will be done by the upstart script)sudo sh -c "echo 'DOCKER_OPTS=\"-r=false\"' > /etc/default/docker"The process of configuring a Docker containers to work with process managers like upstart is described in a great detailhere
I have an upstart script (say,/etc/init/dtest.conf)start on runlevel [2345] stop on runlevel [!2345] respawn script DID=$(docker.io run -d ubuntu /bin/bash -c "echo Starting;sleep 20;echo Stopping") docker.io attach $DID end scriptWhen issuingstart dtest, the upstart logs show the proper cycle of "Starting ... Stopping" forever.However, if I issue astop dtest, then it appears to exit properly, but the container will run for the remainder of the sleep time (as evidenced by runningdocker.io psevery second).Shouldn't there be an easy way to run a docker image in a container with upstart and have its lifecycle be managed there?My ideal script would be something like this:start on runlevel [2345] stop on runlevel [!2345] respawn exec docker.io run -d ubuntu /bin/bash -c "echo Starting;sleep 20;echo Stopping"Environment:This is on AWS, using Ubuntu 14.04 in a T2.micro, withapt-get install -y docker.iobeing the only thing installed
Upstart script to run container won't manage lifecycle
With GitLab you are able to use a docker runner. When you use the docker runner, and not a shell runner, the Docker image and its services have to start up first, and it should give an error if something fails at that stage. Check these docs from GitLab. This is a classic yml from that page: default: image: name: ruby:2.2 entrypoint: ["/bin/bash"] services: - name: my-postgres:9.4 alias: db-postgres entrypoint: ["/usr/local/bin/db-postgres"] command: ["start"] before_script: - bundle install test: script: - bundle exec rake spec As you see, the test section is executed after the image is brought up, so you should not have to worry: GitLab should detect any errors when loading the image. If you are doing it with the shell gitlab-runner, you should start the Docker image yourself, like this: stages: - dockerStartup - build - test - deploy - dockerStop job 0: stage: dockerStartup script: - docker build -t my-docker-image . - docker run my-docker-image /script/to/run/tests [...] //your jobs here job 5: stage: dockerStop script: docker stop whatever
I'm usingGitlab CI/CDto build Docker images of our Node server.I am wondering if there is a way to test thatdocker runof the image was ok.We've had few occasions where the Docker builds but it is missing some files/env variables and it fails to start the server.Is there any way to run thedockerimage and test if it is starting up correctly in the CI/CD pipeline?Cheers.
Check docker run in Gitlab CICD pipeline
To enable SSL with the current version of laradock (as of Nov 2019) with a self signed certificate you must enable it in the nginx settings. Inside the folder nginx/sites remove the comments below line 6 "# For https" :# For https listen 443 ssl default_server; listen [::]:443 ssl default_server ipv6only=on; ssl_certificate /etc/nginx/ssl/default.crt; ssl_certificate_key /etc/nginx/ssl/default.key;restart nginx :docker-compose restart nginxand you're ready.If google-chrome complains you can enable the flag atchrome://flags/#allow-insecure-localhostto allow even invalid certificates.
I need your help to set my Laradock (with Docker) using Nginx and SSL "fake" certificate on my local machine.I have no idea how to setup it. Could you please help me?Thanks
SSL certificate on local Laradock Nginx project
I think the problem is not the sed command itself; it's related to which file you point it at. /usr/local/etc/php-fpm.d/zz-docker.conf is the file you are changing the port in, but inside your docker-compose file you are mounting a different file: ./docker/php-fpm/config/www.conf:/usr/local/etc/php-fpm.d/www.conf
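As a quick check (the grep is my own suggestion, not something from the question), you can see what the pool actually listens on inside the running container and make the files agree with the port nginx targets:

docker exec -it app sh -c "grep -R '^listen' /usr/local/etc/php-fpm.d/"
# if the mounted www.conf still says 9000, change it (or drop the mount) so the
# pool really listens on the port nginx's upstream points to, e.g. listen = 9001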
I'm usingphp-fpmwhich runs for default on the port9000. The problem's that I have other docker container based onphp-fpm, so I need to change the default port to another one, in order to not confusenginx.This is myDockerfile:FROM php:8.0.2-fpm-alpine RUN sed -i 's/9000/9001/' /usr/local/etc/php-fpm.d/zz-docker.conf WORKDIR /var/www/html CMD ["php-fpm"] EXPOSE 9001I tried to use thesedcommand to replace the port9000with9001.Inside mydocker-composefile I have this configuration:version: '3.9' services: php-fpm: container_name: app restart: always build: context: . dockerfile: ./docker/php-fpm/Dockerfile ports: - "9001:9000" volumes: - ./src:/var/www/html - ./docker/php-fpm/config/www.conf:/usr/local/etc/php-fpm.d/www.conf - ./src/public:/app/public - ./src/writable:/app/writable nginx: image: nginx:stable-alpine container_name: nginx restart: always volumes: - ./src:/var/www/html - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf - ./docker/nginx/sites/:/etc/nginx/sites-available - ./docker/nginx/conf.d/:/etc/nginx/conf.d depends_on: - php-fpm environment: VIRTUAL_HOST: ${HOST} LETSENCRYPT_HOST: ${HOST} LETSENCRYPT_EMAIL: ${EMAIL}as you can see I have exposed the port9001also in thedocker-composefile.The filedefault.confavailable withinconf.dfolder contains this:upstream php-upstream { server php-fpm:9001; }the problem's that for some reason, when I load my site I get the error 500. So this means that the stream doesn't send any signal. If I change to port9000everything works, but the stream is wrong 'cause it's the content of another application.How can I correctly change the default port?
How to change php-fpm default port?
You can do it like this in admin shell:iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1')) cinst -y git
I have an angular UI and a nodejs api. I am currently running windows server 2016 TP4 in Azure.Here are the steps I go through:I am able to remote in, create images, create containers based off those images, and attach to those containers no problem.I pulled a nodejs image from docker:docker pull microsoft/nodeand then created a container from that image:docker run --name 'my_api_name' -it microsoft/node cmdThat command takes me into the container via a windows command prompt. I typepowershellwhich takes me into a powershell shell and i can run npm commands.My question is,how do I install git onto this container?I want to reach out to the repository holding my app, pull it down and run it in this container. I will eventually push this container image up to the docker registry so clients can pull it down and run it on their windows env.
Docker on Windows Server 2016 TP4 Downloading git in container through powershell
The problem is the command:$ docker --registry-mirror=http:// -dIs intended for configuring the Docker daemon, not the Docker client. In boot2docker (which is what you're presumably using), this means you need to log into the boot2docker VM and run those commands there.You can log into the boot2docker VM withboot2docker ssh. Whilst you could just stop the daemon and restart with the new commands, it's best to edit the file/var/lib/boot2docker/profilewhich will be used each time boot2docker restarts. Just add something like:EXTRA_ARGS="--registry-mirror=http://"If you then restart boot2docker, you should be good to go.
I use MAC OS X.And I want to use mirror in thistutorial, its step 1 is need to do this:docker --registry-mirror=http:// -dBut, when I use this command in terminal, it did't work:flag provided but not defined: --registry-mirror See 'docker --help'.then, I use the other way in tutorial:you may be able to add the --registry-mirror options to the DOCKER_OPTS variable in /etc/default/dockerI don't know where to add this DOCKER_OPTS. I want to use mirror in client 1.7.0. Can anyone tell me how to set up the mirror?.I use this command to create mirror:docker run -d -p 5000:5000 -e REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/Users/v11/Documents/docker-registry --restart=always --name mirror -e STANDALONE=false -e MIRROR_SOURCE=https://registry-1.docker.io -e MIRROR_SOURCE_INDEX=https://index.docker.io registryI test it and find it didn't work like it describe that can download from local registry. Even if I fail to use this command :docker push localhost:5000/batman/ubuntuThis command can work before, I really don't know what happened. Maybe the flag "STANDALONE=false" affect? I want to setup mirror, can anyone tell me how to do.Thanks.
How to create docker registry mirror
You need to expose TCP port 9000 of the PHP container so that other containers are able to use it (see What is the difference between docker-compose ports vs expose): php: image: php:7-fpm expose: - "9000" ... Do you really want your sites to be available on TCP port 8080 rather than the standard port 80? If not, change "8080:80" to "80:80". Besides the PHP handler, use a default location (although your site should work even without it, it is bad practice not to add it to your nginx config): location / { try_files $uri $uri/ =404; }
I have a very simple config in docker-compose withphp:7-fpmandnginxthat I want to use to host simple php websites. But I am having issues getting it available in production.Can someone please tell me what I did wrong?Here is docker-compose.prod.yml:version: '3.8' services: web: image: nginx:latest ports: - "8080:80" volumes: - ../company/site:/code - ./site.prod.conf:/etc/nginx/conf.d/default.conf php: image: php:7-fpm volumes: - ../company/site:/codeHere is the site.prod.conf file:server { listen 80; index index.php index.html; server_name example.com; error_log /var/log/nginx/error.log; access_log /var/log/nginx/access.log; root /code; location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass php:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; } }I can compose up and the logs appear to be fine and when I run docker ps:docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES c268a9cf4716 php:7-fpm "docker-php-entrypoi…" 27 minutes ago Up 16 seconds 9000/tcp example_code-php-1 beaaec39209b nginx:latest "/docker-entrypoint.…" 27 minutes ago Up 16 seconds 0.0.0.0:8080->80/tcp, :::8080->80/tcp example_code-web-1Then checking the ports, I think this looks fine:netstat -tulpn | grep :80 tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 204195/docker-proxy tcp6 0 0 :::8080 :::* LISTEN 204207/docker-proxy
Docker-Compose with PHP and Nginx not working on production
If you are running containers in docker, you can add cron tasks on the docker host machine to execute commands in the docker containers.For example, to run 'stress' application in your container every 5 minutes you can add the following (substituting your container ID of course) to your crontab:*/5 * * * * docker exec c78ddbed4ad9 /bin/sh -c 'stress -d 1 --hdd-bytes 64M --cpu 1 --io 2 --vm 2 --vm-bytes 64M --timeout 60s' >> /tmp/cronstress.log 2>&1I am running this as root user on the docker host.or just run cron:root@dockerhost:cron
I've got some cronjobs in my debian docker container. They don't start automatically why?Do I have to do some workarounds?
Cronjobs in Docker container how get them running?
The image you linked to is private. Did you do a docker login or create a .dockercfg file before docker run? (BTW, I linked to an outdated commit in the docker source repo for the authentication file since it seems to be broken in the current documentation.)
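In other words, something along these lines before the run command from the question (the username shown is just the repository owner; enter your own credentials when prompted):

docker login
# Username: dtwill
# Password: ********
docker run -i -p 9200:9200 -p 9300:9300 -p 9001:9001 -p 15672:15672 --rm -t dtwill/ddcintegrationdevenvs:blkmesa_esrbtmq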
I have an automated docker build set up and the build appears to be working fine but when I try to run it I get this error:Unable to find image 'dtwill/ddcintegrationdevenvs:blkmesa_esrbtmq' locally Pulling repository dtwill/ddcintegrationdevenvs 2014/09/11 14:33:20 Error: image dtwill/ddcintegrationdevenvs not foundRun command:docker run -i -p 9200:9200 -p 9300:9300 -p 9001:9001 -p 15672:15672 --rm -t dtwill/ddcintegrationdevenvs:blkmesa_esrbtmqI'm trying to test:a. docker looks for image locally b. if image is not found locally that docker will successfully pull and run imageImage is validhttps://registry.hub.docker.com/u/dtwill/ddcintegrationdevenvs/
automated docker build run error: Unable to find image
MacOS has some mounting problems because of differences between the user and group owning the file versus the user and group modifying/reading the file. As a workaround, do the following (preferably using the latest version of Docker): $ brew install docker-machine-nfs $ docker-machine start yourdockermachine $ docker-machine-nfs yourdockermachine --shared-folder=/Users --nfs-config="-alldirs -maproot=0" You can change the name of yourdockermachine as you like. Also, you have the ability to change the shared folder you want to map. The above option is the best bet and works in all cases; I would suggest not changing it so that you don't mess around with system files. After the above setup, make sure you provide appropriate read, write, and execute permissions to your files and folders. NOTE: Dependencies for the above procedure are brew and docker-machine (or the complete Docker Toolbox for simplicity). UPDATE 1: Docker for Mac Beta is in private invite phase. It runs Docker natively on Mac on top of the xhyve hypervisor, which means no more permission errors and improved performance. UPDATE 2: Docker for Mac is now in Public Beta. The underlying technology remains the same and the VM is completely managed by the Docker service. The version as of this writing is 1.12.0-rc2, which works seamlessly with OS X without any intervention of docker-machine.
I have a container running a PHP application and thephp-fpmservice can't write the cache files inside an folder provided by a docker volume.I gave 777 permissions on the folder I need to write, from the host machine, but it works just for a while. Files created byphp-fpmdoesn't have necessary permissions. Furthermore it's not possible to change the owner and group withchowncommand.This is mydocker-composer.ymlfile:web: image: eduardoleal/nginx external_links: - proxy links: - php:php container_name: "app-web" environment: VIRTUAL_HOST: web.app volumes_from: - data volumes: - ./src/nginx-vhost.conf:/etc/nginx/sites-enabled/default php: image: eduardoleal/php56 container_name: "app-web-php" volumes_from: - data data: container_name: "app-web-data" image: phusion/baseimage volumes: - /Users/eduardo.leal/Code/vidaclass/web:/var/www/publicI'm running docker on OSX with VirtualBox.
How to handle permission inside a volume from docker?
Because you have only one pod, and it is on 10.1.10.110. Your curl is aimed at the wrong nodes: you didn't deploy a pod on the 111 and 112 nodes, which is the reason those endpoints aren't working. Just execute curl http://10.1.10.110:31520 from the other nodes and it will work.
I've installed kubernetes cluster with help of Kubespray. Cluster having 3 Nodes (2 Master & 1 Worker). node1 - 10.1.10.110, node2 - 10.1.10.111, node3 - 10.1.10.112$ kubectl get nodes NAME STATUS ROLES AGE VERSION node1 Ready master 12d v1.18.5 node2 Ready master 12d v1.18.5 node3 Ready 12d v1.18.5I deployed this pod in node1 (10.1.10.110) and exposed nodeport service as shown.NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES default pod/httpd-deployment-598596ddfc-n56jq 1/1 Running 0 7d21h 10.233.64.15 node1 --- NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR default service/httpd-service NodePort 10.233.16.84 80:31520/TCP 12d app=httpdService description$ kubectl describe services -n default httpd-service Name: httpd-service Namespace: default Labels: Annotations: Selector: app=httpd Type: NodePort IP: 10.233.16.84 Port: 80/TCP TargetPort: 80/TCP NodePort: 31520/TCP Endpoints: 10.233.64.15:80 Session Affinity: None External Traffic Policy: ClusterQuestion: I can able to access the service from node1:31520 (where the pod actually deployed) but can't able to access the same service from other nodes (node2:31520 (or) node3:31520)$curl http://10.1.10.110:31520 It Works! but if I curl with other node IP, timed out response $curl http://10.1.10.111:31520 curl (7): Failed connect to 10.1.10.111; Connection timed out $curl http://10.1.10.112:31520 curl (7): Failed connect to 10.1.10.112; Connection timed outCan anyone suggest what I am missing ?
Why I cant access a kubernetes pod from other Nodes IP?
Probably you are trying to bind the container's port 22 to the already occupied port 22 of your host. You need to map the container's SSH server to some unoccupied port of your host, e.g. 5000. You can expose particular ports during container start-up with the flag "-p HOST_PORT:CONTAINER_PORT": docker run -p 127.0.0.1:5000:22 docker_image Then you should be able to reach the container's git server: git remote add container [email protected]:5000/folder git push container branch_name
On my host machine, I'm able to create a second user:adminandpushto that user's git folder using:admin@localhost:folderWhen I create a Docker container hosting a git server, after exposingport 22how do Igit pushfrom my local machine tolocalhost:22which would be the location of the container's ports?
How do I push to a git server on a container?
There are two options I know of.First, you can have buildx run builds on multiple nodes, one for each platform, rather than using qemu. For that, you would usedocker buildx create --appendto add the additional nodes to the builder instance. The downside of this is you'll need the nodes accessible from the node runningdocker buildxwhich likely doesn't apply to ephemeral cloud build environments.The second option is to use the experimentaldocker manifestcommand. Each builder would push a separate tag. And at the end of all those, you would usedocker manifest createto build a manifest list anddocker manifest pushto push that to a registry. Since this is an experimental feature, you'll want toexport DOCKER_CLI_EXPERIMENTAL=enabledto see it in the command line. (You can also modify~/.docker/config.jsonto have an"experimental": "enabled"entry.)
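A sketch of the second option with made-up image tags (each per-arch CircleCI job pushes its own tag first, then a final job stitches them into one manifest list):

# pushed earlier by the per-arch jobs:
#   myrepo/app:1.0-amd64   (built on the amd64 executor)
#   myrepo/app:1.0-arm64   (built on the arm64 executor)
export DOCKER_CLI_EXPERIMENTAL=enabled
docker manifest create myrepo/app:1.0 \
  myrepo/app:1.0-amd64 \
  myrepo/app:1.0-arm64
docker manifest push myrepo/app:1.0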
I am using circleci to deploy an application, I deploy to both amd and arm architectures so my builds are multi-arch which I have been using docker buildx for. With the new arm support from circleci I was able to cut the time on this process down from sometimes 3 hours using quemu, to around 20 minutes by building both separately in their respective build environments (no need to use quemu when you build on the target arch). What I am running into is that when I run the buildx commands, one build will complete, push it's results to the repository and then the other completes and overwrites the previous. What I am trying to achieve is combining the built images into a single manifest to push together as if I built them at the same time. Is there a way to achieve what I am attempting without getting into direct modification of the manifest files? An example of the commands needed to achieve this would be extremely helpful!Thanks in advance!
can you combine separate builds from docker?
I found a solution. One of my docker containers produces 9GB of logs. You can just clear those logs manually. Sort the /var/lib/docker/containers directory to show which container directories have the largest log files: du -d1 -h /var/lib/docker/containers | sort -h Then clear the contents of a log file: cat /dev/null > /var/lib/docker/containers/container_id/container_log_name Source: https://success.docker.com/article/no-space-left-on-device-error
My VM where I installed docker is full, so the docker daemon stopped. Now I saw a lot of solutions on how to fix it, but the problem is all of them requiring that the docker daemon is running (docker system prune). But when I want to start the docker daemon, it can't and this message appears:Error starting daemon: Unable to get the TempDir under /var/lib/docker: mkdir /var/lib/docker/tmp: no space left on deviceIs there another way to clear the space?
Docker taking all space 100%
Eventually, I foundthis commentwhich led me tothis blog post, in which I learned C++ debugging is disallowed on docker by default.The arguments --cap-add=SYS_PTRACE and --security-opt seccomp=unconfined are required for C++ memory profiling and debugging in Docker.I added--cap-add=SYS_PTRACE --security-opt seccomp=unconfinedto thedocker runcommand, and the debugger was able to connect.
This took me full days to find, so I am posting this for future reference.I am developing C++ on a docker image. I am using clion.My code is compiled in debug mode, and runs fine in run mode, but when trying to debug, the process exits immediately with the very informativeProcess finished with exit code 1When switching the debugger fromtoTrying to debug still exits, but yields a popup in clion'A packet returned error 8'The same code debugs fine on a local computer.Thedocker runcommand isRUN_CMD="docker run --group-add ${DOCKER_GROUP_ID} \ --env HOME=${HOME} \ --env="DISPLAY" \ --entrypoint /bin/bash \ --interactive \ --net "host" \ --rm \ --tty \ --user=${USER_ID}:${GROUP_ID} \ --volume ${HOME}:${HOME} \ --volume /mnt:/mnt \ $(cat ${HOME}/personal-uv-docker-flags) \ -v "${HOME}/.Xauthority:${HOME}/.Xauthority:rw" \ --volume /var/run/docker.sock:/var/run/docker.sock \ --workdir ${HOME} \ ${IMAGE} $(${DIR}/impl/known-tools.py cmd-line ${TOOL})"How to debug C++ on docker?
gdb exits immediately `Process finished with exit code 1` or lldb `'A packet returned error 8'` on docker. Also: How to allow c++ debugging in docker
The official PHP Docker image uses /usr/local/etc/php as its base config folder: see its Dockerfile.
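A hedged Dockerfile sketch of working with that layout instead of /etc/php5 (the docker/php/date.ini path is a made-up example file that would contain only date.timezone = "Europe/Paris"):

FROM php:5-fpm
# drop-in .ini files under conf.d are read by both the CLI and FPM in the official image
COPY docker/php/date.ini /usr/local/etc/php/conf.d/date.ini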
I'm creating a Symfony environment (PHP-FPM, Nginx, & more) with Docker & Docker-compose.But, PHP does not use my php.ini and ignores the config (date.timezone parameter is not found in my Symfony application).Of course, when I go on my container, the date.timezone is correctly set in the 2 php.ini (cli & FPM).I don't understand why, but it works if I put my php.ini in/usr/local/etc/php/folder (wtf)Did I miss something?docker-compose.yml :nginx: image: nginx volumes: - "./docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro" links: - "php:php" ports: - "80:80" - "443:443" working_dir: "/etc/nginx" php: build: docker/php volumes: - ".:/var/www:rw" working_dir: "/var/www"Dockerfile :FROM php:5-fpm ENV DEBIAN_FRONTEND noninteractive RUN apt-get update && \ apt-get install -y php5-common php5-fpm php5-cli php5-mysql php5-apcu php5-intl php5-imagick && \ apt-get clean && \ rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* RUN sed -i 's/;date.timezone =/date.timezone = "Europe\/Paris"/g' /etc/php5/fpm/php.ini RUN sed -i 's/;date.timezone =/date.timezone = "Europe\/Paris"/g' /etc/php5/cli/php.ini RUN usermod -u 1000 www-data
PHP date.timezone not found with Docker & PHP-FPM
TL;DR: Almost all modifications you need to make to GKE nodes, such as adding trusted root certificates, can be done with a DaemonSet. There is an excellent guide by the user Sam Stoelinga on how to do exactly what you are looking for; the link can be found here. In summary, Sam's approach distributes the cert to each node using a DaemonSet. Since a DaemonSet guarantees that one pod is always running on each node, that pod takes care of adding your certificate to the node so you can pull your images from the private registry. Adding the certificate to the node by hand will normally not work, because if GKE needs to recreate the node your change will be lost. The DaemonSet approach guarantees that even if a node is recreated, the DaemonSet schedules one of these "overhaul pods" on it, so you always have the cert in place. The steps Sam proposed are simple: Create an image with the commands needed to distribute the certificate. This step differs depending on whether you are using Ubuntu or COS nodes. For COS nodes, the commands your pod needs to run are outlined by Sam:
cp /myCA.pem /mnt/etc/ssl/certs
nsenter --target 1 --mount update-ca-certificates
nsenter --target 1 --mount systemctl restart docker
For Ubuntu nodes, the commands are outlined in several Ask Ubuntu posts like this one. Push the image to a container registry that your nodes already have access to, like GCR. Deploy the DaemonSet using that image, running it as privileged with the NET_ADMIN capability (needed to perform this operation) and mounting the host's "/etc" folder inside the pod. Sam added an example of doing this that may help, but you can use your own definition. If you face problems while trying to deploy a privileged pod, it may be worth taking a look at the GKE documentation about Using PodSecurityPolicies.
I have a kubernetes cluster in GKE. Inside the cluster there is an private docker registry service. A certificate for this service is generated inside a docker image by running:openssl req -x509 -newkey rsa:4096 -days 365 -nodes -sha256 -keyout /certs/tls.key -out /certs/tls.crt -subj "/CN=registry-proxy"When any pod that uses an image from this private registry tries to pull the image I get an error:x509: certificate signed by unknown authorityIs there any way to put the self signed certificate to all GKE nodes in the cluster to resolve the problem?UPDATEI put the CA certificate to each GKE node as @ArmandoCuevas recommended in his comment, but it doesn't help, still getting the errorx509: certificate signed by unknown authority. What could cause it? How docker images are pulled into pods?
How to put self-signed certificate to each node of GKE cluster?
I found my solution this morning based on the unauthorized: incorrect username or password error. When I checked Docker for Windows in the system tray, I was logged in with my Docker Hub email address. Docker for Windows accepts that, which is misleading, but it causes trouble for apps it interacts with, like VS2017. Logging in with my Docker Hub username instead solved it.
I'm getting started with Docker and familiar with .NET Core and Visual Studio 2017. I've created a new Web Application (Razor Pages) named "WebApplicationCore21" with Docker Support enabled and receive a nice Dockerfile out the gate.FROM microsoft/dotnet:2.1-aspnetcore-runtime-nanoserver-1709 AS base WORKDIR /app EXPOSE 62911 EXPOSE 44323 FROM microsoft/dotnet:2.1-sdk-nanoserver-1709 AS build WORKDIR /src COPY WebApplicationCore21/WebApplicationCore21.csproj WebApplicationCore21/ RUN dotnet restore WebApplicationCore21/WebApplicationCore21.csproj COPY . . WORKDIR /src/WebApplicationCore21 RUN dotnet build WebApplicationCore21.csproj -c Release -o /app FROM build AS publish RUN dotnet publish WebApplicationCore21.csproj -c Release -o /app FROM base AS final WORKDIR /app COPY --from=publish /app . ENTRYPOINT ["dotnet", "WebApplicationCore21.dll"]While the project builds ok, iterrors on run(F5), stating:Description: The DOCKER_REGISTRY variable is not set. Defaulting to a blank string. Project: docker-compose File: Microsoft.VisualStudio.Docker.Compose.targets Line: 363Steps verified:Enabled Hyper-V on motherboard and in Windows 10 ProDocker for Windows installedCan login both Docker Hub and clientSwitched Docker client to Windows containers from LinuxI've also noticed that although I can log into Docker client using my hub credentials, attempting to rundocker loginin PowerShell and using the same username/password produces the following:Error response from daemon: Get https://registry-1.docker.io/v2/: unauthorized: incorrect username or passwordMaybe this is helpful or perhaps unrelated; all I need is to run from VS2017.
VS2017 Build Fail - DOCKER_REGISTRY
Use --network="host" in your docker run command; then 127.0.0.1 inside your docker container will point to your docker host. Refer to the answers in this post.
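A hedged sketch of what that looks like (the image name is a placeholder; note that host networking only behaves this way on a native Linux host, not inside a Docker Toolbox/docker-machine VM):

docker run --rm --network="host" my-spring-boot-image
# with host networking, jdbc:mysql://127.0.0.1:3306/geosoldatabase resolves to the host's MySQL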
How can I connect a Spring Boot (JAR) application, running in Docker, to my MySql server on my computer? [I tried different posts, but that didn't help]In my Spring Boot 'application.properties' I have:spring.datasource.url = jdbc:mysql://localhost:3306/geosoldatabaseI tried a number of options:$ docker run -p 8080:8080 --name containername imagename $ docker run --net="host" -p 8080:8080 --name containername imagename $ docker run -p 8080:8080 --add-host=localhost:192.168.99.100 --name containername imagenameBut alas, I cannot get connection to the MySql server. Hibernate fails. On my CAAS provider this all works nicely - of course with a known container name.My Dockerfile is very simple:FROM fabric8/java-jboss-openjdk8-jdk ENV JAVA_APP_JAR myapplication.jar ENV AB_OFF true EXPOSE 8080 ADD target/$JAVA_APP_JAR /deployments/As suggested, environment variables can also be used. This is what I've done so far:Define in Windows10 environment settings screen, I define the following environment variables: [1] DATABASE_HOST=127.0.0.1:3306 and [2] DATABASE_NAME=mydbnameI changed the application.properties file as suggested: spring.datasource.url = jdbc:mysql://${DATABASE_HOST}/${DATABASE_NAME}In the Docker Quickstart screen after I type "docker push... " I get the same errors. This time the cause is different:Caused by: java.net.UnknownHostException: ${DATABASE_HOST}: Name or service not known.To check whether the environment variables are correctly set, I type: "echo ${DATABASE_HOST}" and I get the value "127.0.0.1:3306".Update: suggested was to put the 'docker-machine ip' address into the database_host variable. The cause was now a bit different:Caused by: org.hibernate.tool.schema.spi.SchemaManagementException: Unable to open JDBC connection for schema management target
Docker - Spring Boot application - cannot access MySql server on localhost
OK, I have found a solution. Basically MongoDB has a feature that allows you to set up access control (--auth) while still permitting localhost connections. See the mongo localhost exception. So this is my final script:
# Create a container from the mongo image,
# run it as a daemon (-d), expose the port 27017 (-p),
# set it to auto start (--restart)
# and with mongo authentication (--auth)
# Image used is https://hub.docker.com/_/mongo/
docker pull mongo
docker run --name YOURCONTAINERNAME --restart=always -d -p 27017:27017 mongo mongod --auth
# Using the mongo "localhost exception" add a root user
# bash into the container
sudo docker exec -i -t YOURCONTAINERNAME bash
# connect to local mongo
mongo
# create the first admin user
use admin
db.createUser({user:"foouser",pwd:"foopwd",roles:[{role:"root",db:"admin"}]})
# exit the mongo shell
exit
# exit the container
exit
# now you can connect with the admin user (from any mongo client >=3 )
# remember to use --authenticationDatabase "admin"
mongo -u "foouser" -p "foopwd" YOURHOSTIP --authenticationDatabase "admin"
I want to create a docker container with a mongodb configured with client access control (user authentication, seethis).I have successfully configured a docker container with mongo usingthis image. But it doesn't use mongo access control.The problem is that to enable access control I have to run mongodb with a specific command line (--auth) but only after creating the first admin user.With a standard mongodb installation I normally perform these steps:run mongod without--authconnect to mongo and add the admin userrestart mongo with--authHow I'm supposed to do it with docker? Because mongo image always start without--auth. Should I create a new image? Or maybe modify the entry point?Probably I'm missing something, I'm new to docker...
Mongodb docker container with client access control
This note is hidden in the extended docker run documentation: "A process running as PID 1 inside a container is treated specially by Linux: it ignores any signal with the default action. So, the process will not terminate on SIGINT or SIGTERM unless it is coded to do so." The main Docker container process (your container's ENTRYPOINT or CMD, or the equivalent specified on the command line) runs as process ID 1 inside the container. That PID is normally reserved for a special init process and is special in a couple of ways. Possibly the simplest answer is to let Docker inject an init process for you as PID 1 by adding --init to your docker run command. Alternatively, on Node you can register a signal event to explicitly handle SIGINT. For example, if I extend your script to have process.on('SIGINT', function() { process.exit(); }); and then rebuild the image and re-run it, it responds to ^C.
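A hedged one-liner applying the first suggestion to the exact command from the question below (only --init is added):

docker run -it --init --rm --name my-running-script \
  -v "$PWD":/usr/src/app -w /usr/src/app \
  node:8 node your-daemon-or-script.js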
Taken from Docker's documentation that if you want to run a standalone NodeJS script you are supposed to use the following command:docker run -it --rm --name my-running-script -v "$PWD":/usr/src/app -w /usr/src/app node:8 node your-daemon-or-script.jsThis works except that it's not possible to stop the script using Ctrl-C. How can I achieve that?Here is my script.js:console.log('Started - now try to kill me...'); setTimeout(function () { console.log('End of life.'); }, 10000);
What is the easiest way to run a single NodeJS script using Docker and be able to terminate it with Ctrl-C
This was happening to me because PyCharm was holding onto stale containers. Running the following (to remove the PyCharm containers): docker ps -a | grep -i pycharm | awk '{print $1}' | xargs docker rm and then doing Invalidate Caches / Restart in PyCharm fixed it for me.
I am attempting to run the pycharm debugger inside a docker container. This has worked for me in the past, so I know my config is right, but during one of my docker container purges, I must have deleted a container pycharm needs, pycharm_helpers. Unfortunately pycharm can't recover and pull/rebuild the correct images itself, and I can't successfully repull the image manually (yes, I'm logged in to docker registry). Any way to reset this mess, or pull pycharm_helpers? I have tried cache invalidate/restart already
Pull access denied for pycharm_helpers, repository does not exist or may require 'docker login'
You can put the command below in your Dockerfile to solve this problem: RUN echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf
When I run this code below inside containerDockerrunning Java JDK 8 onAlpine Linuximport java.io.*; import java.util.*; import java.net.*; public class SomaDBTest { public static void main(String... args) throws Throwable { InetAddress ip = InetAddress.getByName("mysql"); System.out.println("Begin - mysql IP Addr = " + ip.getHostAddress()); . . . } }I get the error:Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8 Exception in thread "main" java.net.UnknownHostException: mysql: unknown error at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) at java.net.InetAddress.getAllByName0(InetAddress.java:1276) at java.net.InetAddress.getAllByName(InetAddress.java:1192) at java.net.InetAddress.getAllByName(InetAddress.java:1126) at java.net.InetAddress.getByName(InetAddress.java:1076) at SomaDBTest.main(SomaDBTest.java:52)Any Ideas ?By the way, I can run theping mysqlandnslookupcommand successfully.# ping mysql PING mysql (172.17.0.2): 56 data bytes 64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.185 ms 64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.283 ms 64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.424 ms # nslookup mysql Server: (null) Address 1: ::1 localhost Address 2: 127.0.0.1 localhost Name: mysql Address 1: 172.17.0.2 mysqlMy Dockerfile is very simple:FROM frolvlad/alpine-oraclejdk8 ADD bin / WORKDIR /The filesSomaDBTest.javaandSomaDBTest.classis inbindirectory.To run the container you do :docker build -t testInetAddress . docker run -i -t testInetAddress java SomaDBTest
java.net.InetAddress java class doesn't resolve IP on Alpine Docker container
I looked into the image ofadamantium/flutterand saw in the Dockerfile that it depends onubuntu:18.04which is shipped with Python2 directly, as mentioned in PEP-394 (see the link below for more information on this).https://www.python.org/dev/peps/pep-0394/So, I don't understand why you want to re-install it again. What happened is that you have used a Dockerfile that installs another version of Python2 in/usr/local/bin/and overwrites the symbolic link that points to the original Python2 as you can see indocker buildlogsif test -f /usr/local/bin/python -o -h /usr/local/bin/python; \ then rm -f /usr/local/bin/python; \ else true; \ fi (cd /usr/local/bin; ln python2.7 python)You can then verify the current Python interpreter within the container:root@9b9176e6c26c:/# which python /usr/local/bin/python root@9b9176e6c26c:/# python --version Python 2.7Meanwhile, I have removed the part which installs Python2 from the Dockerfile and got this.root@e6dd827dac1d:/# which python /usr/bin/python root@e6dd827dac1d:/# python --version Python 2.7.15rc1Then import what you want directly:root@e6dd827dac1d:/# python -c "import firebase_admin" root@e6dd827dac1d:/# echo $? 0You can see that it succeeded by returning code 0.Dockerfile after modification:FROM adamantium/flutter RUN apt-get update && \ apt-get install -y wget && \ apt-get install -y build-essential && \ apt-get install -y zlib1g && \ apt-get install zlib1g-dev && \ wget https://www.python.org/ftp/python/3.6.7/Python-3.6.7.tgz && \ tar xvzf Python-3.6.7.tgz && \ cd Python-3.6.7 && \ ./configure && make && \ make install RUN apt-get install -y python-pip && \ pip install firebase-admin
When I try to importfirebase_admininpython 2.7I get the error:ImportError: No module named google.authThis is theDockerFileI'm using.I've installed Python from the source code usingwget https://www.python.org/ftp/python/2.7/Python-2.7.tgz tar xvzf Python-2.7.tgz cd Python-2.7 ./configure make make installThen I've installed pip and firebase admin by running:apt-get install -y python-pip pip install firebase-adminThen I ranimport firebase_admininside the python shell. I got the error:ImportError: No module named google.authI've runpip show google.authand got the following output:Name: google-auth Version: 1.6.3 Summary: Google Authentication Library Home-page: https://github.com/GoogleCloudPlatform/google-auth- library-python Author: Google Cloud Platform Author-email:[email protected]License: Apache 2.0 Location: /usr/local/lib/python2.7/dist-packages Requires: cachetools, six, pyasn1-modules, rsaI've runecho $PYTHONPATHand got this:/usr/local/lib/python2.7/dist-packages:/usr/lib/python2.7/dist-packagesThat means thegoogle.authis installed and its directory is in thePYTHONPATH, why python can't find it? and how to fix it?
ImportError: No module named google.auth
I'm not sure when you set this up, but there is an updated permission model after GitLab 8.12 when using GitLab runners and logging into the GitLab Container Registry. As per the docs, you can do: before_script: - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
I have a docker registry setup on my gitlab server. Here is my .gitlab-ci.yml file:image: docker:18.05.0-ce services: - docker:dind stages: - build - test - release variables: TEST_IMAGE: http://my.gitlab.ip:4444/path/to/project:$CI_COMMIT_REF_NAME RELEASE_IMAGE: http://my.gitlab.ip:4444/path/to/project:latest before_script: - docker login -u $USERNAME -p $PASSWORD http://my.gitlab.ip:4444 build: stage: build script: - docker build --pull -t $TEST_IMAGE . - docker push $TEST_IMAGE # ... # more commandsI am using a secret variable for my username and password. When I push code and the runner runs through this file, I get the following error:WARNING! Using --password via the CLI is insecure. Use --password-stdin. Error response from daemon: Get https://my.gitlab.ip:4444/v2/: http: server gave HTTP response to HTTPS clientSo I tried using--password-stdininstead like this:docker login -u $USERNAME --password-stdin $PASSWORDhttp://my.gitlab.ip:4444And I get this error:"docker login" requires at most 1 argument. See 'docker login --help'. Usage: docker login [OPTIONS] [SERVER] [flags]Edit:I have also tried this for my docker login command:docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRYand received this error:WARNING! Using --password via the CLI is insecure. Use --password-stdin. Error response from daemon: Get https://my.gitlab.ip:4444/v2/: http: server gave HTTP response to HTTPS clientI have made the following changes on my gitlab server:In /etc/default/docker:DOCKER_OPTS="--insecure-registry http://my.gitlab.ip:4444"In /etc/docker/daemon.json:{ "insecure-registries" : ["http://my.gitlab.ip:4444"] }I have also done the same on my gitlab runner (different server).Why is it showing that I'm using https in the error and how do I change it to http?
docker login using -p gives error, and when I switch to --password-stdin like it recommends still gives error - gitlab-ci
Off the top of my head I'd blame a change in matrixStats [but see below, it appears blameless] -- I am somewhat familiar with all the other moving parts and not aware of changes or bugs. One thing that is fishy, though, is the trailing line break: RUN install.r Rcpp RcppEigen matrixStats \ You may try without it. Edit: For what it is worth, I just fired up our standard base layer Docker image r-base via docker run --rm -ti r-base /bin/bash and invoked install.r Rcpp RcppEigen matrixStats, which executed just fine. So if something is wrong with that other Docker container you may have to take it up with its author and work through his changes relative to our Dockerfile, which he seems to have used as a base.
On its last line,thisDocker file callslittler::install.rto installRcppRcppEigenandmatrixStats.The whole code was working like a charm a couple of months back. Now, it bombs at that last step. More precisely,RcppandRcppEigenstill install perfectly, but when it comes to installingmatrixStats, I get:installing to /usr/local/lib/R/site-library/matrixStats/libs ** R ** inst ** byte-compile and prepare package for lazy loading ** help *** installing help indices ** building package indices ** installing vignettes ** testing if installed package can be loaded Error in get(name, envir = asNamespace(pkg), inherits = FALSE) : object 'checkCompilerOptions' not found Calls: ::: -> get Execution halted ERROR: loading failed * removing ‘/usr/local/lib/R/site-library/matrixStats’ The downloaded source packages are in ‘/tmp/downloaded_packages’ Warning message: In install.packages(f, lib, if (isMatchingFile(f)) NULL else repos) : installation of package ‘matrixStats’ had non-zero exit statusIt's an error I never had before and have trouble locating where it is even coming from. What could be causing this problem? Any info would already help a lot.
checkCompilerOptions Error while installing package (littler/Docker)
To find one container from another, you can use a 'service discovery' mechanism such as SkyDock ("Skydock - Automagic Service Discovery for Docker"). Skydock monitors docker events when containers start, stop, die, are killed, etc., and inserts records into a dynamic DNS server, skydns. This allows standard DNS queries for services running inside docker containers. For the more complex case where your containers are on multiple hosts and you need a way to network them together, see weave-dns (please note I work on weave and weave-dns).
With docker linking I can have container A link to container B. Then I can see B's IP address and exposed port in A's ENV variables. However, how can I figure out A's IP address within container B?
docker linking how can both containers know each others ip
According to your settings, all rotated logs (.log.1, .log.2) are stored in /var/lib/docker/containers/..., and as per the docker documentation you can change those settings for docker in daemon.json: "log-driver": "json-file", "log-opts": { "max-size": "10m", "max-file": "3" } In /var/log/containers you can find a link to the most recently created log file. As per the documentation for fluentd, you should consider using the in_tail option: "in_tail is included in Fluentd's core. No additional installation process is required. When Fluentd is first configured with in_tail, it will start reading from the tail of that log, not the beginning. Once the log is rotated, Fluentd starts reading the new file from the beginning. It keeps track of the current inode number." Please refer to the similar community post.
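A hedged sketch of applying that rotation setting on the node and restarting the daemon (the values are just examples, and it only affects containers created afterwards):

cat >/etc/docker/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
systemctl restart docker   # existing containers keep their old log settings until recreated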
If the max-file value is set to 2, two files are created as shown below: 11111-json.log 11111-json.log.1 But here, when the 11111-json.log file reaches max-size, the contents of 11111-json.log are moved to 11111-json.log.1 and the size of 11111-json.log becomes zero. At this point I lose the last log lines under /var/log/containers. The log in the /var/log/containers path is ultimately a link to /var/lib/docker/containers/~, so if the file mentioned above works that way, the log will be lost. How can I avoid losing it?
docker log driver "json-file" log loss when rolling update
You can control the speed at which a deployment proceeds by setting the following parameters:
- deploymentConfiguration (specifically, minimumHealthyPercent in your case)
- enabling health checks (with load balancer health checks if you are using a load balancer, or with container health checks)
- setting healthCheckGracePeriodSeconds (for load balancer health checks) or startPeriod (for container health checks) to account for the start-up synchronization time.
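A hedged AWS CLI sketch of wiring these together (cluster name, service name, and the numbers are placeholders; minimumHealthyPercent=100 with maximumPercent=200 keeps enough old tasks running until the new ones are healthy):

aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --deployment-configuration "maximumPercent=200,minimumHealthyPercent=100" \
  --health-check-grace-period-seconds 120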
We have Docker-based ECS services where once the process is up, it needs to synchronize application state before it is ready to start serving requests. This can take some time (a number of seconds after the process starts).When using ECS Services, changing the task definition version triggers a rolling replacement of the tasks (good), but it does it too quickly. Once a task reaches aRUNNINGstate, the next task is killed. ButRUNNINGjust means the process is started, it doesn't mean it's met all its own internal requirements to be ready to do work... in this case, not ready to serve requestsThis entire update process happens so quickly that in some cases, all the old tasks are killed before any of the new tasks have finished loading their state, and we end up with an outage.What is the best or correct way to ensure ECS Services doesn't terminate old/hot tasks until the new tasks are actually hot & fully online, and not simply that the container process is running?
Ensure ECS only kills old tasks when new ones are ready
Try:
FROM ubuntu:18.04
RUN \
  apt-get update \
  && apt-get install -y \
  git-core
I'm not sure, but the empty lines in the Dockerfile may have been the problem.
I am fairly new to docker and am trying to learn by writing my own images and, for now, reading Docker in action (ISBN: 1633430235)In both my own code and an example from the book (pg 146) I would like to install git via a dockerfile.My code:# set base image FROM ubuntu:18.04 # author MAINTAINER me ############## Begin installation ########################## # update and upgrade RUN apt-get update RUN apt-get upgrade # install git RUN apt-get install -y git ***rest of code omitted***Books code:#An example Dockerfile for installing Git on Ubuntu FROM Ubuntu:latest MAINTAINER "[email protected]" RUN apt-get install -y git ENTRYPOINT ["git"]However, in both I am getting an unable to locate package eror with a none-zeero code:Reading package lists... Building dependency tree... Reading state information... E: Unable to locate package git The command '/bin/sh -c apt-get install -y git' returned a non-zero code: 100So far I have tried.1) Making apt-get update and apt-get upgrade into a single command. When I do this it fails at this point.2) Installing apt-transport-https before installing git, as described here:apt-get update' returned a non-zero code: 100This will succeed in downloading but as soon as it gets to installing git again, I get the same error. This3) Following the tutorial onhttps://docs.docker.com/engine/reference/builder/#dockerfile-examples. Although this is different it still installs the x11 server, again this also fails at installation.4) Following another answer on stack overflow,Cannot install packages inside docker Ubuntu imageI have tried to install curl.Al these methods, with the exception of the first, let me update & upgrade only to fail once I try to install software. I also have no issues if I am updating, upgrading, installing from a terminal on the machine I am running docker from.Any advice as to how I can rectify this would be greatly appreciated.
Apt-get not working within ubuntu dockerfile
If it can help anyone: I think the problem is not coming from Docker (I tried updating my registry with no success) but from the systemd start command. So it's not a clean solution, but it is still cleaner than launching docker in a screen. I modified /lib/systemd/system/docker.service and changed the ExecStart line:
ExecStart=/usr/bin/docker --insecure-registry myregistry:5000 -d -H fd://
Then:
systemctl daemon-reload
systemctl restart docker
And that's it. It works. I can pull & push to my remote private registry.
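A slightly tidier variant, sketched under the assumption that your systemd supports drop-in overrides (it avoids editing the packaged unit file, which can be overwritten on upgrade); the ExecStart value is just the one from the answer above:

mkdir -p /etc/systemd/system/docker.service.d
cat >/etc/systemd/system/docker.service.d/insecure-registry.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/docker --insecure-registry myregistry:5000 -d -H fd://
EOF
systemctl daemon-reload
systemctl restart docker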
I have an issue with docker 1.5.So, I run a private registry at myregistry:5000. I can push & pull from an other location (debian 7 & docker 1.4) with :DOCKER_OPTS="--insecure-registry myregistry:5000"in /etc/default/dockerNow, I have a new system with docker 1.5 and debian 8, it's not working anymore. I tried all possibilities like,--insecure-registry=myregistry:5000or--insecure-registry http://myregistry:5000Any clue?(Note : It works well if I stop docker and launchdocker -d --insecure-registry myregistry:5000)
docker registry with --insecure-registry and docker 1.5
I checked on Windows 10 1809 running non-HyperV (process isolation) containers; I'm pretty sure it's the same for Windows Server containers. The data seems to be kept in: C:\ProgramData\Docker\windowsfilter\{ContainerId} There's a direct reference to the folder in docker inspect {Id} under GraphDriver\Data\dir. The folder contains the file sandbox.vhdx, which appears to be the "writable layer" of each container. I wasn't able to open it and view the filesystem, but if I write some data inside the container I can force the file to grow: docker exec powershell get-childitem c:\ -recurse `> c:\windows\temp\test.txt The layer persists when the container is stopped/restarted, and the folder is removed when the container is rmed. While researching I saw an open PR in moby to improve cleanup of this folder.
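A hedged sketch for locating a container's writable-layer folder from the CLI, based on the GraphDriver path mentioned above (<container-id> is a placeholder; quoting may need adjusting between PowerShell and bash):

docker inspect --format "{{ .GraphDriver.Data.dir }}" <container-id>
# then check the size of sandbox.vhdx inside that directory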
I'm curious if there's a way to see how much disk space a running Windows container is using in addition to the layers that are part of the container's image. Basically, how much the container "grew" since it was created.In Linux (Or Linux containers running in a HyperV), this would bedocker ps -s, however that command isn't implemented on Windows containers. I also trieddocker system df -vbut also, not implemented. Perhaps there's a hacky way by looking at a certain directly on disk or something?
Is there a way to see container disk usage on Docker for Windows?
The ports option in your docker-compose publishes that port on the host, i.e. externally. If you remove the ports option from the tomcat service, nginx can still reach it, because port 8080 remains exposed internally on the docker network; it just stops being reachable from the internet.
I´m working with a Docker compose file to create a nginx and tomcat container. nginx will be used as a reverse proxy so then it access the tomcat. I was able to do this successfully with 2 separate contaienrs using azure container instaces. you hit the URL of the nginx and then you got redirected to tomcat securley as nginx has HTTPS and the certificates. Till that everything ok. But the issue is that If you access individualy the tomcat Ip by http yo can also access the container and that is not secure. I just want that tomcat be access by nginx. How can this be achieved? Im trying to use a docker compose now which I know containers will be using same network and can connect to each other, but how can I achieve that the nginx connects to the Tomcat and that tomcat only can be access by the redirection of the nginx over https and that tomcat is unable to be access by http individually . This is the YML I have, But i don´t know how to manage ports to achieve what I want. Is this possible?version: '3.1' services: tomcat: build: context: ./tomcat dockerfile: Dockerfile container_name: tomcat8 image: tomcat:search-api ports: - "8080:8080" nginx: build: context: ./nginx dockerfile: Dockerfile container_name: nginx image: nginx:searchapi depends_on: - tomcat ports: - "80:80" - "443:443"
docker compose nginx and tomcat, expose only nginx on internet
By default, root only has access from localhost (127.0.0.1 and ::1). You need to specifically allow access from 192.168.99.1, or from anywhere using '%', in the user setup: see http://dev.mysql.com/doc/refman/5.5/en/default-privileges.html
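A hedged sketch of granting that access from inside the container's mysql client (the host IP and password are taken from the question; use '%' instead of the IP to allow any host):

mysql -u root -p -e "CREATE USER 'root'@'192.168.99.1' IDENTIFIED BY 'samplerootpass'; \
  GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.99.1' WITH GRANT OPTION; \
  FLUSH PRIVILEGES;"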
I'm usinghttps://github.com/sameersbn/docker-mysqlto run a mysql container usingdocker-machinein OSX with virtualbox.I created a new machinedocker-machine create --driver virtualbox mytestThe IP isdocker-machine ip mytest 192.168.99.103I run the container like this:docker run -p 3306:3306 --name mysql -d \ -v /opt/mysql/data:/var/lib/mysql \ -e 'DB_USER=sampleuser' -e 'DB_PASS=samplepass' -e 'DB_NAME=sampledb' -e 'DB_REMOTE_ROOT_NAME=root' -e 'DB_REMOTE_ROOT_PASS=samplerootpass' \ sameersbn/mysql:latestNow, when I try to connect to the mysql in the container from my hostmachine I can connect using usersampleuserbut not as userroot.▶ mysql -u root -p -h 192.168.99.103 Enter password: ERROR 1045 (28000): Access denied for user 'root'@'192.168.99.1' (using password: YES)192.168.99.1is my local laptops ip address▶ ifconfig | grep "192" inet 192.168.99.1 netmask 0xffffff00 broadcast 192.168.99.255
How to connect to mysql running in container from host machine
I've been working for a while with Bamboo and remote agents on Docker, and I have tried to achieve the same thing. To answer your question simply: no, this is not viable in my opinion. What you are trying to do is basically what Atlassian calls elastic agents, but that only works for AWS for now. In the current situation there is no way to spawn new agents when a build is queued. What you can do, however, is set up a first stage for each of your plans that starts the Docker container needed to perform the second stage. Your next stage will need Bamboo dependencies set so that only the Docker container you spawned can pick it up. While this would work, here are the flaws you should expect: Next stages will be run inside a Docker container, and killing the container itself at the end of your build will lead to a Bamboo build failure. You can do it in a last stage run from an agent on the host, but if your build fails for some reason you will never get to that stage. Build concurrency and error checking will be far more difficult to handle. What we ended up doing where I work was just to pay for a license allowing more agents and run each of them in its own Docker container for more agent isolation. You can expect to lose more money wasting your time setting up such a system than by just paying for more agents. If your problem is less about money and more about having to deal with lots of different configurations, you can consider running a Docker container at the beginning of a stage and forwarding all the commands to it using docker exec. This will still not be easy to set up, though.
Closed. This question needs to be morefocused. It is not currently accepting answers.Want to improve this question?Update the question so it focuses on one problem only byediting this post.Closed5 years ago.Improve this questionI'm trying to see if it would be viable to automatically spin up Bamboo containers for a CI build enviromentIdeally I want any number of random containers to be able to spin up automatically and destroy themselves for a build without having to do any tinkering on the remote server with docker compose.We have tons of different projects with different mishmashes of dependencies. So when a dev runs a build, my goal is that a container specific to that build should come up, add itself to the list of viable remote agents, run that build, and than destroy itself.Has anyone tried anything similar or have any advice to see if this is viable?Thanks
Running Remote Bamboo Agents on Demand Using Docker [closed]
swarm is an orchestrator, just like kubernetes. docker service deploys services to swarm just as you deploy your services to kubernetes using kubectl. swarm is essentially a built-in, primitive orchestrator. One possible case for replicas is running a proxy that directs requests to the proper containers. You could expose multiple machines and have one take the place of another in case one fails, or any other high-availability case you can think of. Your question could be rephrased as "What's the difference between running a single container and running containers in a cluster?", which would be another question altogether, but that rephrasing might help illustrate what docker service does.
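A hedged sketch of the replicated web-app example on a single-node swarm (the service name and image are arbitrary):

docker swarm init                       # turn this engine into a one-node swarm
docker service create --name web --replicas 5 -p 80:80 nginx
docker service ls                       # shows 5/5 once the replicas have converged
docker service scale web=10             # scale without managing individual containers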
Thisquestion illustrates the theoretical differences betweendocker runanddocker service.What I don't understand is when would one need to use the exact same container replicated multiple times (as per theDocker documentation example)? There, they run the same web app replicated 5 times.Is deployment on Kubernetes (for example) a potential use case, where the developer does not want to centralize the app on one host, in order to make it more resilient, hence why 5 replicas are created?To understand, can someone please please with an example use case, where thedocker serviceis useful?
Why use docker service?
SOLUTION. First, replace the docker tag and docker push commands with: heroku container:push web -a <app-name> That's when I discovered that the heroku registry connection was not set up. Command to configure it: heroku container:login This command only worked for me on Windows in the default terminal (it does not work in cmder with bash). Alternatively, use the command: docker login --username=_ --password=$(heroku auth:token) registry.heroku.com Now, just carry out the following commands. Push: heroku container:push web -a <app-name> // example: heroku container:push web -a sample-web-carlos Release: heroku container:release web -a <app-name> // example: heroku container:release web -a sample-web-carlos Read more: https://github.com/heroku/heroku-container-registry/issues/45 https://devcenter.heroku.com/articles/container-registry-and-runtime Thanks to Mohsin Mehmood for your help!
ContextI'm trying to deploy aaspnet coresample-app onHerokuwith docker but is not working.repo:https://github.com/mykeels/sample-web-apiguide:https://blog.devcenter.co/deploy-asp-net-core-2-0-apps-on-heroku-eea8efd918b6EnvorimentFramework.NET Core 2.1.201SO:W10 Build 17134.1Docker:Docker for Windows Version 18.03.1-ce-win65(17513)Steps I diddotnet publishdocker buildHeroku LoginTag and PushDocker FileApp on HerokuNot runningI Also tried this:Question: What is wrong ?
.Net Core on Heroku with Docker
Sadly you can't. Each Cloud Build step runs in its own sandbox, and only the /workspace directory is mounted between them. Environment variables, installed binaries and so on do not persist from one step's container to the next. You have to use the shell-script workaround each time :( The easiest way is to keep a file in your /workspace directory (for example an env.var file):
# load the environment variables
source /workspace/env.var
# add a variable
echo "NEW=Variable" >> /workspace/env.var
For this, Cloud Build is boring...
I want my Cloud Build to push an image to a registry with an incremented tag. So, when the trigger arrives from GitHub, build the image, and if the latest tag was1.10, tag the new one1.11. Similarly, the1.11value will serve in multiple other steps in the build.Reading the registry and incrementing the tag is easy (in a bash Cloud Build step), but Cloud Build has no way to pass parameters. (Substitutions come from outside the Cloud Build process, for example from the Git tags, and are not generated inside the process.)This StackOverflow questionandthis articlesay that Cloud Build steps can communicate by writing files to the workspace directory.That is clumsy. But worse, this requires using shell steps exclusively, not the native docker-building steps, nor the nativeimagecommand.How can I do this?
How can Cloud Build take dynamic parameters to increment a registry tag?
Yes indeed.JIBdoesn't needDockerfileordockerd.Sharing an example below, you can just copy it intopluginssection of yourpom.xml com.google.cloud.tools jib-maven-plugin 0.9.7 true gcr.io/distroless/java gcr.io/my-gcp-project/${project.artifactId}:${project.version} gcr -Xms256m -Xmx512m -Xdebug -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap learnmake.microservices.RunApplication 8080 OCI true for more detailed example, seelearnmake-microservices
I have a Spring boot Application and using spotify plugin to Dockerize my application.So, I will have a Dockerfile like the below one.FROM jdk1.8:latest RUN mkdir -p /opt/servie COPY target/service.war /opt/service ENV JAVA_OPTS="" \ JAVA_ARGS="" CMD java ${JAVA_OPTS} -jar /opt/service/service.war ${JAVA_ARGS}I came across JIB and it looks really cool. But, struggling to get it working.I added the pom entry below. com.google.cloud.tools jib-maven-plugin 0.9.6 jdk1.8:latest docker.hub.com/test/service mvn compile jib:buildI see the following.[INFO] Building dependencies layer... [INFO] Building classes layer... [INFO] Building resources layer...When i run the docker image, it says, Jar file does not exist. I have a multi module maven project and would like to dockerize multiple module on running mvn compile jib:build from the parent pom. Any help on this?
Dockerizing multi module Spring Boot application using JIB plugin
The issue was that the server was listening on 127.0.0.1 when it should have been listening on 0.0.0.0. I changed the CMD line in the Dockerfile from CMD ["python", "manage.py", "runserver", "8000"] to CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"] and it now works.
I'm attempting to access my django app running within Docker on my windows machine. I'm using docker-machine. I've been taking a crack at this for hours now.Here's my Dockerfile for my django app:FROM python:3.4-slim RUN apt-get update && apt-get install -y \ gcc \ gettext \ vim \ curl \ postgresql-client libpq-dev \ --no-install-recommends && rm -rf /var/lib/apt/lists/* EXPOSE 8000 WORKDIR /home/ # add app files from git repo ADD . server/ WORKDIR /home/server RUN pip install -r requirements.txt CMD ["python", "manage.py", "runserver", "8000"]So that should be exposing (at least in the container) port 8000.When I use the commanddocker-machine ip defaultI am given the IP 192.168.99.101. I go to that IP on port 8000 but get no response.I went into the VirtualBox to see if forwarding those ports would work. Here is the configuration:I also tried using127.0.0.1as the Host IP. I also tried disabling the windows firewall.Here's my command for starting the container:docker run --rm -it -p 8000:8000 I am at a loss on why I am unable to connect on that port. When I rundocker-machine lsthe url it gives me istcp://192.168.99.101:2376and when I go to that it gives me some kind of file back, so I know the docker-machine is active on that port.Also when I rundocker psI get this:CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 5c00cc28a2bd "python manage.py run" 7 minutes ago Up 7 minutes 0.0.0.0:8000->8000/tcp drunk_knuthAny help would be greatly appreciated.
Docker-machine Port Forwarding on Windows not working