Response | Instruction | Prompt
---|---|---|
Your docker-compose configuration is not correct. You forgot to link the services:
version: '3'
services:
db:
image: 'postgres'
ports:
- '5432:5432'
core:
build:
context: .
dockerfile: Dockerfile
command: python3 manage.py runserver 0.0.0.0:8000
ports:
- '8001:8000'
volumes:
- .:/code
depends_on:
- db
links: # <- here
- db | I have a Django project running in multiple Docker containers with the help of docker-compose. The source code is mounted from a directory on my local machine. Here's the compose configuration file:
version: '3'
services:
db:
image: 'postgres'
ports:
- '5432:5432'
core:
build:
context: .
dockerfile: Dockerfile
command: python3 manage.py runserver 0.0.0.0:8000
ports:
- '8001:8000'
volumes:
- .:/code
depends_on:
- db
Although the application starts as it should, I can't run migrations, because every time I do manage.py makemigrations ... I receive:
django.db.utils.OperationalError: could not translate host name "db" to address: nodename nor servname provided, or not known
Obviously I can open bash inside the core container and run makemigrations from there, but then the migration files are created inside the container, which is very inconvenient.
In my project's settings the database is configured as:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'postgres',
'USER': 'postgres',
'HOST': 'db',
'PORT': '5432',
}
}
As the docker postgres image is accessible at localhost:5432, I tried changing the database host in settings to:
'HOST': '0.0.0.0'
But then when firing up the containers with docker-compose up I'm receiving:
...
django.db.utils.OperationalError: could not connect to server:
Connection refused
Is the server running on host "0.0.0.0" and accepting
TCP/IP connections on port 5432?
...
How should I configure the database in settings.py so that Django can access it to create migrations? | Running Django migrations on dockerized project |
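A hedged follow-up to the answer above: because the question already bind-mounts the project into the container (.:/code), migrations can also be generated inside the running core service so the files land on the host through that mount. A minimal sketch (service names taken from the compose file above):
# generate and apply migrations inside the core container; files appear in ./ on the host via the .:/code volume
docker-compose exec core python3 manage.py makemigrations
docker-compose exec core python3 manage.py migrate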
The problem is that the stream doesn't really stop until the container is stopped; it is just paused, waiting for the next data to arrive. To illustrate this: when it hangs on the first container, if you do docker stop on that container, you'll get a StopIteration exception and your for loop will move on to the next container's logs.
You can tell .logs() not to follow the logs by using follow = False. Curiously, the docs say the default value is False, but that doesn't seem to be the case, at least not for streaming.
I experienced the same problem you did, and this excerpt of code using follow = False does not hang on the first container's logs:
import docker
client = docker.from_env()
container_names = ['container1','container2','container3']
for container_name in container_names:
dkg = client.containers.get(container_name).logs(stream = True, follow = False)
try:
while True:
line = next(dkg).decode("utf-8")
print(line)
except StopIteration:
print(f'log stream ended for {container_name}') | I am using docker-py to read container logs as a stream, by setting the stream flag to True as indicated in the docs. Basically, I am iterating through all my containers, reading their container logs in as a generator, and writing them out to a file like the following:
for service in service_names:
dkg = self.container.logs(service, stream=True)
with open(path, 'wb') as output_file:
try:
while True:
line = next(dkg).decode("utf-8")
print('line is: ' + str(line))
if not line or "\n" not in line: # none of these work
print('Breaking...')
break
output_file.write(str(line.strip()))
except Exception as exc: # nor this
print('an exception occurred: ' + str(exc))
However, it only reads the first service and hangs at the end of the file. It doesn't break out of the loop nor raise an exception (e.g. a StopIteration exception). According to the docs, if stream=True it should return a generator. I printed out the generator type and it shows up as a docker.types.daemon.CancellableStream, so I don't think it follows the traditional Python generator behaviour of raising an exception when we hit the end of the container log generator and call next().
As you can see, I've tried checking whether eol is falsy or contains a newline, and even whether any type of exception is caught, but no luck.
Is there another way I can determine whether it has hit the end of the stream for a service, break out of the while loop, and continue writing the next service?
The reason why I wanted to use a stream is that the large amount of data was causing my system to run low on memory, so I prefer to use a generator. | docker-py reading container logs as a generator hangs
As kleuf mentioned in the comments, the solution to the stuck docker container in his case was the following:
When I installed Kubernetes on Ubuntu 16.04 I followed a guide that said to install "docker.io". In this article it said to remove "docker.io" and rather use a "docker-ce" or "docker-ee" installation.
sudo apt-get remove docker docker-engine docker-ce docker.io
sudo apt-get remove docker docker-engine docker.io -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce -y
sudo service docker restart
BOOM, I did it, disabled the swapoff function, and my troubles are no more.
I hope this helps people that are also stuck with this. | I was one that was having trouble with the above mentioned issue, where after a "kubectl delete -f" my container would be stuck on "Terminating".
I could not see anything in the Docker logs to help me narrow it down.
After a Docker restart the pod would be gone and I could continue as usual, but this is not the way to live your life.
I Googled for hours and finally got something on a random post somewhere.
Solution:
When I installed Kubernetes on Ubuntu 16.04 I followed a guide that said to install "docker.io". In this article it said to remove "docker.io" and rather use a "docker-ce" or "docker-ee" installation.
BOOM, I did it, disabled the swapoff function, and my troubles are no more.
I hope this helps people that are also stuck with this. Cheers | A Solution to Kubernetes pods stuck on Terminating
It is not related to docker; it can be enabled by the following configuration:
echo "%_binary_filedigest_algorithm 8" >> $HOME/.rpmmacros
The reason it is OK on standalone RHEL 6.4 is that it has the redhat-rpm-config package:
bash-4.1# yum install redhat-rpm-config
In that package, this configuration exists in /usr/lib/rpm/redhat/macros:
bash-4.1# grep digest /usr/lib/rpm/redhat/macros
%_source_filedigest_algorithm 8
%_binary_filedigest_algorithm 8
You can use the command rpmbuild --showrc to check all the configuration. | In a standalone RHEL 6.4 rpm build environment, the rpm packages are generated with a SHA-256 checksum, which can be seen with the command rpm -qp --dump xxx.rpm:
[user@redhat64 abc]$ rpm -qp --dump package/rpm/abc-1.0.1-1.x86_64.rpm
..
/opt/company/abc/abc/1.0.1-1/bin/start.sh 507 1398338016 d8820685b6446ee36a85cc1f7387d14537d6f8bf5ce4c5a4ccd2f70e9066c859 0100750 user abcc 0
..While if it is build indockerenvironment (still RHEL6.4) the checksum is md5[user@c1cbdf51d189 abc]$ rpm -qp --dump package/rpm/abc-1.0.1-1.x86_64.rpm
..
/opt/company/abc/abc/1.0.1-1/bin/start.sh 507 1401952578 f229759944ba77c3c8ba2982c55bbe70 0100750 user abcc 0
..If I checked the real file, the file is the same[user@c1cbdf51d189 1.0.1-1]$ sha256sum bin/start.sh
d8820685b6446ee36a85cc1f7387d14537d6f8bf5ce4c5a4ccd2f70e9066c859 bin/start.sh
[user@c1cbdf51d189 1.0.1-1]$ md5sum bin/start.sh
f229759944ba77c3c8ba2982c55bbe70 bin/start.sh
How do I configure rpmbuild so that the generated rpm file is SHA-256 based? | How to build the rpm package with SHA-256 checksum for files?
If I understand you correctly, you want to restore a custom format dump taken with 10.5 into a 10.3 database.
That won't be possible if the archive format has changed between 10.3 and 10.5.
As a workaround, you could use a "plain format" dump (option --format=plain) which does not have an "archive version". But any problems during restore are yours to deal with, since downgrading PostgreSQL isn't supported.
You should always use the same version for development and production, and you should always use the latest minor release (currently 10.13). Everything else is asking for trouble.
Back up as plain text like this (warning! the file will be huge, around 17x more than the regular custom format; my typical 90 MB is now 1.75 GB):
Copy the backup file into the postgres container: docker cp ~/path/to/dump/in-host-system/2020-07-08-1.dump :/backups
Go to the bash of your postgres container: docker exec -it bash
Inside the bash of the postgres container: psql -U username -d dbname < backups/2020-07-08-1.dump
That will work | I use tableplus for my general admin. Currently I'm using the docker postgres image at 10.3 for both production and localhost development. Because tableplus upgraded their postgres 10 drivers to 10.5, I can no longer use pg_restore to restore backup files which are dumped using 10.5 --format=custom. See the image for how I back up using tableplus, and how it uses the 10.5 driver. The error message I get is pg_restore: [archiver] unsupported version (1.14) in file header. What I tried: in localhost I tried to simply change the tag for postgres in my dockerfile from 10.3 to 10.5 and it didn't work.
original dockerfile
FROM postgres:10.3
COPY ./maintenance /usr/local/bin/maintenance
RUN chmod +x /usr/local/bin/maintenance/*
RUN mv /usr/local/bin/maintenance/* /usr/local/bin \
&& rmdir /usr/local/bin/maintenancetoFROM postgres:10.5
COPY ./maintenance /usr/local/bin/maintenance
RUN chmod +x /usr/local/bin/maintenance/*
RUN mv /usr/local/bin/maintenance/* /usr/local/bin \
&& rmdir /usr/local/bin/maintenance
My host system for development is macOS. I have many existing databases and schemas in my development docker postgres, so I am currently stumped as to how to upgrade safely without destroying old data. Can you advise?
Also, I think a long-term fix is to figure out how to keep the data files outside of docker (i.e. inside my host system) so that every time I want to upgrade my docker image for postgres I can do so safely without fear. I would like to ask how to switch to such a setup as well. | How to upgrade the pg_restore in docker postgres image 10.3 to 10.5
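For the plain-format workaround described in the answer above, a minimal sketch of the dump and restore commands (user, database and file names are placeholders, not taken from the question):
# take a plain SQL dump with the newer client (no custom-format archive header)
pg_dump -U postgres -d mydb --format=plain -f mydb.sql
# restore it into the 10.3 server with psql instead of pg_restore
psql -U postgres -d mydb -f mydb.sql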
This usually means that the ports are not open, or that there is a problem with the hostname! You haven't exposed the ports to the outside world; maybe add this line:
services:
mysql:
image: mariadb:${MARIADB_VERSION:-latest}
container_name: mysql
volumes:
- ./mysql:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-password}
- MYSQL_USER=${MYSQL_USER:-root}
- MYSQL_PASSWORD=${MYSQL_PASSWORD:-password}
- MYSQL_DATABASE=${MYSQL_DATABASE:-db}
restart: always
ports:
- "3306:3306" | I wanna connect my python script to MySQL in docker.
Here is my docker-compose file:version: '3.7'
services:
mysql:
image: mariadb:${MARIADB_VERSION:-latest}
container_name: mysql
volumes:
- ./mysql:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-password}
- MYSQL_USER=${MYSQL_USER:-root}
- MYSQL_PASSWORD=${MYSQL_PASSWORD:-password}
- MYSQL_DATABASE=${MYSQL_DATABASE:-db}
restart: always
script:
container_name: script
build: ./test_db
command: bash -c "python3 testDb.py"
environment:
- DB_NAME=db
- HOST=mysql
- DB_USER=root
- DB_PASSWORD=password
volumes:
- ./test_db:/var/www/html/script
depends_on:
- mysqland here is my Python script file:import pymysql
class TestDb:
def run(self):
conn = pymysql.connect(host='mysql', port=3306, database='db', user='root', password='password')
print(conn)
if __name__ == "__main__":
TestDb().run()and here is my error:script | Traceback (most recent call last):
script | File "/test_db/testDb.py", line 10, in
script | TestDb().run()
script | File "/test_db/testDb.py", line 5, in run
script | conn = pymysql.connect(host='mysql', port=3306, database='sta_db', user='root', password='Alphashadow1381')
script | File "/usr/local/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect
script | return Connection(*args, **kwargs)
script | File "/usr/local/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__
script | self.connect()
script | File "/usr/local/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect
script | raise exc
script | pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'mysql' ([Errno 111] Connection refused)")
How can I connect my Python script to MySQL in docker? | Connect python script to mysql in docker
What I often do is, in development, mount the source code of the application to its usual place in a volume. Then, I set the command (or entrypoint) of the container to a script that launches it in "development mode" (for example, by using nodemon for a node.js application, setting RAILS_ENV=development in Rails, and so on).
Volumes do work on Mac OS X (and I assume Windows) under boot2docker or docker-machine, with the caveat that you need to be working somewhere beneath your home directory.
For a concrete example, here's a repository that I set this up in. The ingredients: script/dev is my "dev-mode" entrypoint. It launches the main application under nodemon. When I launch the container, I mount the source directory into the container as a volume and set script/dev as the command. (I'm using docker-compose here to launch and link in an upstream dependency, so I can do everything in one command.)
With those two things in place, I can run docker-compose up, make a source change in whatever editor I choose on my host, save the file, and the service within the container auto-reloads to bring my changes into effect. Presto! | I'm running OSX and Docker with the help of boot2docker. From my understanding, boot2docker is a lightweight linux distro that is running the docker containers. I have some Ubuntu containers that I use to run and test projects that should specifically run well on Linux. However every small code change from my host text editor of choice requires me to re-build the image and re-run the container, run the app and confirm that the change I made didn't break something. Is there a way for me to open a Docker container FS folder in a text editor from my host machine? (a.k.a. remote edit?) Have any of you guys done this? Any ideas will be awesome. I'm thinking about setting up SFTP or SSHD on the Docker container, but I would want your opinion. | Editing Docker container FS using Atom/Sublime-Text?
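A minimal sketch of the development pattern described in the answer above (image, port and the nodemon command are illustrative assumptions, not taken from the linked repository):
# mount the source from the host and run the app under nodemon so host-side edits reload inside the container
docker run --rm -it -v "$PWD":/usr/src/app -w /usr/src/app -p 3000:3000 node:latest npx nodemon server.js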
Do not use 0.0.0.0 to bind a socket on your host. It can be a security issue: it declares that all IPs are able to connect to mongodb from any host.
Better to edit /etc/mongod.conf and add the docker interface ip, like:
# network interfaces
net:
port: 27017
bindIp: 127.0.0.1,172.17.0.1Then, in thedocker run, you can add a host:docker run --add-host mongohost:172.17.0.1 Now, in your container, you can querymongohoston port27017.This is the clean solution.You can also useextra_hostsif you usedocker-compose.But,Docker Compose is primarily designed for local development and testing purposes.According to the officialDocker documentation, if you insist to use it, use V2 and a specialproduction.yaml | I have a mongo on my host machine, and an ubuntu container which is also running on my machine. I want that container to connect to mongo.
I set as host url, my host ip from docker network :172.17.0.1and in the/etc/mongod.conffile I set the bindIp to0.0.0.0from the container, I can ping the host,but the mongo service is not accessible, I get that error :Connecting to: mongodb://172.17.0.1:27017/directConnection=true&appName=mongosh+1.5.0
MongoServerSelectionError: connection timed outMore over, I can connect from host to the mongo service with that command :mongosh mongodb://172.17.0.1:27017Do you know why I can't access mongo service from my container ? | can't connect mongodb on host from docker container |
I encountered the same problem and I finally solved it. The problem is when you create your peer node right now (as of July 28, 2022), the version defaults to2.3.0-v0.0.2(you can find thiskubectl hlf peer create --helpand see the description next to the--versionflag). This peer version happens to be incompatible when deployingccaas- chaincode as a service. So, the solution is to manually override the version using the--versionflag while creating the peer node. Peer version2.4.1-v0.0.4solved this for me.Please see the below command while creating apeernode fororg1.kubectl hlf peer create --statedb=couchdb --storage-class=standard --enroll-id=org1-peer --mspid=Org1MSP --enroll-pw=peerpw --capacity=5Gi --name=org1-peer0 --ca-name=org1-ca.fabric --version=2.4.1-v0.0.4 --namespace=fabricNote the above steps apply only when you are using the peer image fromquay.io/kfsoftware/fabric-peerwhich is the default image. If you want to use other images use the--imagetag. Repeat the same steps while creating every peer node. This should solve your problem. Hope this helps! | I am creating a hyperledger fabric network using the following hyperledger fabric operator for kuberneteshttps://github.com/hyperledger-labs/hlf-operatorI have my cluster configured in aws eks and it is currently running 3 nodes. I am following the documentation and so far all the steps of the implementation are working without problem, but when installing my chaincode it shows me the following message:'InstallChaincode': could not build chaincode: docker build failed: docker build is disabledValidate and change docker permissions but I don't understand what I am missing so that it can work and install my chaincode.I think it may be a permissions error in the eks, I am also validating the permissions | docker build is disabled error when installing my chaincode on hyperledger fabric |
If you have an ENTRYPOINT in your Dockerfile, then the Command gets appended as its arguments:
Specify a command to execute in the container. If you specify an Entrypoint, then Command is added as an argument to Entrypoint. For more information, see CMD in the Docker documentation.
Thus your Command mkdir -p /tmp ... will be used as an argument to python3 -m flask run --host=0.0.0.0, resulting in an error. This could explain the issue you experience.
I tried to recreate the issue initially using your Command structure but had some problems. What worked was using Command in the following way:
"Command": "/bin/bash -c \"mkdir -p /tmp && axel https://example.com/models.zip -o /tmp/models.zip\""
My Dockerfile did not have an Entrypoint. Thus, to run your python you could maybe do the following (assuming everything else is correct):
"Command": "/bin/bash -c \"mkdir -p /tmp && axel https://example.com/models.zip -o /tmp/models.zip && python3 -m flask run --host=0.0.0.0\"" | I have a Dockerfile and a Dockerfile.aws.json:
{
"AWSEBDockerrunVersion": "1",
"Ports": [{
"ContainerPort": "5000",
"HostPort": "5000"
}],
"Volumes": [{
"HostDirectory": "/tmp/download/models",
"ContainerDirectory": "/models"
}],
"Logging": "/var/log/nginx",
"Command": "mkdir -p /tmp && axel https://example.com/models.zip -o /tmp/models.zip"
}But when I deploy, it doesn't run theCommandthat I specified. What am I doing wrong? | How do I get a Command to run from a Dockerfile.aws.json on Elastic Beanstalk? |
You can add an argument in the Dockerfile:
ARG path
In the Azure DevOps Docker task, add an argument:
- task: Docker@2
inputs:
command: build
arguments: --build-arg path=$(Build.Repository.LocalPath)Now the Dockerfile know the variable value and you can use it, for example:FROM ubuntu:latest
ARG path
RUN echo $path
Results:
Step 3/13 : RUN echo $path
---> Running in 213dsa3dacv
/home/vsts/work/1/sBut if you will try to copy the appliaction in this way:FROM microsoft/aspnet:latest
ARG path
COPY $path/README.md /inetpub/wwwrootYou will get an error:COPY faild: CreateFile \?\C:\ProgramData\docker\tmp\docker-builder437597591\_work\1\s\README.md: The system cannot find the path specified.It's because the Docker build the image within a temporary folder and he copy the sources to there, but he doesn't copy the agent folders (_work/1/s) so the best way it's just put a relative path where the Dockerfile exist, for example (if the Dockerfile exist with the README.md):FROM microsoft/aspnet:latest
COPY README.md /inetpub/wwwroot | I want to get the data from the variableBuild.Repository.LocalPathand use it in my Dockerfile, but it shows me and error.This is my dockerfile:FROM microsoft/aspnet:latest
COPY "/${Build.Repository.LocalPath}/NH.Services.WebApi/bin/Release/Publish/" /inetpub/wwwrootI get this error:Step 2/9 : COPY "/${Build.Repository.LocalPath}/NH.Services.WebApi/bin/Release/Publish/" /inetpub/wwwroot
failed to process "\"/${Build.Repository.LocalPath}/NH.Services.WebApi/bin/Release/Publish/\"": missing ':' in substitution
##[error]C:\Program Files\Docker\docker.exe failed with return code: 1I have try a lot of ways, putting this line:COPY "/${Build.Repository.LocalPath}/NH.Services.WebApi/bin/Release/Publish/" /inetpub/wwwroot | Get the data of Build.Repository.LocalPath and used it in my DockerFile |
Raspbian stable is listed at https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.13.0#time64_requirements with an outdated version of libseccomp (quoting: ... [requiring] host libseccomp to be version 2.4.2 or greater ...). Note that for Raspbian, libseccomp is known as libseccomp2. In this case: either update libseccomp and Docker, or use an older image.
The issue with a non-functioning clock seems to apply to all containers based on Alpine Linux built in the last couple of weeks. In my own experience this includes PostgreSQL and Python. Both of these fail: PostgreSQL experiences a segmentation fault, Python fails to initialize its clock. Given that Redis is database-like, I would not be surprised if the lack of a working clock breaks it as well.
(This issue seems to be resolved) The arm-v7 images of Alpine Linux seem to have been built with a non-functioning time component, see https://gitlab.alpinelinux.org/alpine/aports/-/issues/12346. This issue should be resolved by using either an older image (e.g. redis:6.0.6-alpine3.12 seems to be 6 months old), waiting for a fixed build to appear, or using a build that does not use alpine. | I'm running redis in a docker container on a RasPi 4 (redis:6-alpine). It is used by Nextcloud in another container (via docker-compose).
For a few days redis has been using 100% CPU time. I now saw that the date/time in the container is corrupt. Redis seems to start normally, but the log says:
pi@tsht2:/data/nextcloud $ docker logs nextcloud_redis_1
1:C 03 May 2071 14:21:28.000 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 03 May 2071 14:21:28.000 # Redis version=6.0.10, bits=32, commit=00000000, modified=0, pid=1, just started
1:C 03 May 2071 14:21:28.000 # Configuration loaded
1:M 03 May 2071 14:18:00.000 # Warning: 32 bit instance detected but no memory limit set. Setting 3 GB maxmemory limit with 'noeviction' policy now.
1:M 03 May 2071 14:20:40.000 * Running mode=standalone, port=6379.
1:M 03 May 2071 14:21:28.000 # Server initialized
1:M 03 May 2071 14:21:20.000 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 03 May 2071 14:21:28.000 * Ready to accept connectionsWatch the date!When I look at the date in the container, I getpi@tsht2:/data/nextcloud $ docker exec -it nextcloud_redis_1 date
Sun Jan 0 00:100:4174038 1900
I tried to stop the container, remove the image and restart everything, but I have the same problem. What happens there?
Does the 100% CPU usage have something to do with the date problem? BTW: the other containers show the correct date/time. | corrupt date with redis:6-alpine on RasPi
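A hedged way to check the condition described in the answer above on the Raspbian host (the 2.4.2 threshold is the one quoted from the Alpine release notes; the package name is assumed to be libseccomp2):
# show the installed host libseccomp version; recent Alpine-based images need >= 2.4.2
dpkg -s libseccomp2 | grep '^Version'
docker --version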
You want to pass the -e to the docker command. So:
docker run -P -d --name spring -e "SITENAME=DOCKERlocal" spring-app
As you are doing it, you are passing it to the image entrypoint. | Here is my Dockerfile:
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y default-jdk
ADD sample-docker-1.0-SNAPSHOT.jar app.jar
EXPOSE 8080
ENV SITENAME="ASDASD"
ENTRYPOINT ["java", "-jar", "app.jar"]and here is a bit of Java code that i use:@Value("${SITENAME:testsite}")
private String siteName;with this setup everything works good and environment value of SITENAME is indeed "ASDASD". But when i try to set that variable with:docker run -P -d --name spring spring-app -e SITENAME='DOCKERlocal'it doesn't work (value is the one from Dockerfile). What am i missing here ? | Environment variables with docker run -e |
It is possible to solve this with xhost + but it would then be wise to do xhost - after you no longer use this container. In fact the more restrictive xhost +local:docker is enough. | I used to run programs with commands like this:
docker run -ti \
--name wireshark \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $HOME/.Xauthority:/root/.Xauthority \
--privileged \
-d ubuntu:17.10 /bin/bash
then I could run wireshark using my Ubuntu system's display, like this page's example: Running GUI App with docker.
Now it is not working. When I run wireshark I get this error:
root@5ad127a8333a:/# wireshark
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
No protocol specified
QXcbConnection: Could not connect to display :0
Aborted (core dumped) | Run GUI programs in Docker in Ubuntu |
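A minimal sketch of the sequence the answer above suggests (the container itself is started with the docker run command from the question):
xhost +local:docker    # allow local Docker containers to use the host X server
# ... start the container with the docker run command from the question ...
xhost -local:docker    # revoke the access once the container is no longer needed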
I rebuild the image or restart the container, where does my database data go? Is it gone?
No, the data is not gone. The only time data is removed is if you remove the container (docker rm). The only time this isn't true is if you mount a volume to the container to expose the data volume: docker run -td -p 5432:5432 -v /mydata/volume:/var/lib/postgresql/data postgres:9.5.2
I want to use my database in my Flask (Docker) application, what do I need to put in my config? (DATABASE_URI, NAME etc..)
This can be a subject of debate, but I would use an environment variable that you set when you start the container: docker run -td -p 80:5000 -e POSTGRES_URL=172.12.20.1 mycontainer/flask:latest
In your config you would go os.getenv('POSTGRES_URL', 'localhost'). This allows you to default to localhost if the container is linked; otherwise you can point it to another container running on another machine. This is better because it allows greater flexibility in your deployment.
I want to back up my database, or load data into it? Can I just connect to it?
Yes, just like anything else you can connect to Postgres on IP:PORT using the credentials you specified at container runtime. | I just started using Docker and created an image and running container with Python3, Flask, UWSGI and nginx. Now I want to use a postgresql database in Flask. I read the following page and linking containers seems logical to me. (https://hub.docker.com/_/postgres/) I still have some questions, or maybe the principle of Docker isn't just clear enough for me. But if I create a postgresql image and running container, and link this to my Flask application, what happens if: I rebuild the image or restart the container, where does my database data go? Is it gone? I want to use my database in my Flask (Docker) application, what do I need to put in my config? (DATABASE_URI, NAME etc..) I want to back up my database, or load data into it? Can I just connect to it? As you may have noticed I am clearly a beginner at working with Docker, maybe I'm just misunderstanding the principle. Really appreciate it if someone could point me in the right direction! | Using a PostgreSQL database with Docker and Flask, how does it work?
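For the backup part of the answer above, a hedged sketch of connecting from the host to the published port (credentials and database name are placeholders):
# dump the database running in the container through the published 5432 port
pg_dump -h localhost -p 5432 -U postgres postgres > backup.sql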
A docker host refers to the server in the client server pair. It's the instance of the dockerd engine, and where containers are run.A docker node refers to a member in a swarm mode cluster. Every swarm node must be a docker host, but not every docker host is necessarily a member of a swarm cluster. | I know that questions similar to this one is already asked on SO. But, it doesn't make clarification on what I am looking for.I am trying to get my hands dirty on docker. I have encountered the terminologydocker hostanddocker node. I am referring this article:-https://docs.docker.com/get-started/part3/#docker-composeyml.I know that docker host is the one, which runs one or more containers and in which docker engine is installed. Moreover, docker host can either be a physical machine or it can be a virtual machine.But I am confused about docker node. How it's different from docker host ? When a docker host becomes docker node ?Thanks for your patience. | What is the difference between docker host and node? |
The short answer appears to be no. It does not directly support compose.
I got around this by using the script{} blocks in the Jenkinsfile to manually call docker-compose up, which worked fine. | I read that certain docker plugins, such as docker-slave-plugin, show there is support for compose, but I do not understand how to implement it. Has anyone used docker-compose in the Jenkins pipeline, and how? | Does Jenkins support docker-compose
You have to map the port to the container.
Example for port 443: docker run -d -p 443:443 $imagename$
You also have to make sure that your Windows firewall is not blocking that port. Maybe you have to create a new rule.
BR
Hannes | I have installed Docker for Windows, on which I have a running Nexus Repository Manager container. Now I want to make my nexus container accessible from other PCs located in the internal network. How do I do that? | Docker for Windows - access container in local network
Containers share the kernel with the host system. That's why you see Ubuntu in the output: it is your host system's kernel. These containers only have 32-bit packages installed and they will work fine with your 64-bit kernel. | I was trying to run 32-bit CentOS in a container:
sudo docker run -it i386/centos:6
Inside the container I ran the command uname -a in order to confirm it is 32-bit. I got this output:
4.10.0-28-generic #32~16.04.2-Ubuntu SMP Thu Jul 20 10:19:48 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
According to my understanding this is the 64-bit version and not the expected 32-bit one. What am I doing wrong while getting 32-bit CentOS? | Getting 32 bit Centos docker image
Change the ENTRYPOINT to the following:
ENTRYPOINT ["bash", "run.sh"]
It works for me. Read more about entrypoint args here: https://docs.docker.com/engine/reference/builder/#entrypoint | I'm trying to obtain the value of the first argument I pass to the Docker Entrypoint. I received an answer earlier on how to do this; here is the link: Referencing a dynamic argument in the Docker Entrypoint. So I set up an experiment to see if this works. Here's my Dockerfile:
FROM alpine:3.3
MAINTAINER[email protected]RUN apk add --update --no-cache --no-progress bash
COPY run.sh .
ENTRYPOINT /run.shAnd therun.shentrypoint:#!/bin/sh
echo The first argument is: $1I then build this:docker build -t test .And run the image:ole@MKI:~/docker-test$ docker run test one
The first argument is:I was expecting:ole@MKI:~/docker-test$ docker run test one
The first argument is: oneThoughts?TIA,
Ole | Referencing the first argument passed to the docker entrypoint? |
I was facing similar kind of issue on ubuntu 18 instance.
Following are the steps which worked for me:
mkdir -p /etc/systemd/system/docker.service.d/
touch /etc/systemd/system/docker.service.d/aws-credentials.conf
vi /etc/systemd/system/docker.service.d/aws-credentials.conf
content of file as follows:
[Service]
Environment="AWS_ACCESS_KEY_ID="
Environment="AWS_SECRET_ACCESS_KEY="sudo systemctl daemon-reloadsudo service docker restart | ProblemI'm trying to send docker logs to Aws Cloudwatch using myon premise serverbut it keeps failing on authentication. I've spent tons of hours searching through documentation and tutorials - and yet it does not work.ApproachI've installed AWS-cli and configured it, so ~/.aws/config is filled with my credentials. I've also set temporarily the session-variables just to be safe:EXPORT AWS_SECRET_ACCESS_KEY=...
EXPORT AWS_ACCESS_KEY_ID=...
EXPORT AWS_SESSION_TOKEN=...I've verified I can connect to AWS using:aws s3 lsThis is my run config:docker run --log-driver=awslogs --log-opt awslogs-group=docker-logs --log-opt awslogs-region=eu-west-1 --log-opt awslogs-create-group=true alpine echo 'hi cloudwatch'When tailing /var/log/daemon.log I see the following error:Feb 1 01:12:07 XXXXXX dockerd[7389]: time="2021-02-01T01:12:07.670370559+01:00" level=error msg="Failed to create log stream" errorCode=NoCredentialProviders logGroupName=docker-logs logStreamName=61c82801d22d3db4c68cdc5b3d1dcba51f97c77dea5ce33e262b712c0e2a23a7 message="no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" origError=""I've also tried giving docker run the session-variables as --ENV-parameters. Still doesn't work.
I've also tried insert the EXPORT'ed variables into /etc/default/docker. Still no luck.Docker has no problem uploading to cloudwatch if I run with log-driver="json-file" and point to a file. So it's only awslogs giving problems.Versions$ aws --version
aws-cli/2.1.22 Python/3.7.3 Linux/4.19.57-v7l+ source/armv7l.raspbian.10 prompt/off$ docker --version
Docker version 20.10.2, build 2291f61 | Docker awslogs gives error NoCredentialProviders |
SOLUTION:
iptables -F DOCKER-USER
iptables -I DOCKER-USER -j RETURN
iptables -I DOCKER-USER -p tcp -m multiport --dports http,https -j DROP
for i in `curl -s https://www.cloudflare.com/ips-v4`;\
do iptables -I DOCKER-USER -p tcp -i eth0 -m multiport --dports http,https -s $i -j RETURN;\
done
iptables -I DOCKER-USER -o eth0 -d 0.0.0.0/0 -j ACCEPTResult ofiptables -Lfor DOCKER-USER :ACCEPT all -- anywhere anywhere
RETURN tcp -- anywhere multiport dports http,https
DROP tcp -- anywhere anywhere multiport dports http,https
RETURN all -- anywhere anywhere
Explanation:
First part (ACCEPT) accepts outgoing traffic from the web server (docker container).
Second part (RETURN) describes the allowed ip addresses that may connect on port 80 or 443.
Third part (DROP) drops packets of connections on port 80/443 which are NOT listed in the RETURN part.
Fourth part (RETURN) is the default rule in the DOCKER-USER chain. It makes it possible to handle connections on other ports with the next rules instead of dropping all connections on non-80/443 ports (e.g. port 22 - ssh).
This will also drop any packet of a docker container running on port 80/tcp even if the port of the container is not mapped to the host, creating an issue similar to "docker, iptables and cloudflare". | I have a webserver in a docker container, but I cannot configure iptables on my host (Debian). I want to allow only specified ip addresses to connect on ports 80 and 443 to my machine (host). Port 22 should be accessible from any ip. In my case, the allowed addresses should be Cloudflare ip addresses. Cloudflare ips are available at https://www.cloudflare.com/ips-v4. How should I correctly block non-Cloudflare ip connections on ports 80 and 443? | Docker container accessible only via Cloudflare CDN (selected ip ranges)
Solved.Found a solution here that worked for me. This 'locks' the user that owns the files in the container towww-datawhilst preserving the original user,andrewon the host. | My issue is that I don't know (nor understand) how to best configure file ownership between a host and a container. I'm a front-end dev by trade so out of my depth here.Host: Windows 10 running WSL2 (Ubuntu 20.04 LTS). Using the VS Code WSL Remote extension.Container:php:7.4-fpmrunning WordPress.WordPress is running just fine but when I want to install a plugin via the CMS or upload a file to the Media library I'm met with "The uploaded file could not be moved to wp-content/uploads/2021/01.".I think this is because the container has1000:1000set as file owner/group but the host machine lists the same files asandrew:andrew. If I change the container towww-data:www-datathen WordPress uploads work but I then cannot use VS Code to edit files - the host files also change towww-data:www-data(not a valid user on the host) - I'm met with the following from VS Code:"Failed to save 'front-page.php': Unable to write file 'vscode-remote://wsl+ubuntu-20.04/home/andrew/my-app/app/wp-content/themes/my-theme/theme/front-page.php' (NoPermissions (FileSystemError): Error: EACCES: permission denied, open '/home/andrew/my-app/app/wp-content/themes/my-theme/theme/front-page.php')"For what it's worth I believe my directory permissions are set-up correctly all the way down to/uploadswithdrwxr-xr-x.Is there a specific way I need to configure file ownership to ensure I can both use WordPress uploads feature and also make file amends in VS Code?Thanks! | How do I set-up file ownership between WSL + VS Code and a Docker container? |
Thedockerdaemon sources/var/lib/boot2docker/profilebefore starting. TheHTTP_PROXYvariable will be available in thedockerdaemons environment. Users logging in viasshwillnotsee this variable.Any/etc/profile.d/*.shfiles will be loaded into a users profile at login but as you pointed out, this is reset back to the base image after every reboot.The/var/lib/boot2docker/directory contains the files that are persisted over reboots.Thebootlocal.shwill be run at the end of startup.bootsync.shfile will be run before docker.Edit/var/lib/boot2docker/bootsync.shto includeecho 'export HTTP_PROXY="http://whatever"' > /etc/profile.d/proxy.shThen the variable will be available for anything that logs in afterdockerhas started for the first time.○ → docker-machine restart default-docker
...
○ → docker-machine ssh default-docker
...
docker@default-docker:~$ echo $HTTP_PROXY
http://whatever | I have tried to put my environment variable in the /var/lib/boot2docker/profile file on the guest machine and restart it:
export http_proxy=http://proxy:port
Then I open a shell from my host machine (Windows 7) by using docker-machine ssh default. I can't find 'http_proxy' in my environment variables when using env. | how to permanently set environment variable for boot2docker
If you have to run docker on a virtual machine then I think it's only listening on port 8080 on that VM (which you could check with wget or curl on the VM IP address, which you should be able to find using Docker Desktop; or you could use the VM console and try wget or curl on http://localhost:8080).
You may need to use -p 8080:8080 instead of --network host to expose the port on your local machine. | I am having issues with running Adminer on my localhost.
After running this command:$ docker run --rm -ti --network host adminer
[Sun Jan 10 18:19:33 2021] PHP 7.4.14 Development Server (http://[::]:8080) started
I expect to see Adminer running on localhost:8080, however my browser "can't establish a connection to the server at localhost:8080".
Not sure how to proceed from here. My terminal states that the server is running on 8080.
Thank you! | How to get Adminer to run locally using Docker
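A minimal sketch of the port-mapping fix suggested in the answer above:
# publish the container's 8080 instead of relying on --network host inside the VM
docker run --rm -ti -p 8080:8080 adminer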
The variable is being passed to your container, but supervisor doesn't let you use environment variables like this inside the configuration files.
You should review the supervisor documentation, and specifically the parts about string expressions. For example, for the command option:
Note that the value of command may include Python string expressions, e.g. /path/to/programname --port=80%(process_num)02d might expand to /path/to/programname --port=8000 at runtime.
String expressions are evaluated against a dictionary containing the keys group_name, host_node_name, process_num, program_name, here (the directory of the supervisord config file), and all supervisord's environment variables prefixed with ENV_. | I've been trying to pass an environment variable to a Docker container via the -e option. The variable is meant to be used in a supervisor script within the container. Unfortunately, the variable does not get resolved (i.e. it stays as, for instance, $INSTANCENAME). I tried ${var} and "${var}", but this didn't help either. Is there anything I can do, or is this just not possible?
The docker run command:
sudo docker run -d -e "INSTANCENAME=instance-1" -e "FOO=2" -v /var/app/tmp:/var/app/tmp -t myrepos/app:tag
command=python test.py --param1=$FOO
stderr_logfile=/var/app/log/$INSTANCENAME.log
directory=/var/app
autostart=true | Using docker environment -e variable in supervisor |
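Based on the string-expression rules quoted in the answer above, a hedged sketch of the question's supervisor config rewritten with the ENV_-prefixed expansions (assumes a supervisor version that supports environment expansion; variable names are the ones from the question):
[program:app]
command=python test.py --param1=%(ENV_FOO)s
stderr_logfile=/var/app/log/%(ENV_INSTANCENAME)s.log
directory=/var/app
autostart=true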
Since version 1.15.1 Testcontainers allows you to automatically append prefixes to all docker images. In case your private registry is configured as a Docker Hub mirror, this functionality should help with the mentioned issue.
Quote from the documentation:
You can then configure Testcontainers to apply the prefix registry.mycompany.com/mirror/ to every image that it tries to pull from Docker Hub. This can be done in one of two ways:
Setting environment variables TESTCONTAINERS_HUB_IMAGE_NAME_PREFIX=registry.mycompany.com/mirror/
Via config file, setting hub.image.name.prefix in either: the ~/.testcontainers.properties file in your user home directory, or a file named testcontainers.properties on the classpath
Basically set the same prefix you did for the images in your docker-compose file.
If you're stuck with older versions for some reason, a deprecated solution would be to override just the ryuk.container.image property. Read about it here. | I'm trying to use TestContainers to run JUnit tests.
However, I'm getting aInternalServerErrorException: Status 500: {"message":"Get https://registry-1.docker.io/v2/: Forbidden"}error.Please note, that I am on a secure network.I can replicate this by doingdocker pull testcontainers/ryukon the command line.$ docker pull testcontainers/ryuk
Using default tag: latest
Error response from daemon: Get https://registry-1.docker.io/v2/: ForbiddenHowever, I need it to pull from our nexus service:https://nexus.company.com/18443.
Inside the docker-compose file, I'm already using the correct nexus image path. (Verified by manually starting it with docker-compose. However TestContainers also pulls in additional images which are outside the docker-compose file. It is these images that are causing the failure.I'd be glad for either a Docker Desktop or TestContainers configuration change that would fix this for me.Note: I've already tried adding the host URL for nexus to the Docker Engine JSON configuration on the dashboard, with no change to the resulting error when doingdocker pull. | How to configure docker/docker-compose to use Nexus by default instead of docker.io? |
I got a response from AWS support on the topic.The links are indeed one-way, which is an unfortunate limitation. They recommended taking one of two approaches:Use a shared filesystem and write the IP addresses of the containers to a file, which could then be used by your application to access the containers.Use AWS Faragate service and use ECS Service Discovery service which lets you automatically create DNS records for the tasks and make them discoverable within your VPC.I opted for a 3rd approach, which was to have the container that can discover the rest send out pings to inform the others of its docker-network IP address. | I'm getting asymmetrical container discoverability with multicontainer docker on AWS. Namely, the first container can find the second, but the second cannot find the first.I have a multicontainer docker deployment on AWS Elastic Beanstalk. Both containers are running Node servers using identical initial code, and are built with identical Dockerfiles. Everything is up to date.Anonymized version of my Dockerrun.aws.json file:{
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [
{
"name": "firstContainer",
"image": "firstContainerImage",
"essential": true,
"memoryReservation":196,
"links":[
"secondContainer",
"redis"
],
"portMappings":[
{
"hostPort":80,
"containerPort":8080
}
]
},
{
"name": "secondContainer",
"image": "secondContainerImage",
"essential": true,
"memoryReservation":196,
"environment":
"links":[
"redis"
]
},
{
"name": "redis",
"image": "redis:4.0-alpine",
"essential": true,
"memoryReservation":128
}
]
}ThefirstContainerproxies a subset of requests tosecondContaineron port 8080, via the addresshttp://secondContainer:8080, which works completely fine. However, if I try to send a request the other way, fromsecondContainertohttp://firstContainer:8080, I get a "Bad Address" error of one sort or another. This is true both from within the servers running on these containers, and directly from the containers themselves usingwget. It's also true when trying different exposed ports.If I add"firstContainer"to the"links"field of the second container's Dockerrun file, I get an error.My local setup, using docker-compose, does not have this problem at all.Anyone know what the cause of this is? How can I get symmetrical discoverability on an AWS multicontainer deployment? | Multicontainer docker (AWS) link is one-way? |
Protractor headless testing on a real Google Chrome browser is now possible since Chrome >= 57 and Chromedriver >= 2.29, along with some basic config:
capabilities: {
browserName: 'chrome',
chromeOptions: {
args: ['headless', 'window-size=1920,1080']
}
}Another cool thing is that the window size is not limited to the current display. It is truly headless, meaning it can be as large as needed for the tests.Some webdriver features won't work there. For instance:browser.manage().window().setPosition();
browser.manage().window().setSize();
browser.manage().window().maximize();You will have to identify and remove the unsupported features, other than that Google Chrome headless is working great for me.It's important to note that for examplesendKeysmight trigger this error:Failed: unknown error: an X display is required for keycode conversions, consider using XvfbIf there was no real display or there was no Xvfbuntil this was fixedon the Chrome side. TheX display requirederror was fixed with ChromeDriver2.31. | I am trying to run my tests headless and shard both my test suites to run them in parallel. On my local machine they run in parallel, but in this headless setup they run one after the other. I am using Docker images for the web driver and protractor.I am using the webnicer-protractor Docker image:https://hub.docker.com/r/webnicer/protractor-headless/and am using elgalu/selenium for the web driver.Myconf.jsfile that I run looks like this:exports.config = {
//Headless
//seleniumAddress: 'http://localhost:4444/wd/hub',
seleniumAddress: 'http://localhost:24444/wd/hub',
capabilities: {
browserName: 'chrome',
shardTestFiles: true,
maxInstances: 2
},
specs: ['Suites/AccountSettingsSuite.js', 'Suites/CloneDashboardSuite.js']
} | Headless protractor not sharding tests |
You can very well provision a multinode hadoop cluster with docker. Please look at some posts below which will give you some insights on doing it:
http://blog.sequenceiq.com/blog/2014/06/19/multinode-hadoop-cluster-on-docker/
Run a hadoop cluster on docker containers | I've been searching for a way to start docker on multiple physical machines and connect them to a hadoop cluster; so far I have only found ways to start a cluster locally on 1 machine. Is there a way to do this? | Is it possible to start multi physical node hadoop clustster using docker?
No, it does not. A FROM directive will use whatever happens to be available in your local image cache, unless you pass --pull to docker build. | If I have a Dockerfile:
FROM ubuntu/latest
and ubuntu update their image in the public registry, when I run docker build ., will it use the ubuntu that it got the first time or will it pull the new version? | does the FROM directive in a dockerfile allways pull the latest version of an image
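A short illustration of the --pull behaviour described in the answer above:
# without --pull, the locally cached base image is reused
docker build -t myapp .
# with --pull, docker re-checks the registry for a newer base image before building
docker build --pull -t myapp .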
I have read that the disk space is 10GB by default, supposedly this limit is dropped with overlay2. This does not seem to be the case for me.That is not accurate.Earlier releases of Docker used thedevicemapperstorage driver on CentOS, which creates a new virtual block device for each container. In this case, the default per-container size was 10GB, and could be controlled by thedm.basesizestorage option.Thanks to kernel updates and additional development work, the default storage driver on CentOS and most other distributions is theoverlay2storage driver. This no longer relies on a per-container block device, and instead makes use ofoverlayfs. One of the practical impacts of this is that there is no longer a per-container storage limit: all containers have access to all the space under/var/lib/docker. There is no longer any sort of per-container 10GB limit.See the documentation for more information aboutthe overlay2 storage driver.If you are running out of space in/var/lib/docker, you can add space as you would for any other filesystem. | I want to increase the disk space of a Docker container. Here is the output from docker info.Containers: 3
Running: 3
Paused: 0
Stopped: 0
Images: 4
Server Version: 19.03.5
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: trueI have read that the disk space is 10GB by default, supposedly this limit is dropped with overlay2. This does not seem to be the case for me.docker run -d --name jd2 --restart always -v $HOME/docker/volumes/jd2:/opt/JDownloader/cfg -v $HOME/downloads:/opt/JDownloader/Downloads plusminus/jdownloader2-headless | Increase Docker container storage size on CentOS |
I tried your steps and was able to run tomcat just fine. I didn't get the problem with apt-get, so no apt-get update --fix-missing was required. I even started tomcat from the init.d script and it worked.
My guess is that either you had some network problems, or there were some problems with Debian's repositories, but they got fixed.
In any case you should note that the container runs only as long as the specified command is running. That means that you should either run tomcat in the foreground or ensure the same thing in another way. You can check this answer for some options.
[EDIT] I've created a Dockerfile to test this. Here it is:
FROM google/debian:wheezy
RUN apt-get update
RUN apt-get install -y openjdk-7-jre tomcat7
ADD run.sh /root/run.sh
RUN chmod +x /root/run.sh
EXPOSE 8080
CMD ["/root/run.sh"]And here is therun.shscript that it uses:#!/bin/bash
/etc/init.d/tomcat7 start
# The container will run as long as the script is running, that's why
# we need something long-lived here
exec tail -f /var/log/tomcat7/catalina.outHere is a sample build and run session:$ docker build -t tomcat7-test .
$ docker run -d -p 8080:8080 tomcat7-testNow you should be able to see tomcat's "It works !" page onhttp://localhost:8080/ | I'm trying to build a docker image for the first time using a debian image from Google (google/debian:wheezy), setting up OpenJDK7 on it and trying to setup Tomcat7.docker pull google/debian:wheezy
docker run -i -t google/debian:wheezy bashOnce I'm in bash, I install openjdk withapt-get update
apt-get install openjdk-7-jreAfter a while, I get an error and I must runapt-get update --fix-missing
apt-get install openjdk-7-jre
apt-get install tomcat7After Tomcat7 is installed, I try to start it with/etc/init.d/tomcat7 startWhich gives me the following error:[FAIL] Starting Tomcat servlet engine: tomcat7 failed!I'm obviously doing something wrong, I'm getting the exact same behaviour on both my Debian Docker installation and my OSX Docker installation (at least it's consistent, that's kinda impressive!)Looking in /var/log/catalina.out doesn't show any errors, neither does the localhost logs.I've followed the same process with a normal debian:wheezy image and getting exactly the same failure without any errors.
Any idea where I'm screwing up? | Tomcat7 in debian:wheezy Docker instance fails to start |
Alpine is built using the musl C library. You cannot run binaries that have been compiled for glibc in this environment. You would need to find a go binary built explicitly for the Alpine platform (e.g. by running apk add go). | I'm trying to write a dockerfile that uses alpine and takes advantage of a precompiled golang.
docker run -it alpine:latest
wget https://dl.google.com/go/go1.12.9.linux-amd64.tar.gz --no-check-certificate
tar -C /usr/local/ -xzf go1*.tar.gz
I'm getting /bin/sh/: ./go: not found
cd /usr/local/go/bin/
./go
It works fine on my ubuntu laptop so I'm unsure what the difference is here. I did a quick google search and could not find anything clear that points to something missing. | precompiled golang on alpine
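A minimal sketch of the answer's suggestion to install an Alpine-built Go instead of the glibc tarball (assumes the go package is available in the image's configured repositories):
docker run --rm -it alpine:latest sh -c "apk add --no-cache go && go version"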
Change --zk_hosts zk://:2181/mesos in your Chronos command line to --zk_hosts :2181, since this is supposed to be a list of zk node:port pairs, so that Chronos can store its own state in a /chronos znode (as opposed to the /mesos znode, where Mesos stores its leading master info). | I have set up a Mesos cluster including Marathon & Chronos using a Docker image for each service. The Docker images I am using are as follows:
ZooKeeper: jplock/zookeeper:3.4.5
Mesos Master: redjack/mesos-master:0.21.0
Mesos Slave: redjack/mesos-slave:0.21.0
Marathon: mesosphere/marathon:v0.8.2-RC3
Chronos: tomaskral/chronos:2.3.0-mesos0.21.0
ZooKeeper is running on port 2181, Mesos Master on 5050, Mesos Slave on 5051, Marathon on 8088, and Chronos on 8080.
What I want to do is run Docker containers on Marathon & Chronos. Marathon successfully runs Docker containers as its Apps, but Chronos doesn't run any Jobs, even when the Job does not use Docker. The config for the Chronos Job I tried to launch is:
{
"schedule": "R/2015-05-28T10:16:30Z/PT2M",
"name": "simplejob",
"cpus": "0.5",
"mem": "512",
"command": "while sleep 10; do date -u %T; done"
}Jobs are registered on Chronos but never be launched.My command for running Chronos container is as follows;docker run -p 8080:8080 -e LIBPROCESS_PORT=5050 tomaskral/chronos:2.3.0-mesos0.21.0 --http_port 8080 --master zk://:2181/mesos --zk_hosts zk://:2181/mesos | Chronos does not run job |
AWS CodeBuild does not support a Windows build environment, but it is in the works. You can sign up here for notifications about CodeBuild support for Windows.
However, CodeBuild runs all builds on Docker, and building Docker images in a Windows Docker container is not yet supported by Microsoft (see this GitHub issue for details). | I'm getting started with the CI/CD functionality of AWS. To this point, I have been creating my docker image locally on Windows Server 2016, based on the microsoft/windowsservercore image, and manually pushing it to the ECR (Amazon container registry). At this point, I'm not trying to compile the application in CodeBuild; I'm only trying to build the container. Locally, the binaries are in a sub-directory and copied into the container.
image operating system "windows" cannot be used on this platformI'm pretty sure that's because the build environment is linux based.Does anyone know if it's possible to create a Custom Build Environment for AWS that would support building a Windows container image? | Building Windows Containers with AWS CodeBuild |
Exactly as @Marc points out, your traffic goes out with the EXTERNAL-IP of your worker nodes, not your load balancer.
To find the nodes' EXTERNAL-IP addresses use:
kubectl get nodes -owide
[error] Failed to connect to mongodb. Check if mongo is running...
(mongodb): 2020/05/30 15:07:39 logger.go:132: 2020-05-30T15:07:39Z
[fatal] server selection error: server selection timeout, current
topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: biomas-
cluster-shard-.azure.mongodb.net:27017, Type: Unknown,
State: Connected, Average RTT: 0, Last error: connection() :
connection(biomas-cluster-shard-.azure.mongodb.net:27017[-180]) incomplete read of message
header: EOF }, { Addr: biomas-cluster-shard-.azure.mongodb.net:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : connection(biomas-cluster-shard-.azure.mongodb.net:27017[-181]) incomplete read of message header: EOF }, { Addr: biomas-cluster-shard-.azure.mongodb.net:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : connection(biomas-cluster-shard-.azure.mongodb.net:27017[-179]) incomplete read of message header: EOF }, ] }If there is a simple workaround with to this, that would be preferred since the app is in the development stage still, so I just need a basically working application using the said technologies.The full workflow:Android App -> Golang API running on Docker -> MongoDB AtlasThanks | GKE not able to reach MongoDB Atlas |
The suggestion towards hostname received in one of the comments was on point. The following piece of code now serves as intended:
file=/mnt/pgdata/hostname
if [ -n \"$INITDB\" ] && [ \"$(cat $file)\" != $(hostname) ]; then
initdb ...
echo $(hostname) > $file
fi | I would like to add a configuration option to a proprietary, PostgreSQL-based Docker image for OpenShift 3.9 in the form of a template variable INITDB. The image provides a database that is backed by persistent storage, and from now on the database should only be initialized when that variable (flag) is set. The image is built with OpenShift's Docker build strategy and
PostgreSQL's initdb is called in the Dockerfile's ENTRYPOINT script, so it executes whenever the container starts up. However, I want the flag to only have an effect when a flagged container starts up for the first time. Otherwise what could happen is that the database becomes initialized when the container starts up first (as should be the case) but also becomes re-initialized when the container is restarted, e.g. because of migration to another node (this is unwanted). So I presumably need some logic whereby the script stores the container's image id in a file, also in persistent storage, with logic such that it calls initdb only if the flag is set and the file does not exist or contains another image id.
So perhaps something roughly along those lines:file=/mnt/pgdata/image_id
if [ -n "$INITDB" ] && [ $(cat $file) != $image_id]; then
initdb ...
echo $image_id > $file
fi
So my question is this: how can a running container learn its image's id? Is there a ready environment variable (e.g. OPENSHIFT_... -- so far I have found none) or would it have to go through an API? The second choice seems feasible because oc describe pods lists "Image ID" (and because of oc explain pod.spec.containers.image). But is it necessary/advisable and, if so, would one have to provide explicit credentials or do containers own appropriate credentials by default? I'd also be interested in finding out how OpenShift's own/"official" PostgreSQL image provides such functionality, but have not found the right source code yet. | How can OpenShift container learn its image ID?
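Pulling the accepted answer's snippet together, a minimal self-contained entrypoint guard might look like this (the initdb arguments are placeholders, just as they are elided in the answer above):
#!/bin/sh
file=/mnt/pgdata/hostname
if [ -n "$INITDB" ] && [ "$(cat "$file" 2>/dev/null)" != "$(hostname)" ]; then
    initdb -D "$PGDATA"   # placeholder for the real initdb invocation
    hostname > "$file"
fi
exec "$@"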
Have you tried using docker cp? That allows you to move files from the Docker filesystem to your host, even if the container is stopped (as long as it hasn't been removed). The syntax would look like the following: docker cp <container_name>:/path/to/file/in/container /path/to/file/in/host | I'm completely new to Docker. I'm using it to train neural networks. I've got a running container, executing a script for training a NN, and saving its weights in the container's writable layer. Recently I've realized that this setup is incorrect (I haven't properly RTFM), and the NN weights will be lost after the training finishes. I've read answers and recipes about volumes and persistent data storage. All of them express one idea: you must prepare that data storage in advance. My container is already running. I understand that the incorrect setup is my fault. Anyway, I do not want to lose results that will be obtained during this execution (that is now in progress). Is it possible? One solution that has come to my mind is to open one more terminal and run watch -n 1000 docker commit tag:label. That is, commit a snapshot every 1000 seconds. However, weights obtained on the last epoch are still in danger, since epoch durations differ and are not a multiple of 1000. Are there any more elegant solutions? Additional information: the image for this container was created using the following Dockerfile: FROM tensorflow-py3-gpu-keras
WORKDIR /root
COPY model4.py /root
COPY data_generator.py /root
COPY hyper_parameters.py /root
CMD python model4.py
I have manually created the image tensorflow-py3-gpu-keras from the latest tensorflow image, pulled from DockerHub: docker run tensorflow. Inside the container: pip3 install keras. And docker commit in another terminal. | Save artifacts from already running docker container
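To make the docker cp suggestion above concrete for this training scenario, something like the following should work while the container is still running (the container name and paths are placeholders, not taken from the question):
docker cp train-container:/root/weights.h5 ./weights.h5
watch -n 600 docker cp train-container:/root/checkpoints ./checkpoints-backup
The second line simply repeats the copy every 10 minutes, which is lighter than committing whole image snapshots.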
Notice the error ls: cannot access '.'$'\r': No such file or directory. One of the issues with Docker (or any Linux/macOS based system) on Windows is the difference in how line endings are handled. Windows ends lines in a carriage return and a linefeed \r\n while Linux and macOS only use a linefeed \n. This becomes a problem when you try to create a file in Windows and run it on a Linux/macOS system, because those systems treat the \r as a piece of text rather than a newline. Make sure to run dos2unix on a script file whenever anyone edits anything in any kind of editor on Windows. Even if the script file is being created in Git Bash, don't forget to run dos2unix: dos2unix import.sh. See https://willi.am/blog/2016/08/11/docker-for-windows-dealing-with-windows-line-endings/ In your case:
FROM mongo:latest
RUN apt-get update && apt-get install -y dos2unix
WORKDIR /tmp
COPY data/shops.json .
COPY import.sh .
RUN dos2unix ./import.sh && apt-get --purge remove -y dos2unix
CMD ["/bin/bash", "-c", "source import.sh"] | I intended to install a mongodb docker container fromDocker Hub, and then insert some data into it. Obviously, a mongodb seed container is needed. So I did the following:created aDockerfileof Mongo seed container inmongo_seed/Dockerfileand the code inDockerfileis the following:FROM mongo:latest
WORKDIR /tmp
COPY data/shops.json .
COPY import.sh .
CMD ["/bin/bash", "-c", "source import.sh"]The code ofimport.shis the following:#!/bin/bash
ls .
mongoimport --host mongodb --db data --collection shops --file shops.jsontheshops.jsonfile contains the data to be imported to Mongocreated adocker-compose.ymlfile in thecurrent working directory, and the code is the following:version: '3.4'
services:
mongodb:
image: mongo:latest
ports:
- "27017:27017"
container_name: mongodb
mongodb_seed:
build: mongodb_seed
links:
- mongodb
The code above successfully made the mongodb service execute import.sh to import the json data - shops.json. It works perfectly in my Ubuntu. However, when I tried to run the command docker-compose up -d --build mongodb_seed in Windows, the import of data failed with these error logs:
Attaching to linux_mongodb_seed_1
mongodb_seed_1 | ls: cannot access '.'$'\r': No such file or directory
mongodb_seed_1 | 2019-04-02T08:33:45.552+0000 Failed: open shops.json: no such file or directory
mongodb_seed_1 | 2019-04-02T08:33:45.552+0000 imported 0 documents
Does anyone have any ideas why it was like that, and how to fix it so that it can work in Windows as well? | How to seed a docker container in Windows
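Besides converting files inside the image with dos2unix, it can help to stop Windows from producing CRLF endings in the first place; assuming the scripts live in a Git repository, one possible setup is:
git config --global core.autocrlf input
printf '*.sh text eol=lf\n' >> .gitattributes
With that, import.sh keeps LF endings on checkout and the dos2unix step becomes a safety net rather than a requirement.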
My standard day-to-day development work is carried out in Docker for Mac/Windows as they cover about 95% of what I need to do with Docker. Since they replaced Docker Toolbox/boot2docker and made the integration to the OS pretty seamless, I have found very few reasons to move over to another virtual machine. The two main reasons I see for using Vagrant or standalone VMs now are VM customisation and clustering. VM Customisation: the virtual machines supplied by Docker Toolbox and Docker for Mac/Windows are pre-packaged, cut-down Linux distros (TinyCore and Alpine) that are largely ephemeral, except for the Docker configuration, so you don't get much say in how they work. Networking: I deal with a number of custom network configurations that just aren't possible in the pre-packaged VMs, largely around having containers connected to routable networks rather than using mapped ports. Version Control: occasionally you need to replicate server environments that run old versions of the Docker daemon, or RHEL servers using devicemapper. A VM lets you choose the packages to install. Clustering: building a swarm, or branching out into Mesosphere/Kubernetes, will require multiple VMs. I tend to find these easier to manage and build with Vagrant rather than Docker Machine, and again they require custom config inside the VM. | I've read multiple articles on how to do this, but I can't figure out what the benefits are under macOS. From my point of view, you can run Docker natively on macOS using Docker Community Edition (boot2docker+Kitematic). What does running it from Vagrant give me, mobility? | Why run Docker under Vagrant?
In your Compose file, you have a volumes: block that overwrites the image's code with content from the host. Delete this.
services:
backend:
volumes: # <-- delete
- ./backend:/app # <-- deleteWhen this block is present, the/appdirectory in the container is the./backenddirectory in the host system. Whatever was in the image under/appis hidden, and replaced by that mounted content.More specifically, when your Dockerfile saysRUN chmod +x /app/entrypoint.shthat permission change is hidden by the bind mount. If the file isn't executable on the host system, then it won't be executable when the container runs either, and you'll get an error running it as the container's entrypoint.Mounts like this also hide the image'snode_modulesdirectory. If your host system isn't fully compatible with the container environment (same operating system and C library base) then this can cause problems starting up. There's a common workaround to take advantage of a Docker feature to store thenode_modulestree in an anonymous volume, but this means that the container environment will ignore changes in thepackage.jsonfile, and you can get different library trees depending on when you first ran the container. | I am implementing a application using a NestJS server, working with a PostgreSQL database and the Prisma service to handle data.I have an issue when trying to run a Prisma migration when launching my service in my Dockerfile. Here is my docker-compose.yml file :version: '3.8'
services:
# POSTGRES
postgres:
container_name: postgres
image: postgres:13.5
restart: always
ports:
- 5432:5432
env_file:
- ./backend/.env
volumes:
- postgres:/var/lib/postgresql/data
networks:
- transcendance
# BACKEND
backend:
build:
context: ./backend
dockerfile: Dockerfile
args:
- BUILDKIT_INLINE_CACHE=1
container_name: backend
restart: always
env_file:
- ./backend/.env
ports:
- 3001:3001
- 5555:5555 # Expose a port for Prisma Studio
depends_on:
- postgres
networks:
- transcendance
volumes:
- ./backend:/app
networks:
transcendance:
volumes:
postgres:I am building and running my backend container with this Dockerfile :FROM node:lts
WORKDIR /app
COPY package*.json ./
COPY prisma ./prisma/
COPY entrypoint.sh /app/entrypoint.sh
COPY . .
RUN npm i -g @nestjs/cli
RUN npm install
RUN chmod +x /app/entrypoint.sh
EXPOSE 3001 3002 5555
ENTRYPOINT [ "/app/entrypoint.sh" ]
CMD [ "npm", "run", "start:dev" ]I have set up this entrypoint.sh file as follow in order to make this migration#!/bin/sh
# Apply Prisma migrations and start the application
npx prisma migrate deploy
npx prisma generate
# Run database migrations
npx prisma migrate dev --name init
# Run the main container command
exec "$@"All my containers are well created but only my database is running, others have CREATED status. If i remove my script and its execution, run thedocker compose up --buildand then make the migration directly from my backend container, everything work well.Any help on this ? Thank you ! | Prisma migration in a Docker - NestJS server |
The policy likely contains dontaudit rules. Dontaudit rules do not allow access, but suppress logging for the specific access. You can disable dontaudit rules with semanage:
semanage dontaudit off
After solving the issue, you probably want to turn the dontaudit rules back on to reduce log noise. It is also possible to search for possible dontaudit rules with sesearch:
sesearch --dontaudit -t container_file_t | I have a docker container; when I disable selinux, it works well;
but when enabled selinux (i.e. the docker daemon is started with --selinux-enabled), it can not start up.So the failure should caused by selinux denial, but this is not shown in the selinux audit log. when I use the "ausearch -m XXX | audit2allow ..." to generate the policy, it does not include any denial info.want to know how to get the selinux denial info occured inside the container, so that I can use it in generating my policy file?ps: I checked the label info of the accessed file, they seem right,but access(ls) is denied:# ls -dlZ /usr/bin
dr-xr-xr-x. root root system_u:object_r:container_file_t:s0:c380,c857 /usr/bin
# ls /usr/bin
ls: cannot open directory /usr/bin: Permission deniedmore: the selected answer answered the question, but now the problem is the audit log shows the access is to read "unlabeled_t", but as the "ls -dZ /usr/bin" shows, it is a "container_file_t". I put this in a separate question:Why SELinux denies access to container internal files and claims them as "unlabled_t"? | How to audit the selinux denial inside a docker container |
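Once the denials become visible after semanage dontaudit off, a common follow-up is to turn them into a local policy module; a sketch (the module name is arbitrary):
ausearch -m avc -ts recent | audit2allow -M mycontainerpolicy
semodule -i mycontainerpolicy.pp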
Yes, your login is bheng, but hub.docker.com ([SERVER]) is wrong. The correct server is index.docker.io. Actually, it is the default server, so you don't need to specify it; just use the simple command:
docker login | When running docker login hub.docker.com
Username: bheng
Password: **********I kept gettingError response from daemon: login attempt tohttps://hub.docker.com/v2/failed with status: 404 Not FoundHow do I know what is my username ?I can log in into my docker hub fine with the same credentials.I've tried almost every combinations | Error response from daemon: login attempt to https://hub.docker.com/v2/ failed with status: 404 Not Found |
Our problems here boiled down to a number of causes: Since we referenced the credential helper in DOCKER_AUTH_CONFIG, we needed the helper installed on the machine spawning the runners. (We use the docker+machine runner.) This machine also needed IAM permissions. Without this, it just gave up on the DOCKER_AUTH_CONFIG variable completely (a questionable decision if you ask me...). In order to authenticate from within the jobs and push the images to ECR, we needed to configure the helper there too. We did this by modifying our spawner's config.toml file to add a volume /usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login. (We also mounted the log directory and our helper wrapper.) In the docker push command, we added a --config docker-config flag, and wrote out an appropriate config to docker-config/config.json. Finally, our job image was docker/compose, and our verbose wrapper was written in bash, which isn't included in that image, so that was another silent failure. 😖 | We have a GitLab CI pipeline that currently pulls images from our internal Docker registry, authenticated using a variable defined in .gitlab-ci.yml:
variables:
...
DOCKER_AUTH_CONFIG: '{"auths": {"our.registry": {"auth": "$B64AUTH"}}}'This works fine.We are trying to add a step to the end of the pipeline, to push our built Docker images to an Amazon ECR registry. We have installed the amazon-ecr-credential-helper on our runner instances, and given them the correct IAM permissions to be able to push to these registries. We have changed the.gitlab-ci.ymlvariable to:DOCKER_AUTH_CONFIG: '{"auths": {"our.registry": {"auth": "$B64AUTH"}}, "credHelpers": { ".dkr.ecr..amazonaws.com": "ecr-login"}}'However, this causes the runner to fail to authenticate to our internal registry, so it cannot pull the images in which our jobs run. Whereas previously we would see in our pipeline jobs' logs:Authenticating with credentials from $DOCKER_AUTH_CONFIG... we are no longer seeing this. We're not even getting to the step where we want to push to ECR.We have added a wrapper script around the credential helper, to log all the ins and outs to a file, and try and debug what is happening. However, it appears as if the helper isn't getting called at all, as there is nothing in the log file.What can we do to try and get this working? | GitLab runner ignoring DOCKER_AUTH_CONFIG when credential helper specified |
The mapping should be made to $PWD:/home/wiremock/mappings, where PWD has the json files. Also, the json files should look like this:
{
"mappings": [
{
"id": "679dd3ce-55e5-45ee-b270-01dcf1b371ca",
"request": {
"urlPattern": "^/hello",
"method": "GET"
},
"response": {
"status": 200,
"jsonBody": {
"status": "success",
"message": "Hello"
},
"headers": {
"Content-Type": "text/plain"
}
},
"uuid": "679dd3ce-55e5-45ee-b270-01dcf1b371ca"
},
{
"id": "679dd3ce-55e5-45ee-b270-01dcf1b371c2",
"request": {
"urlPattern": "^/hello-2",
"method": "GET"
},
"response": {
"status": 200,
"jsonBody": {
"status": "success",
"message": "Hello-2"
},
"headers": {
"Content-Type": "text/plain"
}
},
"uuid": "679dd3ce-55e5-45ee-b270-01dcf1b371c2"
}
]
}if you are running it json request object then the mapping should provide themappingsand the__filesso need to map the containing folder:docker run -it --rm \
-p 8090:8080 \
-v $PWD/samples/hello/stubs:/home/wiremock \
wiremock/wiremock | Link to the Repo:https://github.com/wiremock/wiremock-dockerI'm getting an error when I try to access stubs, not sure if I'm missing anything here. Can I know if the below command is correct ?docker run --rm -d -p 8080:8080 -p 8443:8443 --name wiremock_demo \
-v $PWD:/home/wiremock \
rodolpheche/wiremock:2.25.1ERROR: | Docker Container with Wiremock could not find stub mappings |
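Once the container is running with the corrected mount from the answer above, the loaded stubs can be checked through WireMock's admin API (port 8090 as in the run command above):
curl -s http://localhost:8090/__admin/mappings
curl -s http://localhost:8090/hello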
I found a solution with the update-ca-certificates: copy the root cert file into /usr/local/share/ca-certificates and run update-ca-certificates manpages.ubuntu.com/manpages/xenial/man8/… | I'm an absolute Beginner in Docker and install on my workstation ubuntu 16.04.3 the latest docker version successfully.But when I now try to do following:
docker run hello-world
Unable to find image 'hello-world:latest' locally
Pulling repository docker.io/library/hello-world
docker: Error while pulling image: Gethttps://index.docker.io/v1/repositories/library/hello-world/images: x509: certificate signed by unknown authority.
See 'docker run --help'.I got the issue with the x509 error message.We have in our company a firewall and I have already copied our company root certificate to /etc/docker/certs.d/
We also use a internet proxy for communication to the internet, so i configured the daemon who starts with systemd, I set the environment for the http proxy and https proxy but still get the same x509 error message.Could somebody please help me.thank you$ docker -v
Docker version 1.12.6, build 78d1802 | docker x509 certificate signed by unkown authority |
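On an Ubuntu host the accepted workaround typically amounts to the following (the certificate file name is a placeholder for your company root certificate):
sudo cp company-root-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
sudo systemctl restart docker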
Update: The flexible environment now supports the updated health checks, consisting of separately configurable liveness checks and/or readiness checks, both with a configurable path capability. Note that these updated health checks are not compatible and cannot coexist with the legacy health checks:
By default, legacy health checks are enabled. You cannot use both
types of health checks in the same project.Original answer, applicable only to the legacy health checks:The path is not currently configurable. FromHealth checking:You do not have to do anything special to implement health checking.
If your app does not handle health checks, a HTTP404response is
interpretated as a successful reply.You can write your own custom health-checking code. It should reply to/_ah/healthrequests with a HTTP status code200. The response
must include a message body, however, the value of the body is ignored
(it can be empty).So you can leave it like that - your app being always considered healthy. Or you can teach your app to correctly answer to the health check requests at/_ah/health. | My service uses a url like this:/v1/lookup_stuff/v1/is the base url for everything in the service, so when the health check pingsit gets a 404. I need to update to ping/v1/(Possibly useful information, the service is in Docker, and is accessible when I manually go to the right URL)How do I point gcloud's health service at the correct URL for my service?Ok for clarity:Google url:service-something.appspot.comAll urls for service are under:service-something.appspot.com/v1I need to point health checker to:service-something.appspot.com/v1/_ah/healthinstead ofservice-something.appspot.com/_ah/health | Docker in Google App Engine Flexible health check has wrong URL |
Containers within a compose file run on the same network and you can just use their names, phpfpm and nginx in your case. Also, if you need more names for the same service, you need to use aliases:
nginx:
build: ./nginx
ports:
- "80:80"
- "443:443"
volumes:
- ../public/:/var/www/html/public/
container_name: nginx
networks:
backend:
aliases:
- test.local
phpfpm:
build: ./php-fpm
volumes:
- ../public/:/var/www/html/public/
container_name: phpfpm
networks:
- backend | Looking to automatically add the nginx container ip address inside my phpfpm container /etc/hosts file.Inside my yml file, I have a service called phpfpm, and I know you can use extra_hosts attribute to assign values into the /etc/hosts file, however I don't know how to dynamically call place the nginx container IP.nginx:
build: ./nginx
ports:
- "80:80"
- "443:443"
volumes:
- ../public/:/var/www/html/public/
container_name: nginx
networks:
- backend
phpfpm:
build: ./php-fpm
volumes:
- ../public/:/var/www/html/public/
container_name: phpfpm
extra_hosts:
- "test.local:nginx"
networks:
- backendAny thoughts on how to do this? | dynamically adding nginx container ip into phpfpm /etc/hosts file |
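If you use the aliases approach from the answer above instead of writing IPs into /etc/hosts, you can verify that the names resolve from inside the phpfpm container, for example:
docker-compose exec phpfpm getent hosts nginx
docker-compose exec phpfpm getent hosts test.local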
You can create your own images that serve as base images using Dockerfiles. Example:
mkdir ROSDocker
cd ROSDocker
vim Dockerfile-base
FROM debian:stretch-slim
RUN apt-get install dep1 dep2 depn
sudo docker build -t yourusername/ros-base:0.1 -f Dockerfile-base .
After the build is complete you can create another Dockerfile from this base image:
FROM yourusername/ros-base:0.1
RUN apt-get install dep1 dep2 depn
Now build the second image (from its own Dockerfile this time, not Dockerfile-base):
sudo docker build -t yourusername/mymoveApplication:0.1 .
Now you have an image for your move application; each container that you run from this image will have all the dependencies installed. You can have a docker image repository for managing your built images and sharing them between people/environments. This example can be expanded multiple times. | I want to use docker to help me stay organized with developing and deploying packages/systems using ROS (Robot Operating System). I want to have multiple containers/images for various pieces of software, and a single image that has all of the ROS dependencies. How can I have one container use the apt packages from my dependency master container? For example, I may have the following containers: MyRosBase: sudo apt-get install all of the ros dependencies I need (there are many), set up some other environment variables and various configuration items. MyMoveApplication: use the dependencies from MyRosBase and install any extra and specific dependencies to this image, then run software that moves the robot arm. MySimulateApplication: use the dependencies from MyRosBase and install any extra and specific dependencies to this image, then run software that simulates the robot arm. How do I use apt packages from one container in another container without reinstalling them in each container each time? | How to share apt-package across Docker containers
It looks like there's no Windows support for it. That would require some sort of support from Windows and is not something that they are working on. | I run Jenkins with Docker on Windows, but how do I run docker commands in a Windows docker container?
in linux:docker run -it --rm --privileged --name dockerindocker -v //var/run/docker.sock:/var/run/docker.sock dockerAre there any similar commands available for docker in Windows 10? | docker inside docker on windows |
As mentioned in the comment, you should update the HOST, but it will still not work, as the WordPress DB configuration does not seem correct. The ENV for the DB is:
MARIADB_ROOT_PASSWORD: password
MARIADB_DATABASE: db_tyre
MARIADB_USER: wordpress
MARIADB_PASSWORD: wordpressso the WordPress DB configuration should be updated and it should bedb_tyredefine( 'DB_NAME', 'db_tyre');
/** MySQL database username */
define( 'DB_USER', 'wordpress');
/** MySQL database password */
define( 'DB_PASSWORD', 'wordpress');
/** MySQL hostname */
define( 'DB_HOST', 'db:3306');or can try with offical imageversion: '3.1'
services:
wordpress:
image: wordpress
restart: always
ports:
- 8080:80
environment:
WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: exampleuser
WORDPRESS_DB_PASSWORD: examplepass
WORDPRESS_DB_NAME: exampledb
volumes:
- wordpress:/var/www/html
db:
image: mysql:5.7
restart: always
environment:
MYSQL_DATABASE: exampledb
MYSQL_USER: exampleuser
MYSQL_PASSWORD: examplepass
MYSQL_RANDOM_ROOT_PASSWORD: '1'
volumes:
- db:/var/lib/mysql
volumes:
wordpress:
db: | I am trying to setupwordpresswithdocker. I have included my yaml file below. Here I have set my mariadb_database to db_tyre.When I hitdocker-compose up -d, it is creating all the required files of wordpress. This is also creating db_tyre database but when I try localhost:8000, it gives meError establishing a database connection.I have checked the wp-config.php file, it has following lines.define( 'DB_NAME', 'wordpress');
/** MySQL database username */
define( 'DB_USER', 'wordpress');
/** MySQL database password */
define( 'DB_PASSWORD', 'wordpress');
/** MySQL hostname */
define( 'DB_HOST', 'mariadb:3306');yml fileversion: '3'
services:
# Database
db:
image: bitnami/mariadb:latest
volumes:
- db_data:/var/lib/mysql
restart: always
environment:
MARIADB_ROOT_PASSWORD: password
MARIADB_DATABASE: db_tyre
MARIADB_USER: wordpress
MARIADB_PASSWORD: wordpress
networks:
- wpsite
# Wordpress
wordpress:
depends_on:
- db
image: wordpress:latest
ports:
- '8000:80'
restart: always
volumes: ['./:/var/www/html']
environment:
WORDPRESS_DB_HOST: mariadb:3306
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: wordpress
networks:
- wpsite
# phpmyadmin
phpmyadmin:
depends_on:
- db
image: phpmyadmin/phpmyadmin
restart: always
ports:
- '8080:80'
environment:
PMA_HOST: db
MYSQL_ROOT_PASSWORD: password
networks:
- wpsite
networks:
wpsite:
volumes:
db_data: | Error establish a database connection docker compose |
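A quick way to check what the containers actually see, following the answer above (this assumes the mysql client is available in the db image, which is the case for bitnami/mariadb):
docker-compose exec wordpress printenv | grep WORDPRESS_DB
docker-compose exec db mysql -uwordpress -pwordpress -e 'SHOW DATABASES;'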
You have to specify the environment variables in the docker-compose.yml file like this:
environment:
- VAR1
- VAR2=fixedvalueIn this caseVAR1assumes the value that is defined for the variable in your computer andVAR2will assume the value that is specified regardless or what is configured in your computer.You also have theenv_fileoption which allows you to specify a file with all the variables set.env_file:
- web-variables.envYou can find more information in thedocumentation. | Is there a way to force docker compose to assume environment variables from the underlying machine?Background:I decided to play around with Docker in my ASP.NET Core Web Application, so I used theAdd Docker Supportoption in Visual Studio, which created a.dcproj(Docker Compose project).Prior to that, I was reading some configs from Environment Variables on the current machine (either my dev machine or a server).I realized when I'm debugging with the docker compose project, I'm not able to get data from Environment Variables anymore, which makes sense, since docker became the new environment (not my machine anymore). I wouldn't like these values to be pushed into my git repo. | How to set Environment Variables from server in Docker Compose? |
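To confirm the values really reach the container, two quick checks (the service name web is only a placeholder here):
docker-compose config
docker-compose run --rm web env | grep VAR1
docker-compose config prints the fully resolved file, including the environment section with host values substituted.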
Commands are defined like:CMD [ "--port", "8080" ]Where otherwise the--portgets attached to the CMD command itself as a flag, not the actual command it runs.This presumes that theENTRYPOINTcan properly handle just options and doesn't require a path of an executable as is traditionally the case. | Suppose there's an image outside of my control that specifies custom entry point. Let's call itserver# server's Dockerfile
ENTRYPOINT /usr/bin/serverI'm building an image based on the server. I'd like to specify a default command to be executed. It should call the server's entrypoint and pass an argument to it. The argument is a single option. A naive solution would look like this:FROM server:latest
CMD --port 8080That, however, fails duringdocker buildwithError response from daemon: Dockerfile parse error line 3: Unknown flag: portHow can I use CMD to pass arguments to entry point that start with--? | How to pass only options with the CMD command? |
Using letsencrypt to get a free certificate and then either use https or redirect to http was the solution for me.Credits to @RichardSmith | ContextSimple setupA docker container exposing 8090 as a website (node, express)A nginx conf exposing 80 and mapping it to localhost 8090IssueI don't have paid SSL certificate, I want end users that try to reach https to be redirected to http.I have tried redirects, rewrite, and here below a simple listen.. without success. Browser would return 'ERR_SSL_PROTOCOL_ERROR'.server {
listen 80;
listen 443;
ssl off;
server_name xxxx.com;
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass "http://0.0.0.0:8090";
}
}QuestionCould you please advise on the cleanest way to redirect https->http(reverse proxy) ? RegardsEdit 1The following config results in a browser "ERR_SSL_PROTOCOL_ERROR".server {
listen 80;
server_name xxxx.com;
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass "http://0.0.0.0:8090";
}
}
server {
listen 443;
server_name xxxx.com;
rewrite ^(.*) http://$host$1 permanent;
} | Nginx redirect https->http(reverse proxy)->port(docker container) |
First of all, you can run php -m in the php container to see installed and enabled modules. You can edit your docker-compose.yml like this:
version: '3.1'
services:
php:
# image: php:7.2-apache # remember to comment this line
build: .
ports:
- 8089:80
volumes:
- ./php/www:/var/www/html/Create a file calledDockerfilebesidedocker-compose.ymlwith the following contents:FROM php:7.2-apache
# then add the following `RUN ...` lines in each separate line, like this:
RUN pecl install xdebug && docker-php-ext-enable xdebug
...Finally, let's go one by one:apache2Is installed.php7.2Is enabled.php-xdebugAddDockerfile:RUN pecl install xdebug && docker-php-ext-enable xdebugphp7.2-mcryptAdd toDockerfile:RUN apt-get install libmcrypt-dev
RUN pecl install mcrypt && docker-php-ext-enable mcryptphp-apcuAdd toDockerfile:RUN pecl install apcu && docker-php-ext-enable apcuphp-apcu-bcAdd toDockerfile:RUN pecl install apcu_bc
RUN cp /usr/local/etc/php/php.ini-production /usr/local/etc/php/php.ini
RUN echo 'extension=apc.so' >> /usr/local/etc/php/php.iniphp7.2-jsonIs installed.php-imagickAdd toDockerfile:RUN apt install -y libmagickwand-dev --no-install-recommends && \
pecl install imagick && docker-php-ext-enable imagickphp-gettextAdd toDockerfile:RUN docker-php-ext-install gettext && \
docker-php-ext-enable gettextphp7.2-mbstringIs enabled. | I want to run a apache webserver with php extension inside container using docker compose as deployment.My compose file looks like this:version: '3.1'
services:
php:
image: php:7.2-apache
ports:
- 8089:80
volumes:
- ./php/www:/var/www/html/how can I enable the following extensions.apache2
php7.2
php-xdebug
php7.2-mcrypt
php-apcu
php-apcu-bc
php7.2-json
php-imagick
php-gettext
php7.2-mbstring | How to enable php extensions when using the image php:7.2-apache with docker-compose? |
This issue is fixed in a commit but is still not merged (see this); you may try this workaround:
sudo vim /etc/sysconfig/kubelet
CPUAccounting=true
MemoryAccounting=truethen reload and restartsystemctl daemon-reload && systemctl restart kubelet | Am trying to setupkubernetesincentosmachine, kubelets start is giving me this error.Failed to get kubelets cgroup: cpu and memory cgroup hierarchy not
unified. Cpu:/, memory: /system.slice/kubelet.service.The cgroup driver I mentioned is systemd for both docker and kubernetesDockerversion 1.13.1Kubernetesversion 1.15.2Can any one suggest the solution. | Failed to get kubelets cgroup |
You need to use theshell formof the CMD statement. With theexec formof the statement, as you have now, there's no shell to replace environment variable.UseCMD "${LAMBDA_HANDLER}"instead.This is equivalent to this, using theexec form, which you can also use, if you prefer the exec formCMD [ "/bin/sh", "-c", "${LAMBDA_HANDLER}" ] | I have a Dockerfile and I am taking in a LAMBDA_NAME from a jenkins pipeline.I am passing in something like this: source-producerAnd I want to call the handler of this function, which is named handler in the code.This code does not workARG LAMBDA_NAME
ENV LAMBDA_HANDLER="${LAMBDA_NAME}.handler"
RUN echo "${LAMBDA_HANDLER}"
CMD [ "${LAMBDA_HANDLER}" ]The result of the run echo step gives back "sourceproducer.handler", which is correct.The code above produces this error
[ERROR] Runtime.MalformedHandlerName: Bad handler '${LAMBDA_HANDLER}': not enough values to unpack (expected 2, got 1)But, when this same value is hardcoded, it works fine and executes the lambda function.ARG LAMBDA_NAME
ENV LAMBDA_HANDLER="${LAMBDA_NAME}.handler"
RUN echo "${LAMBDA_HANDLER}"
CMD [ "sourceproducer.handler" ]How can I correctly use LAMBDA_HANDLER inside of CMD so that the top code block executes the same as the one below it? | Expand ARG/ENV in CMD dockerfile |
You should be good if you add privileged mode and make sure you're in host networking mode. This worked for me:>$ docker run --net host --privileged -v /proc/net/arp:/host/arp alpine cat /host/arp | How can I access the host ARP records from within a Docker container?I tried to mount a volume (in a docker-compose file)/proc/net/arp:/proc/net/arpbut found out that I can't make any volume with/proc. Then I tried to mount it elsewhere like/proc/net/arp:/root/arp, but then if Icat /root/arp, from within the container, the table comes out empty.docker run -v /proc/net/arp:/root/arp alpine cat /root/arp <-- returns empty tableIdeas? | Acessing ARP table of Host from Docker container |
Unsing the following configuration withhost.docker.internal:8081it works."DocumentDb": {
"TenantKey": "Default",
"Endpoint": "https://host.docker.internal:8081",
"AuthorizationKey": "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw=="
},So using the container name like DNS direction not works. I also have to do in Develpment environment because in other diferenthost.docker.internalnot works...version: '3.1'
services:
local-Proyect:
image: project-cosmos-image
container_name: project-cosmos-container
ports:
- 127.0.0.1:7000:433
- 127.0.0.1:7001:80
environment:
ASPNETCORE_ENVIRONMENT: Development
ASPNETCORE_URLS: http://+:433;http://+:80 | I lost amount of time trying to connect my app container with my database Azure Cosmos DB Emulator. I am using loggers object to know where my app break, and I found that the problem is in the connection of the container out of him. I tried to use the famous host.docker.internal direction to connect my host but using my container name (the public IP and DNS internal server of docker).Here is my appsettings.Development configuration:"DocumentDb": {
"TenantKey": "Default",
"Endpoint": "https://project-cosmos-container:8081",
"AuthorizationKey": "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw=="
},Here my Dokerfile-Cosmos (here I copy my app .ddl that I created before with dotnet build and dotnet publish):FROM microsoft/dotnet:2.2-aspnetcore-runtime
# We create the folder inside the container
WORKDIR /local-project
# We are coping all project executables that we created with dotnet build and dotnet publish
COPY ./bin/Release/netcoreapp2.2/publish/* ./
EXPOSE 8000
EXPOSE 8081
# We indicate to execute the program in the executable of the project
ENTRYPOINT ["dotnet", "Local.Proyect.Core.dll"]And finnally my docker-compose where I run the app:version: '3.1'
services:
local-Proyect:
image: project-cosmos-image
container_name: project-cosmos-container
ports:
- 127.0.0.1:7000:8000
environment:
ASPNETCORE_ENVIRONMENT: Development
ASPNETCORE_URLS: http://+:8000Maybe the problem is in the ports, I don't Know. You can see that I am trying use my port 7000 on my computer host to connect the container and the port 8081 (azure cosmos port) | I can't connect my ASP .NET app from Docker-container to my computer host database with "host.docker.internal:some-port" |
The option--ssl-cacert-fileis only for host verification not for authentication.I have found this example on how to add pem files inside an scp command:scp -i /path/to/your/.pemkey -r /copy/from/path user@server:/copy/to/pathThe parameter-i /path/to/your/.pemkeycan be passed in blacklabelops/volumerize
with the env variable `VOLUMERIZE_DUPLICITY_OPTIONS``Example:$ docker run -d \
--name volumerize \
-v jenkins_volume:/source:ro \
-v backup_volume:/backup \
-e "VOLUMERIZE_SOURCE=/source" \
-e "VOLUMERIZE_TARGET=scp:///backup" \
-e 'VOLUMERIZE_DUPLICITY_OPTIONS=--ssh-options "-i /path/to/your/.pemkey"' \
blacklabelops/volumerize | I have a couple of docker volumes i want to backup onto another server, using scp/sftp. I don't know how to deal with that so i decided to have a look atblacklabelops/volumerize GitHub project.This tool is based on the command line toolDuplicity. Dockerized and Parameterized for easier use and configuration.Tutorialis dealing with a jenkins docker, but i don't understand how to mention i'm want to use a pem file.I've tried different solution (adding -i option to scp command line) without any success at the moment.Duplicity man pageis mentioning the use of cacert pem files (--ssl-cacert-file option), but i suppose i have to create an env variable when running the docker (with -e option), and i don't know which name to use.Here what i have so far, can someone please point me in the right direction ?docker run -d --name volumerize -v jenkins_volume:/source:ro -v backup_volume:/backup -e "VOLUMERIZE_SOURCE=/source" -e "VOLUMERIZE_TARGET=scp://me@serverip/home/backup" blacklabelops/volumerize | Using Volumerize to backup my docker volumes with scp ? |
You should not add variables to the webserver, but to scheduler. If you are using LocalExecutor, the tasks are run in the context of Scheduler.Actually what tou should really do is to set all env variables to be the same for all the containers (this is explained herehttps://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html)Use the same configuration across all the Airflow components. While each component does not require all, some configurations need to be same otherwise they would not work as expected. A good example for that is secret_key which should be same on the Webserver and Worker to allow Webserver to fetch logs from Worker.There are a number of ways you can do it - just read the docker-compose documntation on thathttps://docs.docker.com/compose/environment-variables. You can also see the "Quick start" docker compose from Airflow docs where we used anchors - which is bit more sphisticated wayhttps://airflow.apache.org/docs/apache-airflow/stable/start/docker.htmlJust note that the "quick start" should be just inspiration, it is nowhere near production setup and if you want to make your own docker compose you need to really get a deeper understanding of the docker compose - as warned in the note in our docs. | I have an docker compose file which spins up the local airflow instance as below:version: '3.7'
services:
postgres:
image: postgres:9.6
environment:
- POSTGRES_USER=airflow
- POSTGRES_PASSWORD=airflow
- POSTGRES_DB=airflow
logging:
options:
max-size: 10m
max-file: "3"
webserver:
image: puckel/docker-airflow:1.10.6
restart: always
depends_on:
- postgres
environment:
- LOAD_EX=n
- EXECUTOR=Local
- FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
logging:
options:
max-size: 10m
max-file: "3"
volumes:
- ./dags:/usr/local/airflow/dags
- ${HOME}/.aws:/usr/local/airflow/.aws
ports:
- "8080:8080"
command: webserver
healthcheck:
test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
interval: 30s
timeout: 30s
retries: 3I want to add some Airflow variables which the underlying dag uses eg: CONFIG_BUCKET.
I have added them as AIRFLOW_VAR_CONFIG_BUCKET=s3://foo-bucket
in environment section of web server but it does not seem to work. Any ideas how can I achieve this ? | How to add airflow variables in docker compose file? |
I was searching for a while until I checked containerd's code and found this withincmd/ctr/commands/run/run_unix.go:149 if uidmap, gidmap := context.String("uidmap"), context.String("gidmap"); uidmap != "" && gidmap != "" {
150 uidMap, err := parseIDMapping(uidmap)
151 if err != nil {
152 return nil, err
153 }
154 gidMap, err := parseIDMapping(gidmap)
155 if err != nil {
156 return nil, err
157 }which basically means:
You have to provide both, theuidmapANDthegidmap, otherwise it won't work.Running the above container again with$ sudo ctr run --uidmap 0:5000:4999 --gidmap 0:5000:4999 docker.io/library/test:latest testdid the trick.Within the container:ps -eo ruser,rgroup,comm
RUSER RGROUP COMMAND
root root sh
root root psOn the host:$ ps -eo uid,gid,cmd | grep /bin/sh
126 128 /bin/sh /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/unity-greeter
5000 5000 /bin/sh | To better understand how to use the--uidmapwithctr, I've created a test container by means of the following steps. Thecontainerdversion is1.4.3.Build and Run Container:Build Dockerfile$ cat Dockerfile
FROM alpine
ENTRYPOINT ["/bin/sh"]with$ docker build -t test .
Sending build context to Docker daemon 143.1MB
Step 1/2 : FROM alpine
---> d6e46aa2470d
Step 2/2 : ENTRYPOINT ["/bin/sh"]
---> Running in 560b09f9b287
Removing intermediate container 560b09f9b287
---> 8506bfeab109
Successfully built 8506bfeab109
Successfully tagged test:latestSave the image as tar ball$ docker save test > test.tarImport it with containerd'sctr$ sudo ctr i import test.tar
unpacking docker.io/library/test:latest (sha256:9f7dabf0e4feadbca9bdc180422a3f2cdd7b545445180a3c23de8129dc95f29b)...doneCreate and run the container$ sudo ctr run --uidmap 0:5000:4999 docker.io/library/test:latest testThe uid map should map the container internal uid of 0 (root) to 5000 corresponding to ctr's manpage:--uidmap="": run inside a user namespace with the specified UID mapping range; specified with the format container-uid:host-uid:lengthCheck UID in container and on host:Within the container:ps -eo ruser,rgroup,comm
RUSER RGROUP COMMAND
root root sh
root root psOn the host:$ ps -eo uid,gid,cmd | grep /bin/sh
126 128 /bin/sh /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/unity-greeter
0 0 /bin/shIssueIt seems to not work,/bin/shruns as root (uid=0) within the container as well as on the host. | run container with containerd's ctr by means of uidmap to map to non-root user on the host |
This isn't acondaissue, but a Docker issue. The layers in a Dcoker image are read-only; you can't modify them. When you create an image with something likeRUN conda do_something
RUN conda clean --all -ywhatever the first command added to the image isfixedin that layer. The subsequent command doesn't remove anything from theimage, only that layer.To avoid adding the tarballs to the image in the first place, you need to remove themimmediatelyduring the creation of that layer, after you are doing using them but before the layer is fixed in the image.RUN conda do_something && conda clean --all -y | I am bit confused by the conda commandconda clean --all -yinside a docker script.
Generally, the idea is to shrink the final docker image.conda clean --all -yshould help to delete downloaded tarballs, and indeed, the docker log shows:Will remove 430 (853.4 MB) tarball(s).However, the final image size is identical whether I includeconda clean --all -yor not. Do I additionally need to explicitly delete any files withrm -rfor how can you explain that the final image size is not different? | Docker and conda - how does "clean --all -y" work? |
This does not look like an issue with Docker, it looks like an issue with Maven. Maven requires an absolute path for system scope dependencies. You can test this is the case by commenting out all the lines of your Dockerfile below...
RUN mvn -f /home/app/pom.xml
# comment out everything below this, I think you'll still see the failureBy the way, why are you using backslashes not forward slashes for yoursystemPath? Your Maven system scoped dependencies are being interpreted as relative paths, not absolute paths. When you fix this, your build should work as intended. | In my maven pom file I have some dependencies which are our own jar files from other projects which are not in repository.We have used 'system' scoped dependencies like
efaadmin
efaadmin
system
1.0
${basedir}\src\main\webapp\WEB-INF\lib\efaadmin.jar
Now when writing Dockerfile these dependencies have become our stumbling block.#
# Build stage
#
FROM maven:3.6.1-jdk-8-slim AS BUILD
COPY src /home/app/src
COPY pom.xml /home/app
COPY jars/*.jar /home/app/jars/
RUN mvn -f /home/app/pom.xml
#
# Package stage
#
FROM tomcat:7.0-jdk8-openjdk-slim
ENV CATALINA_HOME /usr/local/tomcat
ENV PATH $CATALINA_HOME/bin:$PATH
COPY --from=build /home/app/target/DrySign.war $CATALINA_HOME/webapps/
COPY --from=build /home/app/target/jars/* $CATALINA_HOME/webapps/xxxxx/WEB-INF/lib/
EXPOSE 8080
CMD ["catalina.sh", "run"]But docker is complaining:'dependencies.dependency.systemPath' for efaadmin:efaadmin:jar must specify an absolute path but is ./jars/efaadmin.jarHow to deal with this? | How to add jars from systempath in Docker |
The problem is with docker networking.Add--network hostto the docker run command so that it is:cd ~/.chainlink-kovan && docker run -p 6688:6688 -v ~/.chainlink-kovan:/chainlink -it --env-file=.env smartcontract/chainlink --network host local nThis fixes the issue. | Using docker-desktop on macOS.I'm trying to run a node following the instructions onthis page.The database name isnode, which is the same as the username:node. The user has access to the database and can log in usingpsqlclient.Connection strings I've tried in the .env file:postgresql://node@localhost/node
postgresql://node:password@localhost/node
postgresql://node:password@localhost:5432/node
postgresql://node:[email protected]:5432/node
postgresql://node:[email protected]/nodeWhen I run thestart command:cd ~/.chainlink-kovan && docker run -p 6688:6688 -v ~/.chainlink-kovan:/chainlink -it --env-file=.env smartcontract/chainlink local n, using docker-desktop on macOS, I get the following stack trace:2020-09-15T14:24:41Z [INFO] Starting Chainlink Node 0.8.15 at commit a904730bd62c7174b80a2c4ccf885de3e78e3971 cmd/local_client.go:50
2020-09-15T14:24:41Z [INFO] SGX enclave *NOT* loaded cmd/enclave.go:11
2020-09-15T14:24:41Z [INFO] This version of chainlink was not built with support for SGX tasks cmd/enclave.go:12
2020-09-15T14:24:41Z [INFO] Locking postgres for exclusive access with 500ms timeout orm/orm.go:69
2020-09-15T14:24:41Z [ERROR] unable to lock ORM: dial tcp 127.0.0.1:5432: connect: connection refused logger/default.go:139 stacktrace=github.com/smartcontractkit/chainlink/core/logger.Error
/chainlink/core/logger/default.go:117
...Does anyone know how I can resolve this? | Running a Chainlink Node - Can't connect to database |
You're correct, you need to pass in/this/isas a volume and the executable will write to that location.If you want to constrain the thing even more, you can pass/this/is/b.fileas a volume. You need to create it (simply viatouch) beforehand, otherwise Docker will consider it a directory and create it as such for you, but you'll know that the thing won't be able to create/this/is/c.fileor any other thing. | I just dockerized an executable that reads from a file and creates a new file in the very directory that file came from.I want to use Docker in that setup, so that I avoid installing numerous third-party libraries in the production environment.My problem now: I have file/this/is/a.fileon my underlying (host) file system and my executable is supposed to create/this/is/b.file.As far as I see it, the only chance to get this done is by mapping a volume that points to/this/isand then let the executable know where I mounted it to in the docker, container.Am I right? Or is there a way that I just passdocker run mydockerizedstuff /this/is/a.filewithout using Docker volumes? | Dockerized executable read/write on host filesystem |
Setting a secret only exposes that value at a filesystem location under/run/secrets. If you want to get that value into a variable, you would need to do that yourself as part of your container startup.For example, anENTRYPOINTscript like that this would make/run/secrets/usernameavailable asDB_USERNAME:#!/bin/sh
if [ -f /run/secrets/username ]; then
export DB_USERNAME=$(cat /run/secrets/username)
fi
exec "$@" | I have a simple docker compose that makes use of a secret. However I have been unable to access the secret. The logs show the/run/secrets/usernamebeing passed in the server but not the actual username. What's wrong with my setup? How do I get the secret value from DB_USERNAME within my service?version: "3.9"
services:
...
bank-microservice:
image: ${IMAGE_BANK}
restart: on-failure
networks:
- backend
expose:
- 80
secrets:
- username
environment:
- DB_USERNAME=/run/secrets/username
env_file:
- ./env/microservice.env
depends_on:
- db
secrets:
username:
file: ./secrets/username
... | How do I configure a secret in docker compose? |
Your redirection operator is working on host and not inside your container. Change below$ /usr/bin/docker exec -t $CI_PROJECT_NAME echo $KUBE_ADMIN_ACCOUNT > /opt/application/conf/kubeadminaccount.ymlto$ /usr/bin/docker exec -t $CI_PROJECT_NAME bash -c "echo $KUBE_ADMIN_ACCOUNT > /opt/application/conf/kubeadminaccount.yml" | Using GitLab-CI, I am attempting to echo a secret variable into a file inside a Docker container. The file exists and the user has permissions to write to the file yet I get aNo such file or directoryerror.$ /usr/bin/docker exec -t $CI_PROJECT_NAME ls -la /opt/application/conf/kubeadminaccount.yml
-rw-rw-r-- 1 nodeuser nodeuser 420 Aug 18 07:19 /opt/application/conf/kubeadminaccount.yml
$ /usr/bin/docker exec -t $CI_PROJECT_NAME whoami
nodeuser
$ /usr/bin/docker exec -t $CI_PROJECT_NAME echo $KUBE_ADMIN_ACCOUNT > /opt/application/conf/kubeadminaccount.yml
bash: line 69: /opt/application/conf/kubeadminaccount.yml: No such file or directory | File not found in Docker Container using GitLab-CI |
With the help of thisSO answerand the comments I was able to get it working. If my answer doesn't help you I suggest you look at that one when running Nginx in a docker container.For me it was moving theinclude /etc/nginx/mime.types;and addingsendfile on;outside myserverblock and in thehttpblockMy example.com.conf now looks like this:user nginx;
events {
worker_connections 4096; ## Default: 1024
}
http {
include /etc/nginx/mime.types;
sendfile on;
server {
listen 80;
listen [::]:80;
server_name example.com;
root /etc/nginx/html;
index index.html;
location ~ \.css {
add_header Content-Type text/css;
}
location ~ \.js {
add_header Content-Type application/x-javascript;
}
location / {
try_files $uri /index.html =404;
}
}
} | When uploading static files to my server using Nginx as the web server my css, javascript, and google fonts are not working as they do when testing the site on localhost.I'm running Nginx in a docker container using the base image.DockerfileFROM nginx
COPY example.com.conf /etc/nginx/nginx.conf
COPY build /etc/nginx/html/nginx.confuser nginx;
events {
worker_connections 4096; ## Default: 1024
}
http {
server {
listen 80;
listen [::]:80;
server_name example.com;
include /etc/nginx/mime.types;
root /etc/nginx/html;
index index.html;
location ~ \.css {
add_header Content-Type text/css;
}
location ~ \.js {
add_header Content-Type application/x-javascript;
}
location / {
try_files $uri /index.html =404;
}
}
}Can someone tell me whats wrong with my conf?Also when viewed on Chrome the console logs thisResource interpreted as Stylesheet but transferred with MIME type text/plainSome other SO post I looked at:SO postSO post | docker nginx not loading css styles |
You can try to use this github action:https://github.com/trilom/file-changes-actionGo over the docs to see how to use it. But basically an example would be similar to this:- name: Get file changes
id: get_file_changes
uses: trilom/[email protected]with:
githubToken: ${{ secrets.GITHUB_TOKEN }}
plaintext: true
- name: Echo file changes
run: |
echo Changed files: ${{ steps.get_file_changes.outputs.files }}
- name: do something on the changed files ussing ${{ steps.get_file_changes.outputs.files }}
.
.
.Hope that helps | Consider we Have 10 Docker files but i made some changes only in 1 Docker file.so In Github action we generally build all 10 docker files instead of 1 docker file.
So Is there any way to write conditions such that github actions should build that particular dockerfile which we made changes. | Build Specific Dockerfile from set of dockerfiles in github action |
You are running the container from the same directory where your folders are (the ones you are mounting). This means that the path should be prefixed with the current working directory:docker run -p 4242:4242 -p 8042:8042 --rm --name orthanc -v $(pwd)/orthanc/orthanc.json:/etc/orthanc/orthanc.json -v $(pwd)/orthanc/orthanc- db:/var/lib/orthanc/orthanc-db jodogne/orthanc-plugins /etc/orthanc -- verbose | I would like to start the orthanc server based on the below docker command. However when I execute the command, I get the error as shown below.Please note that both the orthanc.json and orthanc-db are present in the respective folders/orthanc/orthanc.json- orthanc.json is present under orthanc folder/orthanc/orthanc-db- orthanc-db is present under orthanc folder/etc/orthanc/orthanc.json- orthanc.json is present under /etc/orthanc folder/var/lib/orthanc/orthanc-db- orthanc-db is present under /var/lib/orthanc folderAll the paths listed above are valid. I am able to navigate to themDocker command to start orthanc serverdocker run -p 4242:4242 -p 8042:8042 --rm --name orthanc -v
/orthanc/orthanc.json:/etc/orthanc/orthanc.json -v /orthanc/orthanc-
db:/var/lib/orthanc/orthanc-db jodogne/orthanc-plugins /etc/orthanc --
verboseError message after executing the commandError response from daemon: OCI runtime create failed:
container_linux.go:345: starting container process caused "process_lin
ux.go:424: container init caused \"rootfs_linux.go:58: mounting
\\\"/orthanc/orthanc.json\\\" to rootfs \\\"/var/lib/docker/overlay2/
48131fde47610cf1bac93d0316e2c1d6dfbfdb90a0e6cc24344cc6a1308eaccd/merged\
\\"at \\\"/var/lib/docker/overlay2/48131fde47610cf1bac93d031
6e2c1d6dfbfdb90a0e6cc24344cc6a1308eaccd/merged/etc/orthanc/orthanc.json\
\\"caused \\\"not a directory\\\"\"": unknown: Are you tryin
g to mount a directory onto a file (or vice-versa)? Check if the
specified host path exists and is the expected type.Can you please help me fix this issue? I am trying to start the orthanc server through this docker command. not sure why it's throwing an error when the files are present. | Unable to start the server using docker command - Mount directory -OCI Runtime error |
It's not a Docker issue it's a Java issue. There are several ways to define classpath entries to run an executable jar.Shaded or Uber jar approachIn this case you sould create a shaded jar which contains all dependent classes in one executable jar file. Maven has a plugin calledApache Maven Shade Pluginto create that uber-jar artifact.Finally just run:java -jar shaded-artifact.jaror in DockerCMD ["java", "-jar", "shaded-artifact.jar"]Command line classpath approachIf the created jar artifact requires the existence of other (dependent) jars you have to specify the classpath. In this case copy all required jars into a folder e.g.liband use the following command:java -cp ':' As you can see either wildcard (*) character and multiple classpath element is allowed separated by:sojava -cp 'Customer.jar:libs/*' com.mycompany.Customerin DockerCMD ["java", "-cp", "Customer.jar:libs/*", "com.mycompany.Customer"]Classpath in MANIFEST approachAfter you collecteded all those dependent artifact into a folder then you just add aClass-Pathentry into yourMETA-INF/MANIFEST.MFfile like this:Class-Path: . lib/*and runjava -jar Customer.jaror in DockerCMD ["java", "-jar", "Customer.jar"]Which of those is the best depends on so many things, you have to choose.Edit:Based on updated question it seems the uber jar was created by assembly plugin usingjar-with-dependenciespredefined descriptor. This will create another jar file which placed under target (output) folder and its name ends with-jar-with-dependencies.jarUse that jar insead of basic artifact.Do a double check to make sure all ofentries point to an existing class. You've mentioned three different main class in the same question.com.companyname.Customercom.mycompany.Customercom.company.customers.CustomerPay attention for both of Linux and Java are case sensitive. On this basis the class name must beCustomerexactly and all folder names must be lowercase.Hope it helps. | I'm running a maven project in a docker container, I'm getting Could not find or load main class error.FROM maven:3.6.0-jdk-11-slim AS build
COPY src src
COPY pom.xml .
RUN mvn -f pom.xml clean package install
FROM openjdk:8-jre
COPY --from=build /target /opt/target
WORKDIR /target
RUN ls
CMD ["java", "-jar", "Customer.jar"]above assembly was created with following plugin in
maven-assembly-plugin
com.companyname.Customer
jar-with-dependencies
make-assembly
package
single
ErrorError: Could not find or load main class com.mycompany.CustomerQuestion: How do you set class path to a jar file in docker?EditI tested with the following but same issue.CMD ["java", "-cp", "Customer.jar:libs/*", "com.company.customers.Customer"]Error: Could not find or load main class
com.company.customers.Customer | docker with maven jar |
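To see which main class a built artifact actually advertises, the jar can be inspected directly; the artifact name below is an assumption based on the jar-with-dependencies descriptor mentioned in the answer:
# print the manifest and look for the Main-Class entry
unzip -p target/Customer-jar-with-dependencies.jar META-INF/MANIFEST.MF
# confirm the class file really exists inside the jar, with exactly matching package folders
jar tf target/Customer-jar-with-dependencies.jar | grep Customer.class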
Since the ffmpeg package is not available with yum package manager, I have manually installed ffmpeg and made it part of the container. Here are the steps:Downloaded the static build fromhere(the build for thepublic.ecr.aws/lambda/python:3.8 image is ffmpeg-release-amd64-static.tar.xzHereis a bit more info on the topic.Manually unarchived it in the root folder of my project (where my Dockerfile and app.py files are). I use a CodeCommit repo but this is not mandatory of course.Added the following line my Dockerfile:COPY ffmpeg-5.1.1-amd64-static /usr/local/bin/ffmpegIn therequirements.txtI added the following line (so that the python package managed installs ffmpeg-python package):ffmpeg-pythonAnd here is how I use it in my python code:import ffmpeg
...
process1 = (ffmpeg
.input(sourceFilePath)
.output("pipe:", format="s16le", acodec="pcm_s16le", ac=1, ar="16k", loglevel="quiet")
.run_async(pipe_stdout=True, cmd=r"/usr/local/bin/ffmpeg/ffmpeg")
)Note that in order to work, in the run method (or run_async in my case) I
needed to specify the cmd property with the location of the ffmpeg
executable.I was able to build the container and the ffmpeg is working properly for me.FROM public.ecr.aws/lambda/python:3.8
# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT}
COPY input_files ./input_files
COPY ffmpeg-5.1.1-amd64-static /usr/local/bin/ffmpeg
RUN chmod 777 -R /usr/local/bin/ffmpeg
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.lambda_handler" ] | I'm trying to install ffmpeg on docker for amazon lambda function.
Code for Dockerfile is:FROM public.ecr.aws/lambda/python:3.8
# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT}
# Install the function's dependencies using file requirements.txt
# from your project folder.
COPY requirements.txt .
RUN yum install gcc -y
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
RUN yum install -y ffmpeg
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ]I am getting an error:> [6/6] RUN yum install -y ffmpeg:
#9 0.538 Loaded plugins: ovl
#9 1.814 No package ffmpeg available.
#9 1.843 Error: Nothing to do | install ffmpeg on amazon ecr linux python |
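A sketch of the "download and unarchive" steps from the answer, assuming the ffmpeg-release-amd64-static.tar.xz archive has already been downloaded next to the Dockerfile (the extracted folder name depends on the ffmpeg version, 5.1.1 in the answer):
tar -xf ffmpeg-release-amd64-static.tar.xz   # unpacks a folder such as ffmpeg-5.1.1-amd64-static/
ls ffmpeg-5.1.1-amd64-static/ffmpeg          # the binary the Lambda code calls via cmd="/usr/local/bin/ffmpeg/ffmpeg" after the COPY step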
Create your own airflow.cfg (assume it is stored in ./config/airflow.cfg) and follow the Airflow Security Guide to define credentials. Then, mounting your config file into the docker container can help you: add ./config/airflow.cfg:/usr/local/airflow/airflow.cfg to your webserver service in docker-compose.Example:volumes:
- ./dags:/usr/local/airflow/dags
- ./config/airflow.cfg:/usr/local/airflow/airflow.cfg | I am usingpuckel/docker-airflowto deploy airflow.
Currently, the webserver is not asking for any credentials to login.
How can I add a user to it? Maybe i have to add some environment variable in docker-compose.yml, but i am unable to find it. The docker-compose file ishereThanks in advance. | Enable credentials in puckel/docker-airflow |
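For the Airflow 1.x password backend used by the puckel image, the credential part of that custom airflow.cfg and the user creation could look roughly like this; the service name webserver, the [webserver] settings and all values are assumptions to adapt (the image also needs the apache-airflow[password] extra available):
# settings to append to ./config/airflow.cfg
cat >> ./config/airflow.cfg <<'EOF'
[webserver]
authenticate = True
auth_backend = airflow.contrib.auth.backends.password_auth
EOF
# create a login user inside the running webserver container
docker-compose exec webserver python -c "
from airflow import models, settings
from airflow.contrib.auth.backends.password_auth import PasswordUser
u = PasswordUser(models.User()); u.username = 'admin'; u.email = 'admin@example.com'; u.password = 'changeme'
s = settings.Session(); s.add(u); s.commit(); s.close()"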
Well, in the end the solution wasmuch simplerthan I expected.I started from the base where I mount the docker socket inside the container (I know that this practice is not recommended, but in my case, I know that it does not pose security problems), using the command in docker-compose:volumes:
- /var/run/docker.sock:/var/run/docker.sockThen, it was as simple as using theDocker library for python, which gives a complete SDK through that socket that allowed me to restart the container inside the python script in an ultra-simple way.import docker
[...]
docker_client = docker.DockerClient(base_url='unix://var/run/docker.sock')
docker_client.containers.get("container_name").restart() | My Objective:I want to be able to restart a container based on the official Python Image using some command inside the container.My system:I have a own Docker image based on the official python image which look like this:FROM python:3.6.15-buster
WORKDIR /webserver
COPY requirements.txt /webserver
RUN /usr/local/bin/python -m pip install --upgrade pip
RUN pip3 install -r requirements.txt --no-binary :all:
COPY . /webserver
ENTRYPOINT ["./start.sh"]As you can see, the image does not execute a single python file but it executes a script called start.sh, which looks like this:#!/bin/bash
echo "Starting"
echo "Env: $ENTORNO"
exec python3 "$PATH_ENTORNO""Script1.py" &
exec python3 "$PATH_ENTORNO""Script2.py" &
exec python3 "$PATH_ENTORNO""Script3.py" &All of this works perfectly, but, I want that if, for example, script 3 fails, the entire container based on this image get restarted.My approach:I had two ideas about this problem. First, try to execute a reboot command in the python3 script, something like this:from subprocess import call
[...]
call(["reboot"])This does not work inside the Python Debian image, because of error:reboot: command not foundThe other approach was to mount the docker.sock inside the container, but the error this time is:root@MachineName:/var/run# /var/run/docker.sock docker ps
bash: /var/run/docker.sock: Permission deniedI dont know if I'm doing right these two approach, or if anyone has any idea about this but any help will be very appreciated. | How to restart Python Docker Container from inside |
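With the socket mounted the same way, the restart can also be triggered without the Python SDK by calling the Engine API directly over the socket — a sketch, with container_name as a placeholder:
curl --unix-socket /var/run/docker.sock -X POST http://localhost/containers/container_name/restart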
Your Dockerfile only copies requirements.txt and app.py into the image. In order for the dockerized app.py to have access to templates and its contents, you need to copy templates as well. Because COPY copies a directory's contents rather than the directory itself, the destination needs to include the templates folder name, so add the line:COPY templates /app/templates/ | New to docker and trying to run a flask mysql app but getting a jinja2.exceptions.TemplateNotFound: index.html . No errors if I run python app.py outside of docker.Directory structure-docker-compose.yml
-app
-templates
-index.html
-app.py
-Dockerfile
-requirements.txt
-db
-init.sqldocker-compose.ymlversion: "2"
services:
app:
build: ./app
links:
- db
ports:
- "5000:5000"
db:
image: mysql:5.7
ports:
- "32000:3306"
environment:
MYSQL_ROOT_PASSWORD: root
volumes:
- ./db:/docker-entrypoint-initdb.d/:roDockerfileFROM python:3.6
EXPOSE 5000
WORKDIR /app
COPY requirements.txt /app
RUN pip install -r requirements.txt
ENV IN_DOCKER_CONTAINER Yes
COPY app.py /app
CMD python app.pyrequirements.txt:Flask==1.0.2
Jinja2==2.10
gunicorn==19.6.0
flask-mysqlpart of my app.py:@app.route('/')
def index():
conn = mysql.connect()
cursor = conn.cursor()
try:
query = '''SELECT * from favorite_colors'''
cursor.execute(query)
data = cursor.fetchall()
except Exception as e:
return str(e)
finally:
cursor.close()
conn.close()
return render_template('index.html', MyExampleVar=str(data)) | Docker flask - jinja2.exceptions.TemplateNotFound: index.html |
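After adding the COPY line, a quick sanity check that the templates actually end up inside the image (service name app taken from the compose file above; a sketch):
docker-compose build app
docker-compose run --rm app ls -R /app/templates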
The solution was, as Kootli suggested in the comments, to use https instead of tcp as the protocol.Engine API URL:https://myhost:2376 | I want to use the Docker integration in IntelliJ to connect to a protected remote Docker socket:As you can see in the above picture I'm getting the following error:Cannot connect: java.io.IOException: Channel disconnected before any data was receivedWhen I set the Docker environment variables DOCKER_TLS_VERIFY=1, DOCKER_HOST=tcp://myhost:2376, DOCKER_CERT_PATH=/path/to/certs/ to the same values as in the IntelliJ configuration and try to connect via terminal it's working perfectly.Does anyone know what's causing this error and how I can fix it? | IntelliJ cannot connect to protected tcp Docker socket
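A way to confirm from a terminal that the endpoint really speaks TLS/HTTPS — which is why the Engine API URL field needs the https:// scheme — using the host and cert path from the question (the ca.pem/cert.pem/key.pem file names are the Docker defaults and an assumption here):
curl --cacert /path/to/certs/ca.pem \
     --cert /path/to/certs/cert.pem \
     --key /path/to/certs/key.pem \
     https://myhost:2376/version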
Remember that arguments after the container image name are simply passed to the ENTRYPOINT script. So you can write:docker run --entrypoint my-script.sh my-image:latest arg1 arg2For example, if I havemy-script.sh(mode0755) containing:#!/bin/sh
for arg in "$@"; do
echo "Arg: $arg"
doneAnd a Dockerfile like this:FROM docker.io/alpine:latest
COPY my-script.sh /usr/local/bin/
ENTRYPOINT ["date"]Then I can run:docker run --rm --entrypoint my-script.sh my-image arg1 arg2And get as output:Arg: arg1
Arg: arg2If you want to run an arbitrary sequence of shell commands, you can of course do this:docker run --rm --entrypoint sh my-image \
-c 'ls -al && my-script.sh arg1 arg2' | I am trying to override the entrypoint in adockerimage with a script execution that accepts arguments, and it fails as follows▶ docker run --entrypoint "/bin/sh -c 'my-script.sh arg1 arg2'" my-image:latest
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/sh -c 'myscript.sh arg1 arg2'": stat /bin/sh -c 'my-script.sh arg1 arg2': no such file or directory: unknown.However when I exec to the container, the above command succeeds:▶ docker run --entrypoint sh -it my-image:latest
~ $ /bin/sh -c 'my-script.sh arg1 arg2'
SuccessAm I missing something in the syntax? | Execute commands with args and override entrypoint on docker run
Okay, so having the repo helped to fix the issue.Issue #1 - www.conf being copied in apache containerYou had the below statement in your apache container DockerfileCOPY ./www.conf /usr/local/etc/php-fpm.d/www.confThis is actually intended for the php container which will be running php-fpm, not for the apache container.Issue #2 - Socket was never being createdYour volume bind- "./php7.2-fpm.sock:/run/php/php7.2-fpm.sock"was creating the socket file, and it was not being created by php-fpm as such. So you created a blank file, and trying to connect to it won't do anything.Issue #3 - No config in php to create socketThe docker container by default listens on 0.0.0.0:9000 inside the fpm container. You needed to override the zz-docker.conf file inside the container to fix the issue.zz-docker.conf[global]
daemonize = no
[www]
listen = /run/php/php7.2-fpm.sock
listen.mode = 0666Updated docker fileFROM php:7.2-rc-fpm-alpine
LABEL maintainer="Eakkapat Pattarathamrong ([email protected])"
RUN docker-php-ext-install \
sockets
RUN set -x \
&& deluser www-data \
&& addgroup -g 500 -S www-data \
&& adduser -u 500 -D -S -G www-data www-data
COPY php-fpm.d /usr/local/etc/php-fpm.d/Issue #4 - Sockets being shared as volumes to hostYou should be sharing sockets using a named volume, so the socket should not be on host at all.Updated docker-compose.ymlversion: "2"
services:
php:
build: "./php"
container_name: "php"
volumes:
- "./code:/usr/local/apache2/htdocs"
- "phpsocket:/run/php"
apache2:
build: "./apache2"
container_name: "apache2"
volumes:
- "./code:/usr/local/apache2/htdocs"
- "phpsocket:/run/php"
ports:
- 7080:80
links:
- php
volumes:
phpsocket:After fixing all the issues I was able to get the php page working | I try to set up Apache2 and PHP-FPM via unix socket but result is(111)Connection refused: AH02454: FCGI: attempt to connect to Unix domain socket /run/php/php7.2-fpm.sock (*) faileddocker-compose.ymlversion: "2"
services:
php:
build: "php:7.2-rc-alpine"
container_name: "php"
volumes:
- "./code:/usr/local/apache2/htdocs"
- "./php7.2-fpm.sock:/run/php/php7.2-fpm.sock"
apache2:
build: "httpd:2.4-alpine"
container_name: "apache2"
volumes:
- "./code:/usr/local/apache2/htdocs"
- "./php7.2-fpm.sock:/run/php/php7.2-fpm.sock"
ports:
- 80:80
links:
- phpwww.conflisten = /run/php/php7.2-fpm.sockhttpd-vhosts.conf
SetHandler "proxy:unix:/run/php/php7.2-fpm.sock|fcgi://localhost/"
But it's work when connect via TCP.www.conflisten = 127.0.0.1:9000httpd-vhosts.conf
SetHandler "proxy:fcgi://php:9000"
| How to set up Apache2 and PHP-FPM via unix socket? |
So I think I got it. It was really simple. In the wordpress portion of my .yml file I needed to include WORDPRESS_DB_NAME: testdatabase. By doing that, it used my named testdatabase to install wordpress to. Hope this helps people who might stumble across this. Now the .yml file looks like this:version: '3'
services:
db:
image: mysql:5.7
volumes:
- db_data:/var/lib/mysql2
restart: always
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: somerootwordpresspw
MYSQL_DATABASE: testdatabase
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
wordpress:
volumes:
- ./WP-TEST/:/var/www/html/
depends_on:
- db
image: wordpress:latest
ports:
- "80:80"
restart: always
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_NAME: testdatabase
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: wordpress
volumes:
db_data: | I'm new to docker all together - but am trying to setup a local test environment to play with some wordpress things.So I went to the docker site and pulled up a default docker .yml file on how to get it going easily.I've made just a couple changes, but mostly this is a straight forward document.version: '3'
services:
db:
image: mysql:5.7
volumes:
- db_data:/var/lib/mysql2
restart: always
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: somerootwordpresspw
MYSQL_DATABASE: testdatabase
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
wordpress:
volumes:
- ./WP-TEST/:/var/www/html/
depends_on:
- db
image: wordpress:latest
ports:
- "80:80"
restart: always
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: wordpress
volumes:
db_data:When I rundocker-compose upwith the above .yml file, I see this error:MySQL "CREATE DATABASE" Error: Access denied for user 'wordpress'@'%' to database 'wordpress'Which I find odd, because I'm naming the databasetestdatabase, so why is it trying to create a database named wordpress?When I connected with SQL Pro, I could seetestdatabase, but according to the console it's trying to createwordpressdb.How do I get it to connect to my named DB, instead of constantly failing to createwordpress? | Issue getting docker to access my database properly with wordpress |
You're trying to run two separate programs, so run them in two separate containers.# Dockerfile.app
FROM node:16.14.2
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . ./
ENV HOST 0.0.0.0
ENV NODE_ENV production
EXPOSE 8080
CMD npm run start# Dockerfile.nginx
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/configfile.templateYou might use a system like Docker Compose to run the two parts together:# docker-compose.yml
version: '3.8'
services:
app:
build:
context: .
dockerfile: Dockerfile.app
nginx:
build:
context: .
dockerfile: Dockerfile.nginx
ports:
- 8080:80Runningdocker-compose up -dwill start both containers together. In your Nginx configuration, make sure toproxy_pass http://app:8080, using the Compose service name and the port number the service is listening on, to forward requests to the other container.(The Nginx Dockerfile looks short, but it's correct. The Docker Hubnginximage already knows how to run theenvsubstline from your script in its own entrypoint script and it has a correct default command already.)There's two basic problems in the setup you show in the question, both related to trying to run two programs in the same container. The first is that you can't merge images, having a secondFROMline makes Docker start over from the new base image. (So your final image containsonlyNginx, not Node or your built application, hence thenpm not founderror.) The second you'll run into is that your script will start your application, but not start the Nginx proxy until after the application exits. There are some common workarounds to this (like using a background process) but it essentially results in one process or the other being unmonitored by Docker, so your application could potentially fail and Docker wouldn't notice it to be able to restart it. | I am not sure what I may be doing wrong but I have the followingscript.shfile sitting at the root of my project:script.sh#!/bin/sh
npm run start
envsubst '\$PORT' < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf
nginx -g 'daemon off;'Then I referenced the above script in myDockerfileas shown below:Dockerfile# Build environment
FROM node:16.14.2
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . ./
# server environment
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/configfile.template
ENV HOST 0.0.0.0
ENV NODE_ENV production
EXPOSE 8080
COPY script.sh /
RUN chmod +x /script.sh
ENTRYPOINT ["/script.sh"]After building the Docker image successfully, I attempted to run it as a container but all I keep getting back is the following error:/script.sh: line 2: npm: not foundI expect that the script should be able to pick up the already installed npm from the environment.What can I do differently to make this work? | NPM not found when using npm run start command within shell script from a docker container |
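For completeness, one possible minimal reverse-proxy configuration for the nginx side, matching the 8080:80 port mapping and the proxy_pass http://app:8080 advice above (written as a shell heredoc so it can be pasted directly; this is a sketch, not the asker's original configfile.template — if the ${PORT} templating is kept, the listen line would use ${PORT} instead of 80):
cat > nginx.conf <<'EOF'
server {
    listen 80;
    location / {
        proxy_pass http://app:8080;
    }
}
EOF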
This is what you've missed:services:
appserver:
build_as_root:
- apt-get update -y
- apt-get install libmcrypt-dev
- pecl install mcrypt-1.0.1
- docker-php-ext-enable mcryptYou can use the following:name: myapp
recipe: drupal7
config:
webroot: web
php: '7.2'
proxy:
pma:
- pma.myapp.lndo.site
services:
pma:
type: phpmyadmin
appserver:
build_as_root:
- apt-get update -y
- apt-get install libmcrypt-dev
- pecl install mcrypt-1.0.1
- docker-php-ext-enable mcrypt | Following example inHow to install mcrypt on DockerI came to this:name: myapp
recipe: drupal7
config:
webroot: web
php: '7.2'
proxy:
pma:
- pma.myapp.lndo.site
services:
pma:
type: phpmyadmin
appserver:
extras:
- "apt-get update -y"
- "apt-get install libmcrypt-dev"
- "pecl install mcrypt-1.0.1"
- "docker-php-ext-enable mcrypt"After rebuilding I see:$ lando php -m | grep mcrypt
mcryptBut in my web application when I look at the page with phpinfo(), then there is no trace of mcrypt. Please help me out to install php-mcrypt correctly. | How to install php-mcrypt in lando with php 7.2? |
You are extremely close. What I would add is, that you have a host specific.envfile, seeEnvironment variables in Compose, on each computer, in the same folder as thedocker-compose.yml, withDATA_PATH=/mnt/shared/dataor whatever value forDATA_PATHyou like. Just add that.envto your.gitignore, so that every host keeps his own config off the repository and that's it. | I'm working on a project with multiple collaborators; to share code and compute environment, we've setup a github repository which includes aDockerfileanddocker-compose.ymlfile. I can work on code and my collaborators can just pull the repository, rundocker-compose upand have access to my jupyter notebooks in the same environment that I develop them.The only problem with this is that, because we are working at different sites, the data that we are computing over is in different locations. So on my end, I want mydocker-compose.ymlto include:volumes:
- /mnt/shared/data:/datawhile my collaborators need it to say something likevolumes:
- /Volumes/storage/data:/dataI get that one way to do this would be to use an environment variable; in thedocker-compose.ymlfile:volumes:
- "$DATA_PATH":/dataThis forces them to run something like:DATA_PATH=/Volumes/storage/data docker-compose upAs a solution, this isn't necessarily a problem, but it feels clunky to me, and fails to be self-documenting in the repository. I can wrap docker-compose in a shell script (a potential solution to almost any problem), but this also feels clunky. I can't help but suspect that there's a better solution here. Does docker-compose allow for this kind of functionality? Is there a best-practices way of accomplishing this? If not, I'm curious if anyone knows what the motivation behind excluding this functionality might be and/or why it isn't considered a good idea.Thanks in advance. | How to specify site-specific volumes for docker-compose |
AFAIK there is no such option as of now. Each node is responsible of its own cleanup. There is a commanddocker system prune -fthat you can use to clear container data.But tagged images can be deleted usingdocker rmionly. See below issueshttps://github.com/moby/moby/issues/24079 | On the local host, I can remove an image using eitherdocker image rmordocker rmi.What if my current host is a manager node in a Docker swarm and I wish to cascade this operation throughout the swarm?When I first created the Docker service, the image was pulled down on each node in the swarm. Removing the service did not remove the image and all nodes retain a copy of the image.It feels natural that if there's a way to "push" an image out to all the nodes then there should be an equally natural way to remove them too without having to SSH into every single machine :'( Plus, this is a real problem. Sooner or later the nodes are bound to have no more disk space! | How to remove an image across all nodes in a Docker swarm? |
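Until a built-in option exists, one common workaround is to fan the cleanup out from a manager node over SSH — a sketch that assumes SSH access to every node, resolvable hostnames, and that no running service still uses the image:
for node in $(docker node ls --format '{{.Hostname}}'); do
    ssh "$node" docker image rm myimage:mytag
done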
I've found a way to fix that authentication issue. I needed to add the --default-authentication-plugin=mysql_native_password command and the MYSQL_ROOT_PASSWORD=ppshein123456 environment variable to the docker-compose file.command: --default-authentication-plugin=mysql_native_password
restart: always
ports:
- 3306:3306
environment:
- MYSQL_ROOT_PASSWORD=ppshein123456
- MYSQL_ALLOW_EMPTY_PASSWORD=yes | NodeJS cannot connect to MySQL latest version or either 8 onwards and encountered following error message:ERROR: connect ECONNREFUSED 172.21.0.2:3306Here is my docker-compose fileversion: '2.1'
services:
db:
build: ./db
networks:
- ppshein
environment:
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
healthcheck:
test: "exit 0"
node:
build: ./app
depends_on:
db:
condition: service_healthy
ports:
- 3000:3000
networks:
- ppshein
networks:
ppshein:Here is db DockerfilesFROM mysql:5
COPY init_db.sql /docker-entrypoint-initdb.d/init_db.sqlCREATE DATABASE IF NOT EXISTS database_docker;
GRANT ALL PRIVILEGES on database_docker.*
TO 'root'@'%' IDENTIFIED BY 'ppshein123456'
WITH GRANT OPTION;NodeJS DockerfileFROM node:9.10.1
ENV NODE_ENV=docker
COPY ./ /var/www
WORKDIR /var/www/
RUN yarn install && yarn add sequelize-cli -g
EXPOSE 3000
ENTRYPOINT [ "npm", "run", "docker" ]Config.json"docker": {
"username": "root",
"password": "ppshein123456",
"database": "database_docker",
"host": "db",
"dialect": "mysql",
"logging": false
}But everything is working file when I've changed toFROM mysql:5butFROM mysqlorFROM mysql:8, I've encountered above error I've mentioned. Please let me know which kind of configuration do I need to miss it? | NodeJS could not connect to MYSQL latest version inside Docker Container |
According to thedocumentation, options should precede the image name.$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]please try the following:docker run -p 5984:5984 -d -v couchdbvolume:/opt/couchdb/data --name some-couchdb couchdb | In my ubuntu 18.04, installed couch db usingthis repo. In order to data persistance, i have created docker volume using the commanddocker volume create --name couchdbvolume.I useddocker run -p 5984:5984 -d couchdb -v couchdbvolume:/opt/couchdb/data --name some-couchdbcommand to create new docker process. Instead of using existing volume, every time docker creates new volume. So i loss data in every restart.As perthis question, un-named volumes are created if the docker file doesn't have name in volume keyword. I think because ofthis linethe volume doesn't have name. so it creates un-named volume.Instead of multiple docker volume,
I expect, only one docker volume(i have only one couchdb docker image) | Instead of using existing docker volume, couchdb docker image always create new volume |
Apart from the official "solution" from IntelliJ, I find it easier with this workaround: Help -> Find Action -> Registry; disable python.use.targets.api; try to configure the interpreter again. There is an official solution from IntelliJ that you can check here:https://intellij-support.jetbrains.com/hc/en-us/community/posts/6870884026898-Pycharm-2022-2-upgrade-and-Docker-issues | I have Pycharm 2022 and when configuring a docker Python interpreter, Pycharm is not able to find the remote docker service, it seems that it cannot find it although the service is running (and I have the pro license):Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2022-11-07 16:22:43 CET; 22h ago TriggeredBy: ● docker.socket
Docs: https://docs.docker.com Main PID: 1582 (dockerd)
Tasks: 12
Memory: 71.7M
CGroup: /system.slice/docker.service
└─1582 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sockI would appreciate any help regarding this. | Pycharm 2022 cannot connect to the docker service. It does not find it |
Found the magic. I don't completely follow what's going on here but I'll try to explain itFROM busybox
ENV AVAR=hello
ENV AVAR2=world
ENTRYPOINT ["/bin/sh", "-c", "echo $(eval echo $@)", "$@"]
CMD ["${AVAR}", "${AVAR2}"]
docker run -it --rm test
> hello world
docker run -it --rm test world
> worldMy attempt at explanation (I'm really not sure if this is right):CMDin exec form is passed as an argument toENTRYPOINTwithout shell substitution. I'm taking those values and passing them as positional arguments to/bin/sh -c ...which is why I need the "extra" "$@" at the end of theENTRYPOINTarray.WithinENTRYPOINTI need to expand$@and do parameter substitution on the result of the expansion. So in a subshell ($(...)) I callevalto do parameter substitution and then echo the result which ends up just being the contents ofCMDbut with variables substituted.If I pass in an argument todocker runit simply takes place ofCMDand is evaluated correctly. | I'm trying to useENTRYPOINTandCMDsuch thatENTRYPOINTis the script I am calling andCMDprovides the default arguments to theENTRYPOINTcommand but will be overridden by any arguments given todocker run.The part I'm struggling with is how to have environment variable expanded in my default arguments usingCMD.For example. Given this dockerfile built as tagtest:FROM busybox
ENV AVAR=hello
ENTRYPOINT ["/bin/sh", "-c", "exec echo \"$@\""]
CMD ["${AVAR}"]I am expecting the following results:docker run -it --rm test
> hello
docker run -it --rm test world
> worldNote:I'm just usingechohere as an example. In my actual Dockerfile I'll be calling./bin/somescript.shwhich is a script to launch an application I have no control over and is what I am trying to pass arguments to.This questionis similar but is asking about expanding variables in theENTRYPOINT, I'm trying to expand variables inCMD.I've tried many combinations ofshell/execform for bothENTRYPOINTandCMDbut I just can't seem to find the magic combination:FROM busybox
ENV AVAR=hello
ENTRYPOINT ["/bin/sh", "-c", "exec echo \"$@\""]
CMD ${AVAR}
docker run -it --rm test
> -c ${AVAR}Is what I'm trying to do possible?Many more failed attemptsThis is the closest I can get:FROM busybox
ENV AVAR=hello
ENV AVAR2=world
ENTRYPOINT ["/bin/sh", "-c", "echo $@", "$@"]
CMD ["${AVAR}", "${AVAR2}"]This works fine when I pass in an argument to the run command:docker run -it --rm test world
> worldBut it doesn't expand the default arguments when not given a command:docker run -it --rm test
> ${AVAR} ${AVAR2} | How do I pass default CMD to ENTRYPOINT with variable expansion? |
Just as a reference, this post answers the question submitted here where a policy organization does not allow external internet access, limiting package installation and/or using different Python versions in Vertex AI. | How to create a custom Docker container to use with Google Workbench and connect to Proxy?Create the following Dockerfile
# Install JupyterLab and any other required packages
RUN pip install jupyter -U && pip install jupyterlab
# Expose the JupyterLab port
EXPOSE 8080
ENV pwd=""
ENTRYPOINT exec jupyter-lab --no-browser --ip=0.0.0.0 --port=8080 --port-retries=0 --allow-root --NotebookApp.token="$pwd" --NotebookApp.password="$pwd" --ServerApp.allow_origin="*" --ServerApp.root_dir="/home/jupyter" --ServerApp.allow_origin_pat="(https?://)?[0-9a-z]+-dot-[\-0-9a-z]*\.notebooks\.googleusercontent\.com" --ServerApp.disable_check_xsrf=True --ServerApp.allow_remote_access=TrueBuild and push containerPROJECT_ID=""
CONTAINER_NAME
CONTAINER_URL=gcr.io/${PROJECT?}/${CONTAINER_NAME?}:dev`
docker build -t ${CONTAINER_URL} .
docker push ${CONTAINER_URL}Create a new User Managed Notebook with a Custom container using${CONTAINER_URL} | How to install a custom container with latest Python + JupyterLab version? |
ADDonly decompresses local tar files, not necessarily compressed single files. It may work to package the contents in a tar file, even if it only contains a single file:ADD ./data/databases/file.tar.gz /data/databases/(cd data/databases && tar cvzf file.tar.gz file.db)
docker build .If you're using the first approach, you must use a multi-stage build here. The problem is that eachRUNcommand generates a new image layer, so the resulting image is always the previous layerpluswhatever changes theRUNcommand makes;RUN rm a-large-filewill actually result in an image that's slightly larger than the image that contains the large file.TheBusyBoxtool set includes, among other things, an implementation ofunzip(1), so you should be able to split this up into a stage that just unpacks the large file and then a stage that copies the result in:FROM busybox AS unpack
WORKDIR /unpack
COPY data/databases/file.db.zip /
RUN unzip /file.db.zip
FROM python:3.8-slim
COPY --from=unpack /unpack/ /data/databases/In terms of the Docker image any of these approaches will create a single very large layer. In the past I've run into operational problems with single layers larger than about 1 GiB, things likedocker pushhanging up halfway through. With the multi-stage build approach, if you have multiple files you're trying to copy, you could have severalCOPYsteps that break the batch of files into multiple layers. (But if it's a single SQLite file, there's nothing you can really do.) | I'm trying to uncompress a file and delete the original, compressed, archive in myDockerfileimage build instructions. I need to do this because the file in question is larger than the2GBlimit set by Github on large file sizes (seehere). The solution I'm pursuing is to compress the file (bringing it under the2GBlimit), and then decompress when I build the application. I know it's bad practice to build large images and plan to integrate a external database into the project but don't have time now to do this.I've tried various options, but have been unsuccessful.Compress the file in.zipformat and useapt-getto installunzipand then decompress the file withunzip:FROM python:3.8-slim
#install unzip
RUN apt-get update && apt-get install unzip
WORKDIR /app
COPY /data/databases/file.db.zip /data/databases
RUN unzip /data/databases/file.db.zip && rm -f /data/databases/file.db.zip
COPY ./ ./This fails withunzip: cannot find or open /data/databases/file.db.zip, /data/databases/file.db.zip.zip or /data/databases/file.db.zip.ZIP.I don't understand this, as I thoughtCOPYadded files to the image.Following thisadvice, I compressed the large file withgzipand tried to use theDockernativeADDcommand to uncompress it, i.e.:FROM python:3.8-slim
WORKDIR /app
ADD /data/databases/file.db.gz /data/databases/file.db
COPY ./ ./While this compiles without error, it does not decompress the file, which I can see usingdocker exec -t -i clean-dash /bin/bashto explore the image directory structure. Since the large file is agzipfile, my understanding isADDshoulddecompress it, i.e. from thedocs.How can I solve these requirements? | Unzip local file and delete original in Dockerfile image build |
With Docker, it will manage the/etc/hostsfor you when you execute the Docker CLIdocker run, seeManaging /etc/hosts:Your container will have lines in /etc/hosts which define the hostname
of the container itself as well as localhost and a few other common
things.And for Azure Container Instance, specify a command line when you create a container instance to override the command line baked into the container image. This is similar to the --entrypoint command-line argument to docker run. The container instance would terminate after executing the command. For more details, see Command line override.I suggest you can make an interactive shell with the container instance through the CLI command az container exec containerName --exec-command "/bin/sh" if the image has /bin/sh and the container instance has a public IP.And if you have more complex actions with the container, maybe Azure Kubernetes Service is more appropriate for you. | I try to edit /etc/hosts through the echo IP Hostname >> /etc/hosts command, but it seems that ACI rewrites the file.
I've already tried putting it in dockerfile and also through the --command-line but none works. | How to edit /etc/hosts in Azure Container Instances? |
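A sketch of the command-line override idea with the az CLI — resource group, names, IP, hostname and the final exec'd program are all placeholders:
az container create \
  --resource-group my-rg \
  --name my-app \
  --image myregistry.azurecr.io/my-app:latest \
  --command-line "/bin/sh -c 'echo 10.0.0.5 internal-host >> /etc/hosts && exec /app/start'"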
There is a very important difference between therootand thealiasdirectives. This difference exists in the way the path specified in therootor thealiasis processed.rootthelocationpart is appended torootpartfinal path =root+locationaliasthelocationpart is replaced by thealiaspartfinal path =aliasTo illustrate:Let's say we have the configlocation /static/ {
root /var/www/app/static/;
autoindex off;
}In this case the final path that Nginx will derive will be/var/www/app/static/staticThis is going to return404since there is nostatic/withinstatic/This is because the location part is appended to the path specified in theroot. Hence, withroot, the correct way islocation /static/ {
root /var/www/app/;
autoindex off;
}On the other hand, withalias, the location part getsdropped. So for the configlocation /static/ {
alias /var/www/app/static/;
autoindex off; ↑
} |
pay attention to this trailing slashthe final path will correctly be formed as/var/www/app/staticIn a way this makes sense. Thealiasjust lets you define a new path to represent an existing "real" path. The location part is that new path, and so it gets replaced with the real path. Think of it as a symlink.Root, on the other hand is not a new path, it contains some information that has to be collated with some other info to make the final path. And so, the location part is used, not dropped.The case for trailing slash inaliasThere is no definitive guideline about whether a trailing slash is mandatory perNginx documentation, but a common observation by people here and elsewhere seems to indicate that it is.A few more places have discussed this, not conclusively though.https://serverfault.com/questions/376162/how-can-i-create-a-location-in-nginx-that-works-with-and-without-a-trailing-slashttps://serverfault.com/questions/375602/why-is-my-nginx-alias-not-working | I need to serve my app through my app server at8080, and my static files from a directory without touching the app server.# app server on port 8080
# nginx listens on port 8123
server {
listen 8123;
access_log off;
location /static/ {
# root /var/www/app/static/;
alias /var/www/app/static/;
autoindex off;
}
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}Now, with this config, everything is working fine. Note that therootdirective is commented out.If I activaterootand deactivate thealias, it stops working. However, when I remove the trailing/static/fromroot, it starts working again.Can someone explain what's going on? | Nginx -- static file serving confusion with root & alias |
Errors are stored in the nginx log file. You can specify it in the root of the nginx configuration file:error_log /var/log/nginx/nginx_error.log warn;On Mac OS X withHomebrew, the log file was found by default at the following location:/usr/local/var/log/nginx | I'm using Django withFastCGI+ nginx. Where are the logs (errors) stored in this case? | Where can I find the error logs of nginx, using FastCGI and Django? |
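If the path has been customised, the running configuration itself will say where the error log goes; a quick sketch (nginx -T needs a reasonably recent nginx, and the tail path is the common Debian/Ubuntu default):
nginx -T 2>/dev/null | grep error_log
tail -f /var/log/nginx/error.log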
DisclaimerMake sure there are no security implications for your use-case before running this.AnswerI had a similar issue getting Fedora 20, Nginx, Node.js, and Ghost (blog) to work. It turns out my issue was due toSELinux.This should solve the problem:setsebool -P httpd_can_network_connect 1DetailsI checked for errors in the SELinux logs:sudo cat /var/log/audit/audit.log | grep nginx | grep deniedAnd found that running the following commands fixed my issue:sudo cat /var/log/audit/audit.log | grep nginx | grep denied | audit2allow -M mynginx
sudo semodule -i mynginx.ppOption #2 (probably more secure)setsebool -P httpd_can_network_relay 1https://security.stackexchange.com/questions/152358/difference-between-selinux-booleans-httpd-can-network-relay-and-httpd-can-netReferenceshttp://blog.frag-gustav.de/2013/07/21/nginx-selinux-me-mad/https://wiki.gentoo.org/wiki/SELinux/Tutorials/Where_to_find_SELinux_permission_denial_detailshttp://wiki.gentoo.org/wiki/SELinux/Tutorials/Managing_network_port_labels | I am working with configuring Django project with Nginx and Gunicorn.While I am accessing my portgunicorn mysite.wsgi:application --bind=127.0.0.1:8001in Nginx server, I am getting the following error in my error log file;2014/05/30 11:59:42 [crit] 4075#0: *6 connect() to 127.0.0.1:8001 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", upstream:"http://127.0.0.1:8001/", host: "localhost:8080"Below is the content of mynginx.conffile;server {
listen 8080;
server_name localhost;
access_log /var/log/nginx/example.log;
error_log /var/log/nginx/example.error.log;
location / {
proxy_pass http://127.0.0.1:8001;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
}
}In the HTML page I am getting502 Bad Gateway.What mistake am I doing? | (13: Permission denied) while connecting to upstream:[nginx] |
If your configuration does not include aroot /some/absolute/path;statement, or it includes one that uses a relative path likeroot some/relative/path;, then the resulting path depends on compile-time options.Probably the only case that would allow you to make an educated guess as to what this means for you would be, if youdownloadedand compiled the source yourself. In that case, the paths would be relative to whatever--prefixwas used. If you didn't change it, it defaults to/usr/local/nginx. You can find the parameters nginx was compiled with vianginx -V, it lists--prefixas the first one.Sincetherootdirective defaults tohtml, this would, of course, result in/usr/local/nginx/htmlbeing the answer to your question.However, if you installed nginx in any other way, all bets are off. Your distribution might use entirely different default paths. Learning to figure out what kind of defaults your distribution of choice uses for things is another task entirely. | I have worked with Apache before, so I am aware that the default public web root is typically/var/www/.I recently started working with nginx, but I can't seem to find the default public web root.Where can I find the default public web root for nginx? | NGinx Default public www location? |
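The compile-time prefix mentioned above can be read straight from the binary, which usually settles the question for any particular install:
nginx -V 2>&1 | tr ' ' '\n' | grep prefix=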
I had a similar error after a php update. PHP fixed a security bug where o (other) had rw permission to the socket file.Open /etc/php5/fpm/pool.d/www.conf or /etc/php/7.0/fpm/pool.d/www.conf, depending on your version.Uncomment all permission lines, like:listen.owner = www-data
listen.group = www-data
listen.mode = 0660Restart fpm -sudo service php5-fpm restartorsudo service php7.0-fpm restartNote: if your webserver runs as user other than www-data, you will need to update thewww.conffile accordingly | I update nginx to1.4.7and php to5.5.12, After that I got the502 error. Before I update everything works fine.nginx-error.log2014/05/03 13:27:41 [crit] 4202#0: *1 connect() to unix:/var/run/php5-fpm.sock failed (13: Permission denied) while connecting to upstream, client: xx.xxx.xx.xx, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "xx.xx.xx.xx"nginx.confuser www www;
worker_processes 1;
location / {
root /usr/home/user/public_html;
index index.php index.html index.htm;
}
location ~ [^/]\.php(/|$) {
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /usr/home/user/public_html$fastcgi_script_name;
include fastcgi_params;
} | nginx error connect to php5-fpm.sock failed (13: Permission denied) |
From theHTTP core module docs:Directives with the "=" prefix that match the query exactly. If found, searching stops.All remaining directives with conventional strings. If this match used the "^~" prefix, searching stops.Regular expressions, in the order they are defined in the configuration file.If #3 yielded a match, that result is used. Otherwise, the match from #2 is used.Example from the documentation:location = / {
# matches the query / only.
[ configuration A ]
}
location / {
# matches any query, since all queries begin with /, but regular
# expressions and any longer conventional blocks will be
# matched first.
[ configuration B ]
}
location /documents/ {
# matches any query beginning with /documents/ and continues searching,
# so regular expressions will be checked. This will be matched only if
# regular expressions don't find a match.
[ configuration C ]
}
location ^~ /images/ {
# matches any query beginning with /images/ and halts searching,
# so regular expressions will not be checked.
[ configuration D ]
}
location ~* \.(gif|jpg|jpeg)$ {
# matches any request ending in gif, jpg, or jpeg. However, all
# requests to the /images/ directory will be handled by
# Configuration D.
[ configuration E ]
}If it's still confusing,here's a longer explanation. | What order do location directives fire in? | Nginx location priority |
It looks like you are using a custom Kubernetes Cluster (usingminikube,kubeadmor the like). In this case, there is no LoadBalancer integrated (unlike AWS or Google Cloud). With this default setup, you can only useNodePortor an Ingress Controller.With theIngress Controlleryou can setup a domain name which maps to your pod; you don't need to give your Service theLoadBalancertype if you use an Ingress Controller. | I am trying to deploy nginx on kubernetes, kubernetes version is v1.5.2,
I have deployed nginx with 3 replicas, the YAML file is below,
kind: Deployment
metadata:
name: deployment-example
spec:
replicas: 3
revisionHistoryLimit: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.10
ports:
- containerPort: 80and now I want to expose its port 80 on port 30062 of node, for that I created a service below,kind: Service
apiVersion: v1
metadata:
name: nginx-ils-service
spec:
ports:
- name: http
port: 80
nodePort: 30062
selector:
app: nginx
type: LoadBalancerthis service is working good as it should be, but it is showing as pending not only on kubernetes dashboard also on terminal. | Kubernetes service external ip pending |
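While the external IP stays <pending>, the service is still reachable through the nodePort — a sketch (the node IP is a placeholder, and the minikube line only applies if that is the cluster in use):
kubectl get svc nginx-ils-service
curl http://<node-ip>:30062
minikube service nginx-ils-service --url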
I had the exact same problem - I was running my nginx in Virtualbox. I did not have caching turned on. But it looks like sendfile was set to on in nginx.conf and that was causing the problem. @kolbyjack mentioned it above in the comments.When I turned off sendfile - it worked fine.This is because:Sendfile is used to ‘copy data between one file descriptor and another‘ and apparently has some real trouble when run in a virtual machine environment, or at least when run through Virtualbox. Turning this config off in nginx causes the static file to be served via a different method and your changes will be reflected immediately and without question. It is related to this bug:https://www.virtualbox.org/ticket/12597 | I use nginx as the front server, I have modified the CSS files, but nginx is still serving the old ones.I have tried to restart nginx, to no success and I have Googled, but not found a valid way to clear it.Some articles say we can just delete the cache directory:var/cache/nginx, but there is no such directory on my server.What should I do now? | How to clear the cache of nginx?
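After changing the directive to sendfile off; in nginx.conf (or the relevant server block), validating and reloading picks the change up without a full restart:
sudo nginx -t && sudo nginx -s reload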