I am trying to access a network share with specific credentials. It's to be used from a dotnet application that I am running in docker. My research conclusion is that autofs is the best way to do a dynamic mount, but I am open to any other solution that does the same thing. I realise asp.net alpine might be a bit of a slim base image choice here, but I have a strong preference for it for security reasons.

I am having trouble figuring out why autofs won't install/start; it fails quietly, or else I don't know where to look for logs or any output whatsoever.

This is my dockerfile:

    FROM mcr.microsoft.com/dotnet/aspnet:6.0.15-alpine3.17 AS base
    FROM mcr.microsoft.com/dotnet/sdk:6.0.407-alpine3.17 AS build
    # -Here goes the dotnet build, removed for brevity.
    RUN apk -q add openrc samba-client autofs cifs-utils  # samba-client: smbclient is used for debugging
    ENTRYPOINT ["./docker-entrypoint.sh"]

docker-entrypoint.sh:

    echo -e "username=$DfsShare_Domain/$DfsShare_Username\npassword=$DfsShare_Password" > /etc/.dfs-credentials
    echo "$DfsShare_LocalDir -fstype=cifs,credentials=/etc/.dfs-credentials ://$DfsShare_Server/$DfsShare_Share" > /etc/auto.mydfs
    echo "/- /etc/auto.mydfs" > /etc/autofs/auto.master

The mount is not done, and the autofs service seems to be failing. I have the following output after startup:

    * WARNING: autofs is already starting

Some pages refer to /etc/sysconfig/autofs and to setting AUTOFS_OPTIONS="--debug" in that file, but it does not seem to be present in this image even after installing autofs (should I create it?!). I've also seen references to /var/log/messages, but it's not present either, and I'm not sure how, or whether, it can be activated.

I have verified that all the variables ($DfsShare_Domain etc.) are OK: docker-compose fills them out and I can use them with smbclient. Testing with smbclient from docker works fine using the following command:

    smbclient //$DfsShare_Server/$DfsShare_Share -m SMB2 -U $DfsShare_Domain/$DfsShare_Username%$DfsShare_Password -c "get TestFiles\\10MB.dat 10MB.dat"
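Not part of the original thread, but one plausible explanation for the silent failure and the "autofs is already starting" warning: OpenRC service scripts expect a booted init system, which a plain Alpine container does not have, so starting autofs through rc-service can wedge without writing a log anywhere. Below is a minimal sketch of an entrypoint that bypasses OpenRC and runs the automount daemon itself in the foreground, so its debug output lands in docker logs. The DfsShare_* variables and the map layout come from the question above; the dotnet path at the end is hypothetical, and the credentials file is rewritten using the separate username/password/domain fields that mount.cifs(8) accepts.

    #!/bin/sh
    # Sketch: run automount directly instead of "rc-service autofs start".
    set -e

    # Credentials file, using the three-field format from mount.cifs(8).
    printf 'username=%s\npassword=%s\ndomain=%s\n' \
        "$DfsShare_Username" "$DfsShare_Password" "$DfsShare_Domain" > /etc/.dfs-credentials
    chmod 600 /etc/.dfs-credentials

    # Same direct map as the original entrypoint.
    echo "$DfsShare_LocalDir -fstype=cifs,credentials=/etc/.dfs-credentials ://$DfsShare_Server/$DfsShare_Share" > /etc/auto.mydfs
    echo "/- /etc/auto.mydfs" > /etc/autofs/auto.master

    # -f keeps the daemon in the foreground, -d enables debug logging,
    # so errors show up on the container's stderr instead of a missing syslog.
    automount -f -d &

    exec dotnet /app/MyApp.dll   # hypothetical application entry point

Even with autofs running, mounting from inside a container is only allowed if the container has mount privileges, for example cap_add: [SYS_ADMIN] in docker-compose (some hosts additionally need a relaxed AppArmor/seccomp profile); without that, automount starts cleanly but every access to the mount point fails.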
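Since the question is open to other solutions: if the share can be mounted once at startup rather than strictly on demand, autofs can be dropped entirely in favour of a one-shot mount.cifs call in the entrypoint. A sketch under the same assumptions; vers=2.1 is a guess that mirrors the -m SMB2 used in the smbclient test and may need adjusting.

    #!/bin/sh
    # Sketch: mount the share once at container start, no autofs involved.
    set -e

    mkdir -p "$DfsShare_LocalDir"
    mount -t cifs "//$DfsShare_Server/$DfsShare_Share" "$DfsShare_LocalDir" \
        -o credentials=/etc/.dfs-credentials,vers=2.1

    exec "$@"   # hand off to the image's CMD, i.e. the dotnet app

The same SYS_ADMIN requirement applies. The trade-off is that an unreachable server now fails fast and visibly at container start, instead of deferring the error to the first file access.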
From a related thread: a college runs a cluster of linux workstations using a common nfs server, all on Clear Linux.

Question: I currently have systemd auto/mount units set up, but it's not satisfactory: the share automounts fine if the server is already online, but if the server is offline to begin with, the mount stays empty even after the server comes online. The way I remember setting up nfs mounts on a client, back when I had multiple linux machines, was with lines in fstab: the server name on the left, the mount point on the client on the right, and the filesystem options defined. Mount points would simply be empty until the nfs server was running.

Reply: What you're describing is bad behavior and needs to be remedied somehow; either it's a bug or there is some kind of acceptable workaround. What you could do, once you have an nfs mount point fully mounted, is type

    mount | grep -i nfs

and see what you could add to your manually maintained fstab, then check whether, once the client reboots, you get back the behavior you expect from nfs mount points. I am thinking there should be a way to get this to work for people who want to set up a small lab.

Reply: Hi, I will describe the steps I used to configure it in my cluster. The head node is the node sharing the storage. On the head node, add what you want to share to the file /etc/exports (a sketch of such an entry follows this thread). On the other nodes, edit the file /etc/fstab to add the partitions, something like:

    192.168.0.1:/global /global nfs defaults 0 0

For on-demand filesystems, adding x-systemd.automount to the options in fstab works for me. CIFS shares can be listed the same way:

    //serverA/video /mnt/serverA/video cifs uid=user1,gid=user1,rw,credentials=/a/credfile 0 0
    //serverA/music /mnt/serverA/music cifs uid=user1,gid=user1,rw,credentials=/a/credfile 0 0
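The /etc/exports example itself did not survive in the post above. For the /global mount the clients use, a head-node entry would look something like the following; the client subnet and the option set are assumptions, not from the thread.

    # /etc/exports on the head node: share /global with the workstation subnet
    /global 192.168.0.0/24(rw,sync,no_subtree_check)

After editing the file, exportfs -ra reloads the export table without restarting the NFS server.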
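The x-systemd.automount option mentioned above also speaks to the complaint that opened this thread, because the real NFS mount is deferred until something first touches the mount point. A sketch of a fuller client-side fstab line; the timeout value is illustrative.

    # /etc/fstab on a client: mount /global lazily, on first access
    192.168.0.1:/global  /global  nfs  noauto,x-systemd.automount,x-systemd.mount-timeout=30s,_netdev  0 0

With noauto plus x-systemd.automount, systemd only sets up the automount point at boot, so a server that is down at boot no longer leaves a permanently empty directory; the mount is attempted on first access, by which time the server may well be back.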
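For completeness, the /a/credfile referenced by the CIFS lines is a plain text file, readable only by root, in the format documented in mount.cifs(8); the values here are placeholders.

    # /a/credfile  (chmod 600, owned by root)
    username=user1
    password=secret
    domain=WORKGROUP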