
VolumeDriver.Mount: error mounting data: exit status 1 #28

Open
minzak opened this issue Jan 16, 2020 · 17 comments
minzak commented Jan 16, 2020

I have a working Gluster cluster; replication also works.

# gluster volume info
 
Volume Name: gluster-fs
Type: Replicate
Volume ID: be633a3e-555f-44d0-8ec8-07e77a440f47
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster0:/gluster/brick
Brick2: gluster1:/gluster/brick
Brick3: gluster2:/gluster/brick
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

And I have this simple docker-compose file:

version: "3.4"
services:

  mysql:
    image: mysql
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
    ports:
      - "3306:3306"
    networks:
      - default
    volumes:
      - data:/var/lib/mysql

volumes:
  data:
    driver: glusterfs
    name: "data"
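Before wiring the volume into the stack, it can help to reproduce the mount outside compose. A minimal smoke test, assuming the plugin is installed under the alias `glusterfs` (the volume name `data` matches the compose file; everything else here is illustrative):

```shell
# Create the volume through the plugin and try to mount it in a
# throwaway container; a failure here reproduces the driver error
# without involving the swarm stack.
docker volume create -d glusterfs data
docker run --rm -v data:/mnt alpine ls /mnt

# If the mount fails, the Docker daemon logs usually show the real
# glusterfs client error:
journalctl -u docker.service | grep -i gluster
```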

I can't run it because I always get the same error:

VolumeDriver.Mount: error mounting data: exit status 1

And this in the Gluster logs:

[2020-01-16 10:40:27.671991] I [MSGID: 114057] [client-handshake.c:1376:select_server_supported_programs] 0-gluster-fs-client-2: Using Program GlusterFS 4.x v1, Num (1298437), Version (400)
[2020-01-16 10:40:27.672188] I [MSGID: 114057] [client-handshake.c:1376:select_server_supported_programs] 0-gluster-fs-client-1: Using Program GlusterFS 4.x v1, Num (1298437), Version (400)
[2020-01-16 10:40:27.672754] I [MSGID: 114046] [client-handshake.c:1106:client_setvolume_cbk] 0-gluster-fs-client-2: Connected to gluster-fs-client-2, attached to remote volume '/gluster/brick'.
[2020-01-16 10:40:27.672778] I [MSGID: 108002] [afr-common.c:5648:afr_notify] 0-gluster-fs-replicate-0: Client-quorum is met
[2020-01-16 10:40:27.673116] I [MSGID: 114046] [client-handshake.c:1106:client_setvolume_cbk] 0-gluster-fs-client-1: Connected to gluster-fs-client-1, attached to remote volume '/gluster/brick'.
[2020-01-16 10:40:27.675667] I [fuse-bridge.c:5166:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.27
[2020-01-16 10:40:27.675689] I [fuse-bridge.c:5777:fuse_graph_sync] 0-fuse: switched to graph 0
[2020-01-16 10:40:27.677743] I [MSGID: 108031] [afr-common.c:2581:afr_local_discovery_cbk] 0-gluster-fs-replicate-0: selecting local read_child gluster-fs-client-0
[2020-01-16 10:44:32.157093] E [fuse-bridge.c:227:check_and_dump_fuse_W] (--> /lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x12f)[0x7f3055f6ac9f] (--> /usr/lib/x86_64-linux-gnu/glusterfs/7.1/xlator/mount/fuse.so(+0x8e32)[0x7f3054522e32] (--> /usr/lib/x86_64-l
inux-gnu/glusterfs/7.1/xlator/mount/fuse.so(+0x9fe8)[0x7f3054523fe8] (--> /lib/x86_64-linux-gnu/libpthread.so.0(+0x7fa3)[0x7f3055adcfa3] (--> /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f30557244cf] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or
 directory
[2020-01-16 10:44:34.118949] E [fuse-bridge.c:227:check_and_dump_fuse_W] (--> /lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x12f)[0x7f3055f6ac9f] (--> /usr/lib/x86_64-linux-gnu/glusterfs/7.1/xlator/mount/fuse.so(+0x8e32)[0x7f3054522e32] (--> /usr/lib/x86_64-l
inux-gnu/glusterfs/7.1/xlator/mount/fuse.so(+0x9fe8)[0x7f3054523fe8] (--> /lib/x86_64-linux-gnu/libpthread.so.0(+0x7fa3)[0x7f3055adcfa3] (--> /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f30557244cf] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or
 directory

I tried using
name: "gfs/data"
name: "gluster-fs/data"

I also tried recreating the Gluster volume under other names such as gfs, and manually creating the data folder. No luck. Does this work at all?

@thijsvanloef

We have the same issue running on Ubuntu 18.04.


timnis commented Jan 27, 2020

Same problem. I have Fedora 31 + Gluster 7.1, and GlusterFS works from the host, but with Docker I get the same error.


timnis commented Jan 28, 2020

Maybe the problem is that this plugin only works with GlusterFS 3.x; see the last comments in #18.


mukerjee commented Feb 3, 2020

I ran into this as well. It was because the latest tag for trajano/glusterfs-volume-plugin on Docker Hub is quite old. Using trajano/glusterfs-volume-plugin:v2.0.3 seems to work.
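For reference, a sketch of installing the pinned tag rather than :latest, following the plugin's documented install flow (the SERVERS hosts below are taken from the volume info at the top of this issue; adjust to your cluster):

```shell
# Install the pinned release instead of :latest, disabled so that
# settings can be applied before the first start.
docker plugin install --alias glusterfs \
  trajano/glusterfs-volume-plugin:v2.0.3 \
  --grant-all-permissions --disable

# Point the plugin at the Gluster servers, then enable it.
docker plugin set glusterfs SERVERS=gluster0,gluster1,gluster2
docker plugin enable glusterfs
```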


timnis commented Feb 4, 2020

@mukerjee, what version of GlusterFS are you using on your GlusterFS server?


mukerjee commented Feb 4, 2020

@timnis I'm using glusterfs 7.2

Seems to work fine. That said, I ran into issues trying to get Docker to create a subdirectory on the fly for new containers. It seems like GlusterFS returns exit code 0 in this situation but doesn't actually do anything.

I switched to mounting GlusterFS on the host and then using https://github.com/MatchbookLab/local-persist to point each container at a new subdirectory below the GlusterFS mount point. This gives me the behavior I want: a new subdirectory per container, created on the fly with the right permissions for that container.
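A sketch of that workaround, assuming the host mounts the gluster-fs volume at /mnt/gluster (paths and volume names here are illustrative):

```shell
# Mount the Gluster volume on the host (or persist it via /etc/fstab).
mount -t glusterfs gluster0:/gluster-fs /mnt/gluster

# One subdirectory per container, created ahead of time with the
# desired ownership, then exposed as a named Docker volume.
mkdir -p /mnt/gluster/mysql-data
docker volume create -d local-persist \
  -o mountpoint=/mnt/gluster/mysql-data mysql-data
```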

@kavian72

I was seeing "exit status 1" as well, with a new glusterfs 7.3 volume, until I saw this issue. Switching to v2.0.3 fixed it for me.


doouz commented Mar 6, 2020

@kavian72, what Docker version are you using? Can you share your install steps for this plugin? I'm also using Gluster 7.3 and Docker 19.03.7, but it is not working: instead of using the Gluster volume, it mounts the container's volume on the normal hard disk.
[screenshot: worker1-plugin]


kavian72 commented Mar 6, 2020

@donatocl I'm using docker-ce 19.03.6, so very similar to yours. For the installation, I used the documented install steps, with just one change: instead of referring to trajano/glusterfs-volume-plugin (which translates to trajano/glusterfs-volume-plugin:latest), I specifically referenced trajano/glusterfs-volume-plugin:v2.0.3. For everything else, I just followed the documented steps.

However, I don't understand the screenshot you're showing...that's obviously from your host machine, but the Docker volume would only be visible from inside the guest container. Also, the host machine you're using appears to be your Gluster host itself...if you're creating your containers right on the Gluster host machine, then you don't really need the Docker volume plugin, i.e. you can just use a bind mount to the locally-mounted Gluster volume. The Gluster volume driver is needed when you are running your container on another machine.

So, overall, I'm very confused by what you're showing us. 😄

@akshatgit

Getting the same problem... has anyone solved it?
docker: Error response from daemon: VolumeDriver.Mount: error mounting my-vol: exit status 1.

Docker version:

akshat.s@host1:~$ docker version
Client:
Version: 18.09.0
API version: 1.39
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:48:53 2018
OS/Arch: linux/amd64
Experimental: false

Glusterfs version:

glusterfs 7.3
Repository revision: git://git.gluster.org/glust...
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

@Cu4rach4

trajano/glusterfs-volume-plugin:v2.0.3

Thanks... this solves the mount problem.

3 nodes - Ubuntu 20.04
glusterfs-server 7.x
docker 19.03.8

@jhonny-oliveira

Is there any chance you could fix the issue with the "latest" tag?

@kavian72

I think the safer action would be to remove the "latest" tag altogether, and just update the documentation to list the actual latest value as each new release is published.

Using a tag such as this is not a very good practice, because it hides what version you're using. When you download the "latest" tag to your machine, how do you keep track of what you have vs. what updates are available? A tag named "latest" may sound easier to use, but it's obfuscating important information...so in the long term, it's actually making things harder.

I know the use of "latest" is a common practice among the Docker community, and what I'm saying is probably an unpopular viewpoint. But "popular" is not the same as "correct". :-)
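Even with an unpinned tag installed, the exact reference can still be recovered from the plugin metadata; a quick check, assuming the plugin was installed under the alias glusterfs:

```shell
# List installed plugins with their tags and enabled state.
docker plugin ls

# Show the full reference the alias resolves to.
docker plugin inspect glusterfs \
  --format '{{ .PluginReference }}'
```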

trajano (Owner) commented May 31, 2020

@kavian72 I agree; perhaps @ximon18's port will handle it.

@jhonny-oliveira

@kavian72, that is the safest solution. Having properly updated documentation can also save some frustration from a lot of people. Nevertheless, I do not understand why "latest" is not delivering the same as 2.0.3... 2.0.3 is the latest, right?

@trajano and all other contributors, thank you very much for this great plugin!

trajano (Owner) commented Jun 1, 2020

Not sure. I stopped working on this when I reformatted my PC and couldn't get the build scripts to work locally. The CI builds do not work as expected, since this creates 3 images rather than one, so I used to build them on my machine and never got around to putting the CI process on something like Travis.


ximon18 commented Jun 2, 2020

I'll try to get my port published as soon as I can; some other priorities and the consequences of the global situation got in the way until now. FYI, for my own purposes I did already publish a Docker image containing the tini patch, but there is no published source code for it nor any CI to build it, both of which I want to set up as a proper fork. It is also only the Gluster volume plugin; I'm not sure if I can or should take over the other backend volume plugins, as I don't use them, know anything about them, or have a setup to test them. The plugin that I am using is here: https://hub.docker.com/r/ximoneighteen/glusterfs-volume-plugin
