[Gluster-devel] large glusterfs memory usage on mainline-2.5 patch 237?
From: Dale Dude
Subject: [Gluster-devel] large glusterfs memory usage on mainline-2.5 patch 237?
Date: Tue, 26 Jun 2007 21:17:34 -0400
User-agent: Thunderbird 2.0.0.5pre (Windows/20070625)
After rsync finishes building the list of files to update, the glusterfs
process's memory shoots to 500 MB (it has grown to 620 MB since I started this email).
RSYNC CMD:
# rsync -av --progress nfs:/volumes /volumes
receiving file list ...
1612811 files to consider
Library/Keychains/
...I CHECKED PS HERE...
# ps aux|fgrep gluster
root 26844 3.4 0.1 21756 3300 ? Rsl 11:06 1:20 [glusterfsd]
root 26862 4.2 24.6 528808 508108 ? Ssl 11:06 1:34 [glusterfs]
Before rsync starts doing any updates, the memory usage is low. Notice rsync
hasn't started updating yet:
# rsync -av --progress nfs:/volumes /volumes
receiving file list ...
1612802 files to consider
# ps aux|fgrep gluster
root 26844 0.3 0.1 21628 3156 ? Ssl 11:06 0:06 [glusterfsd]
root 26862 0.2 0.3 27584 6916 ? Ssl 11:06 0:04 [glusterfs]
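To track the growth over time rather than spot-checking, a minimal sampling
loop works (a sketch, assuming a procps-style ps; 26862 is the glusterfs
client PID from the output above):

while true; do
    # print VSZ and RSS in KB for the glusterfs client, once per minute
    ps -o vsz=,rss= -p 26862
    sleep 60
done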
==========================
glusterfs-client.vol:

volume server1
  type protocol/client
  option transport-type tcp/client        # for TCP/IP transport
  option remote-host 127.0.0.1            # IP address of the remote brick
  option remote-subvolume volumenamespace
end-volume

volume server1vol1
  type protocol/client
  option transport-type tcp/client        # for TCP/IP transport
  option remote-host 127.0.0.1            # IP address of the remote brick
  option remote-subvolume clusterfs1
end-volume

volume server1vol2
  type protocol/client
  option transport-type tcp/client        # for TCP/IP transport
  option remote-host 127.0.0.1            # IP address of the remote brick
  option remote-subvolume clusterfs2
end-volume

###################

volume bricks
  type cluster/unify
  option namespace server1
  option readdir-force-success on         # ignore failed mounts
  subvolumes server1vol1 server1vol2
  option scheduler rr
  option rr.limits.min-free-disk 5        # %
end-volume
#volume writebehind                       # write-behind improves write performance a lot
#  type performance/write-behind
#  option aggregate-size 131072           # in bytes
#  subvolumes bricks
#end-volume
================================
glusterfs-server.vol:
volume clusterfs1
  type storage/posix
  option directory /volume1
end-volume

#volume clusterfs1
#  type performance/io-threads
#  option thread-count 8
#  subvolumes volume1
#end-volume

#######

volume clusterfs2
  type storage/posix
  option directory /volume2
end-volume

#######

volume volumenamespace
  type storage/posix
  option directory /volume.namespace
end-volume

###

volume clusterfs
  type protocol/server
  option transport-type tcp/server
  subvolumes clusterfs1 clusterfs2 volumenamespace
  option auth.ip.clusterfs1.allow *
  option auth.ip.clusterfs2.allow *
  option auth.ip.volumenamespace.allow *
end-volume
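(For anyone reproducing this: with these two volfiles, the daemons would
typically be started along these lines. This is a sketch assuming the
1.3-era command-line syntax, and the volfile paths are illustrative:)

# start the server side, exporting the bricks and the namespace
glusterfsd -f /etc/glusterfs/glusterfs-server.vol

# mount the unified volume on the client side
glusterfs -f /etc/glusterfs/glusterfs-client.vol /volumes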