Hi again!
Shai DB wrote:
> Another question: I noticed that 1.2 doesn't have AFR in its source.
> How can I use/install it anyway? I saw that 1.3-pre has it.
> Is 1.3-pre OK for production?
> Thanks
>
I had forgotten this point! :)
Yes, the 1.3-pre4 archive is stable enough for production, but you can also
use the tla repository with branch 2.4, which is stable enough (in my
experience) to be used in production.
Just note that the stable 1.3 release will be based on the 2.5 main
branch and will include the self-heal feature (and many more!)
Cheers,
Sebastien LELIEVRE
address@hidden Services to ISP
TBS-internet http://www.TBS-internet.com
> I need it for replication (to have 2 copies of data in case of crash)
>
>
> On 6/26/07, *Sebastien LELIEVRE* <address@hidden> wrote:
>
> Hi,
>
> I just wanted to stress this:
>
> Shai wrote:
> > Hello, we are testing glusterfs 1.2 and I have a few questions -
>
> 1.2 doesn't bring "self-heal" with it, so keep in mind that if a drive
> crashes, you will have to sync the new drive "manually" with the
> others.
>
>
> so we just copy all the data to the replaced disk from its AFR 'pair'?
>
>
> BUT, 1.3 is going to correct this, and this is good :)
>
> That's all I had to add
>
> Cheers,
>
> Sebastien LELIEVRE
> address@hidden Services to ISP
> TBS-internet http://www.TBS-internet.com
>
> Krishna Srinivas wrote:
> > As of now you need to restart glusterfs if there is any change
> > in the config spec file. However, in future versions you won't need
> > to remount (this is on our road map).
> >
> > On 6/25/07, Shai DB <address@hidden> wrote:
> >> Thanks for the answer; this seems easy and neat to set up.
> >>
> >> Another question: if I add 2 more nodes to the gang,
> >> how can I set up all the clients with the new configuration, without
> >> needing to 'remount' the glusterfs?
> >>
> >> Thanks
> >>
> >>
> >> On 6/25/07, Krishna Srinivas <address@hidden> wrote:
> >> >
> >> > On 6/25/07, Shai DB <address@hidden> wrote:
> >> > > Hello, we are testing glusterfs 1.2 and I have a few questions -
> >> > >
> >> > >
> >> > > 1. We are going to store millions of small jpg files that will
> >> > > be read by a webserver - is glusterfs a good solution for this?
> >> >
> >> > Yes, definitely.
> >> >
> >> > > 2. We are going to run both server+client on each node, together
> >> > > with apache.
> >> > >
> >> > > 3. replicate *:2
> >> > >
> >> > > The way I think of doing replication is defining 2 volumes on
> >> > > each server and using AFR:
> >> > >
> >> > > server1: a1, a2
> >> > > server2: b1, b2
> >> > > server3: c1, c2
> >> > > server4: d1, d2
> >> > > server5: e1, e2
> >> > >
> >> > > afr1: a1+b2
> >> > > afr2: b1+c2
> >> > > afr3: c1+d2
> >> > > afr4: d1+e2
> >> > > afr5: e1+a2
> >> > >
> >> > > and then unify = afr1+afr2+afr3+afr4+afr5 with the replicate option.
> >> > >
> >> > > Is this the correct way?
> >> > > And what do we do in the future when we add more nodes? When
> >> > > changing the AFR (adding and changing the pairs), will glusterfs
> >> > > redistribute the files the new way?
> >> >
> >> > Yes, this is the right way. If you add one more server f, one
> >> > solution is to move the contents of a2 to f2, clean up a2, and
> >> > have it as follows:
> >> >
> >> > afr5: e1 + f2
> >> > afr6: f1 + a2
> >> >
> >> > Can't think of an easier solution.
> >> >
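To make the ring pairing concrete, it would translate into volume-spec stanzas roughly like the following. This is only a sketch from memory of the 1.2-era spec syntax: the brick names are the ones from the example, and the protocol/client volumes that import a1..e2 from the servers are left out, so check it against the sample spec files shipped with your version.

```
# each AFR volume mirrors one brick of a server onto a brick of the next
volume afr1
  type cluster/afr
  subvolumes a1 b2
end-volume

# ... afr2 to afr5 are defined the same way, following the ring ...

# unify presents the five mirrored pairs as a single tree
volume bricks
  type cluster/unify
  option scheduler rr          # round-robin placement of new files
  subvolumes afr1 afr2 afr3 afr4 afr5
end-volume
```

(If I remember correctly, the 1.3-pre unify also expects a namespace volume via an option, so keep that in mind when you upgrade.)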
> >> > But if we assume that you will always add 2 servers whenever you
> >> > grow, we can have the setup in the following way:
> >> > afr1: a1 + b2
> >> > afr2: b1 + a2
> >> > afr3: c1 + d2
> >> > afr4: d1 + c2
> >> > afr5: e1 + f2
> >> > afr6: f1 + e2
> >> >
> >> > Now when you add a pair of servers to this (g, h):
> >> > afr7: g1 + h2
> >> > afr8: h1 + g2
> >> >
> >> > This is very easy, but you will have to add 2 servers every time.
> >> > The advantage is that it is easier to visualize the setup and add
> >> > new nodes.
> >> >
> >> > Thinking further, if we assume that you will replicate all the
> >> > files twice (option replicate *:2), you can have the following
> >> > setup:
> >> > afr1: a + b
> >> > afr2: c + d
> >> > afr3: e + f
> >> >
> >> > This is a very easy setup, and it is simple to add a fresh pair
> >> > (afr4: g + h).
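For the whole-server variant, the spec shrinks to one AFR stanza per pair, using the replicate *:2 pattern option mentioned above. Again, this is just a sketch with the names from the example, not a complete spec (the client volumes importing a and b are omitted):

```
# a and b are whole-server bricks; AFR keeps two copies of everything
volume afr1
  type cluster/afr
  option replicate *:2         # pattern:count - every file stored twice
  subvolumes a b
end-volume
```

Adding a fresh pair then just means defining afr4 over the two new bricks and appending it to the unify subvolumes line.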
> >> >
> >> > You can have whatever setup you want depending on your
> >> > convenience and requirements.
> >> >
> >> > >
> >> > > 4. What happens when a hard drive goes down and is replaced -
> >> > > does the cluster also redistribute the files?
> >> >
> >> > When a hard drive is replaced, missing files will be replicated
> >> > from the AFR's other child.
> >> >
> >> > Regards
> >> > Krishna
> >> >
> >> > -------
> >> >