[SAC] OSGeo Server Parts Order

ZFS on Ubuntu 18 is just sudo apt install zfsutils-linux

Michael Smith
OSGeo Treasurer
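
For reference, a minimal sketch of that install on Ubuntu 18.04, plus a quick check that the tools and kernel module are in place; nothing here is specific to this machine:

sudo apt update
sudo apt install zfsutils-linux    # userspace tools; the zfs kernel module ships with Ubuntu's kernel packages
sudo modprobe zfs                  # load the module if it isn't already loaded
zpool status                       # a fresh install should report "no pools available"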

On Aug 16, 2019, at 7:50 PM, Alex M <tech_dev@wildintellect.com> wrote:

On 8/16/19 15:21, Lance Albertson via RT wrote:
On Thu Aug 01 15:32:23 2019, lr@pcorp.us wrote:
No worries. Lots of vacations around here. Great.

We finally got the parts in the machines. How would you like the disks to be
configured in the HW RAID? RAID5, RAID6? RAID10? Also, I'd imagine you want this
machine to be moved onto a public network? Do you want us to install the OS, and
if so what?

Thanks-

Hmm, we plan to run Ubuntu 18.04 with ZFS so that we can easily do lxd
containers. Last I checked installing ZFS during install was kind of
annoying, so maybe a small slice off one drive as ext4 for the OS and
the rest ZFS post install. In which case the "Raiding" would be done via
ZFS in software?

I'll let someone chime in if that doesn't sound right.

Thanks,
Alex
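
The "RAIDing via ZFS" part would just be the pool's vdev layout. A minimal sketch of what that could look like post-install; the pool name and disk devices below are purely hypothetical, not this machine's final layout:

# mirror (RAID1-like) across two disks:
sudo zpool create -o ashift=12 tank mirror /dev/sdc /dev/sdd
# or double-parity (RAID6-like) across four disks:
# sudo zpool create -o ashift=12 tank raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo zfs set compression=lz4 tank
sudo zpool status tank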

On Fri Aug 16 17:09:30 2019, michael.smith.erdc@gmail.com wrote:

ZFS on Ubuntu 18 is just sudo apt install zfsutils-linux

>> We finally got the parts in the machines. How would you like the disks to
>> be configured in the HW RAID? RAID5, RAID6? RAID10? Also, I'd imagine you
>> want this machine to be moved onto a public network? Do you want us to
>> install the OS, and if so what?
>>
>> Thanks-
>
> Hmm, we plan to run Ubuntu 18.04 with ZFS so that we can easily do lxd
> containers. Last I checked installing ZFS during install was kind of
> annoying, so maybe a small slice off one drive as ext4 for the OS and the
> rest ZFS post install. In which case the "Raiding" would be done via ZFS in
> software?
>
> I'll let someone chime in if that doesn't sound right.

So it sounds like you want the server set up as a JBOD (just a bunch of disks),
effectively making each disk visible to ZFS with no HW RAID. Unfortunately, the
disk controllers technically don't support JBOD; however, we can work around it
by setting up each disk as a RAID0 device. This is what we do on our Ceph storage
nodes and it works quite nicely. It also allows the disks to use the HW RAID
cache.

Do you want me to attempt a rootfs on ZFS, or keep it simple and just do a
RAID1 with the first partition on two or more drives, leaving the rest of the
drives for ZFS?

--
Lance Albertson
Director
Oregon State University | Open Source Lab

Except you can't do that until after install. We're talking about /
being on ZFS; there's an old thread and maybe even a wiki page with
links. It was sufficiently complicated that we skipped it on OSGeo7 for now
and put / on other disks.

Thanks,
Alex


On Sat Aug 17 12:21:38 2019, ramereth wrote:


So it sounds like you want the server set up as a JBOD (just a bunch of
disks), effectively making each disk visible to ZFS with no HW RAID.
Unfortunately, the disk controllers technically don't support JBOD; however, we
can work around it by setting up each disk as a RAID0 device. This is what we
do on our Ceph storage nodes and it works quite nicely. It also allows the disks
to use the HW RAID cache.

Do you want me to attempt to do a rootfs on ZFS or try and keep it simple and
just do a RAID1 with the first partition on two or more drives, and leave the
rest of the drives for ZFS?

Re-ping ^

--
Lance Albertson
Director
Oregon State University | Open Source Lab

Regina,

Do you have an opinion on this? Since you set up OSGeo7, what would give
you the most similar layout?

Thanks,
Alex


Sorry, I guess I missed all this discussion.

For OSGeo7 we had 500 GB allocated for root, and the rest of the disks were set aside for ZFS.

I think a similar setup will work. Note the 500 GB is useful for exporting images, so I'd like at least that much separate from our lxd ZFS pool. It doesn't need to be the root drive and doesn't really need mirroring either.

I think going with RAID0 is probably the best.
Also, last time we did this you just inserted the Ubuntu 18.04 disk in the CD tray and we took care of the installation.
Not sure if that is doable, or if the way the RAID is set up makes it not doable.

Thanks,
Regina
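
Since the pool is mainly for lxd, a minimal sketch of pointing LXD at an existing zpool; the pool and dataset names here are hypothetical:

sudo zfs create tank/lxd                                  # give LXD its own dataset in the pool
sudo lxc storage create default zfs source=tank/lxd       # register it as an LXD storage pool
sudo lxc profile device add default root disk path=/ pool=default
sudo lxc launch ubuntu:18.04 test-container               # quick smoke test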


On Mon Aug 26 09:39:48 2019, lr@pcorp.us wrote:

Sorry I guess I missed all this discussion.

For the OSGeo7 we had 500GB allocated for root, and the rest of the disks were
set aside for ZFS.

I think a similar setup will work. Note the 500 GB is useful for exporting
images so I'd like at least that much separate from our lxd ZFS pool. It
doesn't need to be the root drive and doesn't really need mirroring either.

I think going with RAID0 is probably the best. Also last time we did this --
you just inserted the Ubuntu 18.04 disk in the CD tray and we took care of the
installation. Not sure if that is doable or if because of the way the RAID is
setup that's not doable.

Based on what I mention below, how do you want the physical disks set up with
regard to the HW RAID? Should the rootfs have any RAID, and if so, what level?
You mentioned RAID0 above, but I wasn't sure for which volume. Also, I'm not
sure how OSGeo7 was configured as far as HW RAID and disks.

Thanks-

>> So it sounds like you want the server setup with a JBOD (just a bunch of
>> disks) effectively making each disk visible for ZFS and no HW RAID.
>> Unfortunately, the disk controllers technically don't support JBOD, however
>> we can work around it by setting up each disk as a RAID0 device. This is
>> what we do our Ceph storage nodes and it works quite nicely. It also allows
>> the disks to use the HW RAID cache.
>>
>> Do you want me to attempt to do a rootfs on ZFS or try and keep it simple
>> and just do a RAID1 with the first partition on two or more drives, and
>> leave the rest of the drives for ZFS?

--
Lance Albertson
Director
Oregon State University | Open Source Lab

Lance,

By RAID0 I meant by what you suggested below:

>> Unfortunately, the disk controllers technically don't support JBOD,
>> however we can work around it by setting up each disk as a RAID0
>> device. This is what we do our Ceph storage nodes and it works
>> quite nicely. It also allows the disks to use the HW RAID cache.

As for the OS, it won't be under ZFS but will be a regular ext4 partition, so maybe a RAID1 (is that what you were suggesting)?
I suppose having some level of redundancy for the root partition would be good since we won't have that under ZFS.

Hope that clarifies things,
Regina
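
A minimal sketch of that "RAID1 for the root partition" idea with mdadm and ext4; the partition names are hypothetical and the OSUOSL install may do this differently:

# mirror the first partition of two drives and put ext4 on it for /:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
sudo mkfs.ext4 /dev/md0
# the remaining partitions/disks are left alone for ZFS after the install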


On Mon Aug 26 09:57:44 2019, lr@pcorp.us wrote:

Lance,

By RAID0 I meant by what you suggested below:

> >> Unfortunately, the disk controllers technically don't support JBOD,
> >> however we can work around it by setting up each disk as a RAID0 device.
> >> This is what we do our Ceph storage nodes and it works quite nicely. It
> >> also allows the disks to use the HW RAID cache.

As for the OS, it won't be under ZFS but will be a regular ext4 partition, so
maybe a RAID1 (is that what you were suggesting)? I suppose having some level
of redundancy for the root partition would be good since we won't have that
under ZFS.

Hope that clarifies things,

Yes that does! I'll get that going later today or tomorrow. What ssh keys should
I put on the system so you can access it?

Thanks-

--
Lance Albertson
Director
Oregon State University | Open Source Lab

I'll send you the list in a separate email.

Thanks,
Regina


Here is the list of keys. You can set up an account under tech_dev as you have done in the past, put these keys under that account, and give it the ability to sudo.

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3NJWLY4HBj/VXEqTHlCFnbybN7A20N06fazd2DEgwDpbh647s+FOocJLcZZRnROhDDFXO5t6208BWULZDa3JE1V6fBuyGjfiWvB3h/lLFVbWhf884ZHbnmGtZPJ9rsMYxEErNkxndUQttL1rtTWPKiQeTx3Sj+rkIL3lKZMLsH1KpqgvwhK87Yqzu43/4V8K2qMoB+ltizd3M8orvN78ZzqV7I06p/IdoZVU/j6qOhbpxMv4NIXZ98LvpBmozXfyC+2k1p2AYKX5cpuwofKC3xidOMc+baa/eMg6qOHZ3jL+rn1Kay77B47AQPQQOiSHutgOxV0jXIXN8tr1u0oFjw== wildintellect
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAmf79bn/i3jeP9IuFZ0BXGTJundqVLW5gRUfnG2cPvmDtGRnvKZ/cEoxDbuvh9vyQhV/wn3Y1ACgBc+lwhJUXmjZDq2ft4iL0GfOpjyEwT+QohSyShy9lijCJ2iNdxG1ixZokXReYkyldSgjSpMzTbtaAZCx0rMk9zY/3lQM+IR3OTTv8HUmzECFwgA6fcjKnzq98Ng2fdf/1tcvz73YfVZlo92Q0pSsGFhU4ytsI7CqVF9i5JdFlUToOB+KXQpsizQ3XeDgp+kxcOGHiXHbZb3h5w7yy7J7+cwrWOrZxCEMfHITy7NQcRj3bbK2B+pPVG87/oHTzAGgDbcZYq6uzLQ== robe2048
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDXfHGzxlzPA9gjIpbHPjCQ2SHvWve+QgficecsCwjACiZvOLrh71XalGGGpbxANkzs7amZjbqpvVd6L3tQJAZK8pT5ljVSV1bx3pX4HxxcL9CpAMB+SfmdRjlTxJ5uvmDNZkHKZl17PglVRYaS+fBbGAJe4JCNhMBzN0szmI3YWHjKlaJtnlAK6NVPiTyrcxQoNkb+XsDq53TBUapfK78JrJssI/iyuqZZ0nkrCgQL92UACRl9a7Wlq0fswsASi8/X/5iKKQu3Qfxo7yKIi7EbxvXn6/rUnyoykM7SwKEXiK06ABZxw8/lN9qcZhhO8YEo+zairIyX4S5DfF+5kGn9 strk@cdb
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCiWeZZYicY7aEgHCocJF1GNt/DqMmqrS6eDTwnrMy4oAuDfbciKg+7QWib/Br/v0+wNOUSeDuoUyCRIZyxVaniZilPDj+dJ3oO1HiHovEM5Ug/ye+V/gLO/275hBHlBAgXPC9muORj6SU/YHl8/IkckJs4YekIuwn9z6BxgU3TzC4p9ikZkr/VUQmuSbDW9S2qqjXBh1pKv4eIY9NrGlnPf0yCELFBEwSADVjimKLYWx6lxD+SAjL1giYFmE2JAYONWi7oPOlXQ8ZC/lYREVGqdJjAjyYrqqP+V9px89bQE0ivNPC5W5fnVamRaG2w/hSUbbrvmtVpONxpzDuu6aJF strk@liz
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA27/h4XK16Z8vPTtaSUFlliZjytoJE+0K9PNH57kCDmp1/s0qy2UUQs3KQIVLsBZPBfDdShcAjDUhCmyo+xSi6xsQ9cndxoLe8Zoc8yQc5v41iebCbAwA3eHiukZV5YzFvw8T6gNFHhf9XSc2A9cHcFmtuXBH8llaTcBGlMeBqJ8= jef
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID7svyAj1KPb9lE+9zpLr/+FnGXBh4LbpNejEjtD61H2 chrisgio@localhost

Thanks,
Regina
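
A minimal sketch of the account setup being asked for here (a sudo-capable tech_dev user with those keys); "keys.txt" is a placeholder for the list above, and the exact steps on the OSUOSL side may differ:

sudo adduser --disabled-password --gecos "" tech_dev
sudo usermod -aG sudo tech_dev
sudo install -d -m 700 -o tech_dev -g tech_dev /home/tech_dev/.ssh
sudo install -m 600 -o tech_dev -g tech_dev keys.txt /home/tech_dev/.ssh/authorized_keys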


On Mon Aug 26 10:03:08 2019, ramereth wrote:

Yes that does! I'll get that going later today or tomorrow. What ssh keys
should I put on the system so you can access it?

This system has finally been set up. You should be able to access it using the
tech_dev user by going to osgeo4.osgeo.osuosl.org. I've disabled password logins
via ssh but have added the ssh keys you sent to the tech_dev user. The password
for the user is 'oaDOGqlWNRMvgqXY9XCc', which you should change once you log in.

Here's how I set up the system:

root@osgeo4:~# lsblk -i
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
|-sda1 8:1 0 953M 0 part
| `-md0 9:0 0 952M 0 raid1 /boot
`-sda2 8:2 0 46.6G 0 part
  `-md1 9:1 0 46.5G 0 raid1
    |-lvm-root 253:0 0 37.3G 0 lvm /
    `-lvm-swap 253:1 0 7.5G 0 lvm [SWAP]
sdb 8:16 0 1.8T 0 disk
|-sdb1 8:17 0 953M 0 part
| `-md0 9:0 0 952M 0 raid1 /boot
`-sdb2 8:18 0 46.6G 0 part
  `-md1 9:1 0 46.5G 0 raid1
    |-lvm-root 253:0 0 37.3G 0 lvm /
    `-lvm-swap 253:1 0 7.5G 0 lvm [SWAP]
sdc 8:32 0 1.8T 0 disk
sdd 8:48 0 1.8T 0 disk
sde 8:64 0 1.8T 0 disk
sdf 8:80 0 1.8T 0 disk

Both sda and sdb have 1.9TB left to be allocated via a partition that you could
use for ZFS. Please let me know when you're able to log in and gain root
access.

Thanks-

--
Lance Albertson
Director
Oregon State University | Open Source Lab
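
A minimal sketch of one way to use that layout for the ZFS side: partition the unallocated ~1.9TB on sda/sdb and pool the four untouched disks. Device names follow the lsblk output above, but the pool layout itself is only a suggestion:

sudo parted -s /dev/sda mkpart primary 48GiB 100%     # new sda3 from the free space
sudo parted -s /dev/sdb mkpart primary 48GiB 100%     # new sdb3
sudo zpool create -o ashift=12 tank raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo zpool create -o ashift=12 scratch mirror /dev/sda3 /dev/sdb3   # separate space for image exports, as requested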


Thanks Lance – I was able to log in, change the password, and gain root.

Thanks,

Regina


This system has finally been set up. You should be able to access it using the
tech_dev user by going to osgeo4.osgeo.osuosl.org.

Thanks again for the setup. Can we have KVM access as well?

In osgeo7, we were given - https://osgeo7.osuosl.oob/

which allowed us to access the KVM interface to do hard reboots. We are able to access this using the OSUOSL OpenVPN connection.

Can we have something similar for osgeo4?

Thanks,
Regina

On Thu Aug 29 14:08:43 2019, lr@pcorp.us wrote:

> This system has finally been set up. You should be able to access it using
> the tech_dev user by going to osgeo4.osgeo.osuosl.org.

Thanks again for the setup. Can we have KVM access as well?

In osgeo7, we were given - https://osgeo7.osuosl.oob/

which allowed us to access the KVM interface to do hard reboots. We are able to
access this using the OSUOSL OpenVPN connection.

Can we have something similar for osgeo4?

Sure thing, however I need a way to get you the password. It seems the osuadmin
account on osgeo4 is no longer working with the key I expected. I can either
send it to you via a GPG-encrypted file (I would need a key ID for that), or you
can add our ssh key to the osgeo4 host and then I can copy a file with the
password there.

Let me know which way works.

Thanks-

--
Lance Albertson
Director
Oregon State University | Open Source Lab
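
For the GPG route Lance mentions, a minimal sketch; the key ID and file name below are placeholders:

gpg --encrypt --recipient <KEY_ID> oob-password.txt    # produces oob-password.txt.gpg
gpg --decrypt oob-password.txt.gpg                     # only the key holder can read it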