Hi,
Development has effectively ended. For years Red Hat drove development for its RHGS product, but with the EOL of RHGS at the end of 2024 [1] and the disbanding of RHGS engineering at Red Hat, no development is being done. The last update (11.1) was on 6 Nov. 2023. Little or no development is taking place in the greater Gluster community (such as it is), and no new updates appear to be forthcoming from any direction.
Over in the CentOS world, the CentOS Storage SIG members have decided not to build GlusterFS RPMs for Stream 10.
Therefore I intend to retire the GlusterFS package in Fedora 42, unless someone else would like to step in and take over as package owner.
[1] https://access.redhat.com/support/policy/updates/rhs/
On Tue, Jun 25, 2024 at 04:44:26PM -0400, Kaleb Keithley wrote:
...
We had a bit of discussion about the implications for virt, since qemu and libvirt are prominent users of glusterfs-api-devel & glusterfs-devel in Fedora. I think the summary is that we don't want to drop it upstream right now, but will go along with whatever the Linux distros do.
So if Fedora 42 drops gluster then we'll go along with that. And the same will apply in other distros like Debian.
Eventually once no distros have it we'll look at dropping it upstream.
(After reading your message above I dropped glusterfs support in libguestfs.)
Rich.
On Tue, Jun 25, 2024 at 4:44 PM Kaleb Keithley kkeithle@redhat.com wrote:
...
--
Kaleb
On 1/3/25 4:39 AM, Kaleb Keithley wrote:
On Tue, Jun 25, 2024 at 4:44 PM Kaleb Keithley <kkeithle@redhat.com mailto:kkeithle@redhat.com> wrote:
...
Is there some sort of alternative or is this type of storage just not interesting anymore?
On Fri, Jan 3, 2025 at 5:20 PM Samuel Sieb samuel@sieb.net wrote:
On 1/3/25 4:39 AM, Kaleb Keithley wrote:
On Tue, Jun 25, 2024 at 4:44 PM Kaleb Keithley <kkeithle@redhat.com mailto:kkeithle@redhat.com> wrote:
...
Is there some sort of alternative or is this type of storage just not interesting anymore?
Red Hat's involvement ending isn't the same as the software no longer being developed.
https://github.com/gluster/glusterfs/commits/devel/
On 1/3/25 6:08 PM, Neal Gompa wrote:
On Fri, Jan 3, 2025 at 5:20 PM Samuel Sieb samuel@sieb.net wrote:
...
Red Hat's involvement ending isn't the same as the software no longer being developed.
2023. Little or no development is taking place in the greater Gluster community (such as it is), and no new updates appear to be forthcoming from any direction.
This suggested otherwise.
On Sat, Jan 4, 2025 at 03:30, Samuel Sieb samuel@sieb.net wrote:
...
2023. Little or no development is taking place in the greater Gluster community (such as it is), and no new updates appear to be forthcoming from any direction.
This suggested otherwise.
Commit counts in devel branch:
2006: 148
2007: 0
2008: 0
2009: 1401
2010: 1271
2011: 1217
2012: 1243
2013: 986
2014: 1169
2015: 1787
2016: 1186
2017: 3175
2018: 1260
2019: 777
2020: 420
2021: 352
2022: 317
2023: 78
2024: 31
Total: 16818 (of which --author=@redhat.com: 8825)
There's always more than one truth in numbers ;-)
Cheers, Michael
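[For anyone who wants to reproduce counts like these, something along these lines should work. The repo path and branch name here are illustrative; the sketch builds a throwaway repo with two empty commits rather than assuming an actual glusterfs checkout.]

```shell
# Sketch: per-year commit counts, as in the table above.
# Against a real clone you would run only the final `git log`
# pipeline on the devel branch of your checkout.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q -b devel
GIT_AUTHOR_DATE='2023-06-01T12:00:00' GIT_COMMITTER_DATE='2023-06-01T12:00:00' \
    git -C "$repo" -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m 'first'
GIT_AUTHOR_DATE='2024-02-01T12:00:00' GIT_COMMITTER_DATE='2024-02-01T12:00:00' \
    git -C "$repo" -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m 'second'
# Count commits per author-date year:
git -C "$repo" log devel --format='%ad' --date=format:'%Y' | sort | uniq -c
```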
On Sat, Jan 4, 2025, at 2:25 PM, Michael J Gruber wrote:
...
Happy to maintain it in Fedora while there is activity upstream.
Are you a packager? What's your FAS ID?
On Sat, Jan 4, 2025 at 6:53 AM Benson Muite benson_muite@emailplus.org wrote:
Happy to maintain it in Fedora while there is activity upstream.
On Sat, Jan 4, 2025, at 9:28 PM, Kaleb Keithley wrote:
Are you a packager?
Yes
What's your FAS ID?
Fed500
It's all yours. Have fun.
On Sun, Jan 5, 2025 at 6:07 AM Benson Muite benson_muite@emailplus.org wrote:
...
On Sun, Jan 05, 2025 at 02:07:11PM +0300, Benson Muite wrote:
...
Note that the entire virt stack (just about) depends on gluster, so please coordinate with us if you push any breaking change. We may need to rebuild quite a few dependent packages. Easiest way is to email 'qemu-maintainers@fedoraproject.org'.
Although I suppose since upstream development is dead, maybe that won't be very likely ...
Rich.
On Thu, 9 Jan 2025 at 12:07, Richard W.M. Jones rjones@redhat.com wrote:
...
Note that the entire virt stack (just about) depends on gluster, so please coordinate with us if you push any breaking change. We may need to rebuild quite a few dependent packages. Easiest way is to email 'qemu-maintainers@fedoraproject.org'.
Although I suppose since upstream development is dead, maybe that won't be very likely ...
While it doesn't affect things from a build perspective, could gluster possibly be made recommended instead of required at the libvirt-daemon-driver-storage level?
Just to be clear, Benson Muite has agreed to take over as admin/maintainer of glusterfs and it's not going to be retired now.
Somewhat orthogonal to that, upstream development has slowed to a crawl and there hasn't been a release in over a year; the outlook is pretty grim in my opinion. Take that for what it's worth.
On Thu, Jan 9, 2025 at 9:01 AM Peter Robinson pbrobinson@gmail.com wrote:
...
On Sat, 4 Jan 2025, 11:26 Michael J Gruber, mjg@fedoraproject.org wrote:
...
There's always more than one truth in numbers ;-)
Yes, and I think 78 commits in 2023 and 31 in 2024, compared to prior years, is the real story for a project the size of gluster. I'd also be interested to see the breakdown of those commits over the last few years: is there any real development, or just maintenance for security and the like?
Peter
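[A very rough first cut at such a breakdown is splitting commits by author mail domain, echoing the --author=@redhat.com figure earlier in the thread; it won't distinguish development from maintenance, but it shows who is still committing. Sketched here on a throwaway repo with made-up addresses rather than a real clone, where you would run only the final pipeline.]

```shell
# Sketch: commits since 2023 grouped by author mail domain.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q -b devel
git -C "$repo" -c user.name=a -c user.email=dev@redhat.com \
    commit -q --allow-empty -m 'm1'
git -C "$repo" -c user.name=b -c user.email=dev@example.org \
    commit -q --allow-empty -m 'm2'
# Keep only the part after '@' in each author email, then count:
git -C "$repo" log --since=2023-01-01 --format='%ae' | sed 's/.*@//' | sort | uniq -c
```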
On 4/1/25 09:20, Samuel Sieb wrote:
Is there some sort of alternative or is this type of storage just not interesting anymore?
Wasn't Ceph distributed file system supposed to replace Gluster? I could be totally wrong here.
-- Ian Laurie FAS: nixuser | IRC: nixuser TZ: Australia/Sydney
Ian Laurie via devel wrote on Mon, Jan 06, 2025 at 08:57:09AM +1100:
On 4/1/25 09:20, Samuel Sieb wrote:
Is there some sort of alternative or is this type of storage just not interesting anymore?
Wasn't Ceph distributed file system supposed to replace Gluster? I could be totally wrong here.
Ceph cannot be used for the dual-node (inherently dangerous) setups that gluster is/was rather good at. (In that case, the only alternative I'm aware of would be OCFS2 on a DRBD device, exported via NFS over a failover IP if necessary, but the last time I looked was over 15 years ago. If 3+ nodes are possible, then Ceph is probably going to perform much better.)