On Monday, December 11, 2023 4:59:45 AM EST Tim Flink wrote:
> On 12/8/23 08:34, Steve Grubb wrote:
>> On Friday, December 8, 2023 12:41:59 AM EST Jun Aruga (he / him) wrote:
>>
>>> Congratulations for the PyTorch package!
>>>
>>> https://src.fedoraproject.org/rpms/python-torch
>>>
>>> I hope someone will announce this great achievement to the Fedora
>>> community too, and update the following page too.
>>>
>>> https://fedoraproject.org/wiki/SIGs/PyTorch/packagingStatus
>>
>>
>> Yes, this is nice that we have pytorch in Fedora. Looking at the
>> specfile...
>>
>> USE_CUDA=OFF
>> USE_ROCM=OFF
>>
>> Which does not align with:
>>
>> %description
>> PyTorch is a Python package that provides two high-level features:
>> * Tensor computation (like NumPy) with strong GPU acceleration
>>
>>
>> GPU acceleration? Also,
> GPU acceleration is not enabled for the pytorch packages and that is
> intentional, for now. pytorch has a mess of third party dependencies which
> are managed upstream using git subrepos that point to external
> dependencies that may or may not be easy to package for Fedora.
Yes, I am familiar with the original Lua version, which I had to build locally.
> From the beginning, our plan has been to get pytorch packaged for CPU only
> first and add accelerator support as we can. Perhaps the description for
> pytorch needs to be changed but our intent is to enable ROCm support for
> F40.
If you are doing CPU only, you really should enable a BLAS backend. Fedora
has flexiblas available.
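To sketch what enabling that could look like: this is an illustrative build fragment, not the actual python-torch.spec. PyTorch's CMake build selects its BLAS backend through the `BLAS` cache variable, and recent upstream releases accept `FlexiBLAS` as a value (older releases may not, so check the version being packaged).

```shell
# Illustrative only -- not the real spec. Assumes flexiblas-devel is
# installed and a PyTorch release with FlexiBLAS support in its
# cmake/Dependencies.cmake.
cmake -S . -B build \
    -DUSE_CUDA=OFF \
    -DUSE_ROCM=OFF \
    -DUSE_OPENMP=ON \
    -DBLAS=FlexiBLAS
```

Routing through flexiblas rather than hard-wiring OpenBLAS would also let users swap BLAS implementations at runtime, which is the usual reason Fedora prefers it.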
> I don't have the exact list of packages remaining before we can enable ROCm
> support for pytorch in front of me but I believe that we're down into the
> single digits and the biggest hurdle at the moment is ROCm's miopen due to
> some incompatibility with Fedora's llvm or hipcc.
>> USE_OPENMP=OFF
>>
>> So, no threading? What about at least enabling BLAS? Maybe it is by
>> default. Not seeing it in the specfile. Without a CUDA version of this,
>> it can't be used the way it was meant to be. We still need to use pip
>> install to get an accelerated version:
> I'm not familiar with OpenMP or what might be required there, Tom (cc'd)
> would know more on that exact detail.
GCC should natively support it - unless it uses something brand new GCC
hasn't adopted yet.
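For reference, GCC has shipped OpenMP support for years (it lives in libgomp and is switched on with -fopenmp), so a quick sanity check is possible without pytorch at all. The file name here is hypothetical; the program prints the same sum whether or not OpenMP was enabled, and additionally reports the _OPENMP macro when it was:

```shell
# Hypothetical check: omp_check.c contains a reduction loop guarded by
# "#pragma omp parallel for reduction(+:sum)". Without -fopenmp the pragma
# is simply ignored and the result is identical.
gcc -fopenmp -O2 -o omp_check omp_check.c && ./omp_check
# vs. plain: gcc -O2 -o omp_check omp_check.c && ./omp_check
```

So "no threading" from USE_OPENMP=OFF is a build-configuration choice, not a toolchain limitation.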
> I doubt that a CUDA version of pytorch will ever be packageable for the
> Fedora repos - the licensing on CUDA would have to change before that
> happens and while it's possible, it doesn't seem likely in the foreseeable
> future.
>
> It would be great to enable support for Intel accelerators but that is a
> different project for a different day. ROCm is the only accelerator
> support that we have scoped out at this point.
>> pip install torch
>> python3
>>
>> >>> import torch
>> >>> torch.__config__.show()
>>
>>
>> The config listed there should be compared with the config in the spec
>> file to get as close to the expected feature set as possible so that
>> people can just switch. This is a positive step and I would love to
>> switch one day.
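To make that comparison concrete, something like the following could diff the USE_* flags between two builds. The two sample strings below are hypothetical stand-ins; real input comes from calling torch.__config__.show() on each install, and the exact "Build settings" wording varies between torch builds.

```python
import re

def build_flags(config_text):
    """Extract USE_* ON/OFF flags from torch.__config__.show() output."""
    return dict(re.findall(r"(USE_\w+)=(ON|OFF)", config_text))

# Hypothetical excerpts -- replace with the real show() output from the
# Fedora package and from the pip wheel respectively.
fedora = "Build settings: BLAS_INFO=flexi, USE_CUDA=OFF, USE_OPENMP=OFF,"
wheel = "Build settings: BLAS_INFO=mkl, USE_CUDA=ON, USE_OPENMP=ON,"

fedora_flags = build_flags(fedora)
wheel_flags = build_flags(wheel)
# Flags where the two builds disagree: {name: (fedora_value, wheel_value)}
diff = {k: (v, wheel_flags[k])
        for k, v in fedora_flags.items()
        if wheel_flags.get(k) != v}
print(diff)
```

Running that over the real outputs would give a concrete to-do list of features to close the gap on.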
> In general, there are two reasons why a torch feature is not enabled in the
> Fedora package:
>
> 1. The license of a dependency for that feature is incompatible with Fedora
> 2. One or more dependencies are not yet packaged for Fedora
I think you can add OpenMP and BLAS support easily. That should be a small
win.
-Steve
> Obviously, features that fall into (1) are very difficult, if not
> impossible for us to work around. Features that fall into (2) will likely
> need more time - the first build for PyTorch was about a week ago and we
> still have work to do.
>
> We are working to get the pytorch packages in Fedora to be as complete as
> we can make them. If anyone is interested in helping, please join us on
> discourse (#ai-ml-sig) or Matrix (#ai-ml:fedoraproject.org).
>
> Tim
>> Best Regards,
>> -Steve
>>
>> --
>> _______________________________________________
>> devel mailing list -- devel(a)lists.fedoraproject.org
>> To unsubscribe send an email to devel-leave(a)lists.fedoraproject.org
>> Fedora Code of Conduct:
>> https://docs.fedoraproject.org/en-US/project/code-of-conduct/