The package rpms/python-llama-cpp-python.git has added or updated architecture specific content in its spec file (ExclusiveArch/ExcludeArch or %ifarch/%ifnarch) in commit(s): https://src.fedoraproject.org/cgit/rpms/python-llama-cpp-python.git/commit/?....
Change: +ExclusiveArch: x86_64 aarch64
Thanks.
Full change:
============
commit e1f05a0bb7cd1d6f4a7ff9a5a5d4f858f0c45657
Author: Tomas Tomecek <ttomecek@redhat.com>
Date:   Mon Jun 24 11:07:21 2024 +0200
build the package in f40
```
tests/test_llama_grammar.py::test_grammar_from_string
from_string grammar:
leaf ::= [.]
node ::= leaf | [(] node node [)]
root ::= node

Fatal Python error: Illegal instruction
```
I had to disable the test.
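For context, "Illegal instruction" at test time usually means the linked library was compiled with CPU extensions (e.g. AVX2 or AVX-512) that the builder's CPU does not support. A hypothetical diagnostic, not part of the package, to inspect what the running CPU advertises on Linux:

```python
# Hypothetical diagnostic: list SIMD-related CPU feature flags on Linux.
# "Fatal Python error: Illegal instruction" typically points at a binary
# built for instructions the current CPU lacks.
def cpu_flags() -> set:
    """Return the CPU feature flags advertised in /proc/cpuinfo (x86 Linux)."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    flags = cpu_flags()
    for feature in ("avx", "avx2", "avx512f"):
        print(feature, feature in flags)
```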
Signed-off-by: Tomas Tomecek <ttomecek@redhat.com>
diff --git a/python-llama-cpp-python.spec b/python-llama-cpp-python.spec
index c37be13..c89b9ea 100644
--- a/python-llama-cpp-python.spec
+++ b/python-llama-cpp-python.spec
@@ -60,7 +60,7 @@ OpenAI compatible web server
 %if %{with test}
 %check
 # most test_llama.py tests utilize model ggml-vocab-llama.gguf from vendored llama.cpp
-%pytest -vs tests/test_llama.py::test_llama_cpp_version tests/test_llama.py::test_logits_to_logprobs tests/test_llama_speculative.py tests/test_llama_chat_format.py tests/test_llama_grammar.py
+%pytest -vs tests/test_llama.py::test_llama_cpp_version tests/test_llama.py::test_logits_to_logprobs tests/test_llama_speculative.py tests/test_llama_chat_format.py
 %endif
 
 %install
commit 986aa14338409270565c21fb733f6dea96ce92f2
Author: Python Maint <python-maint@redhat.com>
Date:   Wed Jun 19 12:00:33 2024 +0200
Rebuilt for Python 3.13
commit 0af914b68a698066f21e454f963946469c2e16e9
Author: Tomas Tomecek <ttomecek@redhat.com>
Date:   Thu May 23 09:42:50 2024 +0200
build with tests on
Signed-off-by: Tomas Tomecek <ttomecek@redhat.com>
diff --git a/python-llama-cpp-python.spec b/python-llama-cpp-python.spec
index d3b166c..c37be13 100644
--- a/python-llama-cpp-python.spec
+++ b/python-llama-cpp-python.spec
@@ -13,7 +13,7 @@ Source: %{url}/archive/v%{version}/%{pypi_name}-%{version}.tar.gz
 Patch1: 0001-don-t-build-llama.cpp-and-llava.patch
 Patch2: 0002-search-for-libllama-so-in-usr-lib64.patch
 
-%bcond_with test
+%bcond_without test
 
 # this is what llama-cpp is on
 # and this library is by default installed in /usr/lib64/python3.12/site-packages/llama_cpp/__init__.py
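For readers unfamiliar with rpm's bcond macros, the flipped bcond above changes the build default; a sketch of the two forms (spec fragment, semantics per the standard rpm `%bcond_*` macros):

```
# %bcond_with test     -> tests disabled by default; opt in  with --with test
# %bcond_without test  -> tests enabled  by default; opt out with --without test
%bcond_without test

%if %{with test}
# test-only BuildRequires and the %check section take effect here
%endif
```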
commit 517934c42b71ac732fa064e08116ca1e5f0347f5
Author: Tomas Tomecek <ttomecek@redhat.com>
Date:   Thu May 23 09:41:57 2024 +0200
use %%pyproject_save_files -L
the macro complains that it can't find the license file; we set it explicitly anyway
Signed-off-by: Tomas Tomecek <ttomecek@redhat.com>
diff --git a/python-llama-cpp-python.spec b/python-llama-cpp-python.spec
index 87454be..d3b166c 100644
--- a/python-llama-cpp-python.spec
+++ b/python-llama-cpp-python.spec
@@ -65,7 +65,7 @@ OpenAI compatible web server
 
 %install
 %pyproject_install
-%pyproject_save_files -l llama_cpp
+%pyproject_save_files -l llama_cpp -L
 
 %files -n python3-%{pypi_name} -f %{pyproject_files}
 %license LICENSE.md
commit 2080a7061abadb6190406a6a348fe80c3acf9af4
Author: Mohammadreza Hendiani <man2dev@fedoraproject.org>
Date:   Sun May 19 19:59:46 2024 +0330
update source to llama-cpp-python-0.2.75.tar.gz
diff --git a/.gitignore b/.gitignore
index 44b4b7a..9481877 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1 +1 @@
-/llama-cpp-python-0.2.60.tar.gz
+/llama-cpp-python-0.2.75.tar.gz
diff --git a/python-llama-cpp-python.spec b/python-llama-cpp-python.spec
index 1966afe..87454be 100644
--- a/python-llama-cpp-python.spec
+++ b/python-llama-cpp-python.spec
@@ -1,5 +1,5 @@
 %global pypi_name llama-cpp-python
-%global pypi_version 0.2.60
+%global pypi_version 0.2.75
 # it's all python code
 %global debug_package %{nil}
diff --git a/sources b/sources
index f20aef4..43150cb 100644
--- a/sources
+++ b/sources
@@ -1 +1 @@
-SHA512 (llama-cpp-python-0.2.60.tar.gz) = a556054e67c04838b07fa4c768766899a27e6ea0edc55f2fa0d77f6a4dc4c3f1f44cf530f68ca2df0e26b22b6ce059a2a652db9586273e908d35b544372d9e7f
+SHA512 (llama-cpp-python-0.2.75.tar.gz) = 25c9d36ae3c0795bcc5e39eab9e0804f56ba74fbe91ffe75d8b7273eef072c29da2426e5782425646c30e39e87ef3313f8220fac073c0b292dc97f3cca87a756
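The `sources` file pairs each tarball name with its SHA-512 digest (it is normally maintained by `fedpkg new-sources`, which uploads the tarball to the lookaside cache and rewrites the file). A minimal sketch of producing an entry in that format, using a hypothetical helper name:

```python
# Hypothetical helper (not part of the package or of fedpkg): compute a
# "sources"-style entry, "SHA512 (<filename>) = <digest>", for a local file.
import hashlib
import os

def sources_entry(path: str) -> str:
    h = hashlib.sha512()
    with open(path, "rb") as f:
        # hash in chunks so large tarballs are not read into memory at once
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return f"SHA512 ({os.path.basename(path)}) = {h.hexdigest()}"
```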
commit d0c056195ee9abec126dd8790d64aae3a29cd688
Author: Tomas Tomecek <ttomecek@redhat.com>
Date:   Wed Apr 17 10:13:01 2024 +0200
initial import, version 0.2.60
Signed-off-by: Tomas Tomecek <ttomecek@redhat.com>
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..44b4b7a
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1 @@
+/llama-cpp-python-0.2.60.tar.gz
diff --git a/0001-don-t-build-llama.cpp-and-llava.patch b/0001-don-t-build-llama.cpp-and-llava.patch
new file mode 100644
index 0000000..e826a76
--- /dev/null
+++ b/0001-don-t-build-llama.cpp-and-llava.patch
@@ -0,0 +1,28 @@
+From 854fa8a9114778e9c386201b8fa2ca413dfdd2cd Mon Sep 17 00:00:00 2001
+From: Tomas Tomecek <ttomecek@redhat.com>
+Date: Tue, 9 Apr 2024 13:08:58 +0200
+Subject: [PATCH] don't build llama.cpp and llava
+
+Signed-off-by: Tomas Tomecek <ttomecek@redhat.com>
+---
+ CMakeLists.txt | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/CMakeLists.txt b/CMakeLists.txt
+index 70f9b99..e48cd3c 100644
+--- a/CMakeLists.txt
++++ b/CMakeLists.txt
+@@ -2,8 +2,8 @@ cmake_minimum_required(VERSION 3.21)
+ 
+ project(llama_cpp)
+ 
+-option(LLAMA_BUILD "Build llama.cpp shared library and install alongside python package" ON)
+-option(LLAVA_BUILD "Build llava shared library and install alongside python package" ON)
++option(LLAMA_BUILD "Build llama.cpp shared library and install alongside python package" OFF)
++option(LLAVA_BUILD "Build llava shared library and install alongside python package" OFF)
+ 
+ if (LLAMA_BUILD)
+     set(BUILD_SHARED_LIBS "On")
+--
+2.44.0
+
diff --git a/0002-search-for-libllama-so-in-usr-lib64.patch b/0002-search-for-libllama-so-in-usr-lib64.patch
new file mode 100644
index 0000000..7278ac2
--- /dev/null
+++ b/0002-search-for-libllama-so-in-usr-lib64.patch
@@ -0,0 +1,25 @@
+From 6ff51f64a0472e4bc83a4a64d170a1791ce88e1a Mon Sep 17 00:00:00 2001
+From: Tomas Tomecek <ttomecek@redhat.com>
+Date: Thu, 11 Apr 2024 09:47:29 +0200
+Subject: [PATCH] search for libllama.so in /usr/lib64
+
+Signed-off-by: Tomas Tomecek <ttomecek@redhat.com>
+---
+ llama_cpp/llama_cpp.py | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/llama_cpp/llama_cpp.py b/llama_cpp/llama_cpp.py
+index accc02c..731abe5 100644
+--- a/llama_cpp/llama_cpp.py
++++ b/llama_cpp/llama_cpp.py
+@@ -31,6 +31,7 @@ def _load_shared_library(lib_base_name: str):
+     if sys.platform.startswith("linux"):
+         _lib_paths += [
+             _base_path / f"lib{lib_base_name}.so",
++            pathlib.Path(f"/usr/lib64/lib{lib_base_name}.so")
+         ]
+     elif sys.platform == "darwin":
+         _lib_paths += [
+--
+2.44.0
+
diff --git a/python-llama-cpp-python.spec b/python-llama-cpp-python.spec
new file mode 100644
index 0000000..1966afe
--- /dev/null
+++ b/python-llama-cpp-python.spec
@@ -0,0 +1,76 @@
+%global pypi_name llama-cpp-python
+%global pypi_version 0.2.60
+# it's all python code
+%global debug_package %{nil}
+
+Name: python-%{pypi_name}
+Version: %{pypi_version}
+Release: %autorelease
+License: MIT
+Summary: Simple Python bindings for @ggerganov's llama.cpp library
+URL: https://github.com/abetlen/llama-cpp-python
+Source: %{url}/archive/v%{version}/%{pypi_name}-%{version}.tar.gz
+Patch1: 0001-don-t-build-llama.cpp-and-llava.patch
+Patch2: 0002-search-for-libllama-so-in-usr-lib64.patch
+
+%bcond_with test
+
+# this is what llama-cpp is on
+# and this library is by default installed in /usr/lib64/python3.12/site-packages/llama_cpp/__init__.py
+ExclusiveArch: x86_64 aarch64
+
+BuildRequires: git-core
+BuildRequires: gcc
+BuildRequires: gcc-c++
+BuildRequires: ninja-build
+BuildRequires: python3-devel
+BuildRequires: llama-cpp-devel
+%if %{with test}
+BuildRequires: python3-pytest
+BuildRequires: python3-scipy
+%endif
+
+%generate_buildrequires
+%pyproject_buildrequires
+
+%description
+%{pypi_name} provides:
+Low-level access to C API via `ctypes` interface.
+High-level Python API for text completion.
+OpenAI compatible web server
+
+%package -n python3-%{pypi_name}
+Summary: %{summary}
+# -devel has the unversioned libllama.so
+Requires: llama-cpp-devel
+
+%description -n python3-%{pypi_name}
+%{pypi_name} provides:
+Low-level access to C API via `ctypes` interface.
+High-level Python API for text completion.
+OpenAI compatible web server
+
+
+%prep
+%autosetup -p1 -n %{pypi_name}-%{version} -Sgit
+
+%build
+%pyproject_wheel
+
+%if %{with test}
+%check
+# most test_llama.py tests utilize model ggml-vocab-llama.gguf from vendored llama.cpp
+%pytest -vs tests/test_llama.py::test_llama_cpp_version tests/test_llama.py::test_logits_to_logprobs tests/test_llama_speculative.py tests/test_llama_chat_format.py tests/test_llama_grammar.py
+%endif
+
+%install
+%pyproject_install
+%pyproject_save_files -l llama_cpp
+
+%files -n python3-%{pypi_name} -f %{pyproject_files}
+%license LICENSE.md
+%doc README.md
+
+%changelog
+%autochangelog
diff --git a/sources b/sources
new file mode 100644
index 0000000..f20aef4
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+SHA512 (llama-cpp-python-0.2.60.tar.gz) = a556054e67c04838b07fa4c768766899a27e6ea0edc55f2fa0d77f6a4dc4c3f1f44cf530f68ca2df0e26b22b6ce059a2a652db9586273e908d35b544372d9e7f
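Patch 0002 above extends the module's shared-library lookup so the distro-installed copy in /usr/lib64 is found. A standalone sketch of the resulting search order on Linux (the function name and structure mirror the patched `_load_shared_library`, simplified here as an assumption):

```python
# Simplified sketch of the search order patch 0002 creates: first the copy
# shipped next to the module, then the system-wide /usr/lib64 library.
import pathlib
import sys

def candidate_lib_paths(lib_base_name: str, base_path: pathlib.Path) -> list:
    paths = []
    if sys.platform.startswith("linux"):
        paths.append(base_path / f"lib{lib_base_name}.so")
        # the line the patch adds: fall back to the distro-installed library
        paths.append(pathlib.Path(f"/usr/lib64/lib{lib_base_name}.so"))
    return paths

# the real code would then try each candidate with ctypes.CDLL(str(path))
```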