Commit Graph

13 Commits

Author SHA1 Message Date
Hans Kristian Rosbach
f411580733 Clean up internal crc32 function handling.
Mark crc32_c and crc32_braid functions as internal, and remove prefix.
Reorder contents of generic_functions, and remove Z_INTERNAL hints from declarations.
Add test/benchmark output to indicate whether Chorba is used.
2025-02-18 23:59:16 +01:00
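
For context, a minimal sketch of the declaration pattern this cleanup describes, assuming zlib-ng's Z_INTERNAL visibility macro stays on the definitions while the header declarations drop it; the signatures and header layout here are illustrative, only the names crc32_c and crc32_braid come from the message.

```c
/* Illustrative only: internal CRC32 entry points declared without the
 * Z_INTERNAL hint (which remains on the definitions). Signatures are
 * assumptions, not copied from zlib-ng. */
#include <stddef.h>
#include <stdint.h>

uint32_t crc32_c(uint32_t crc, const uint8_t *buf, size_t len);
uint32_t crc32_braid(uint32_t crc, const uint8_t *buf, size_t len);
```
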
Sam Russell
b33ba962c2 Implement Chorba algorithm 2025-02-15 14:31:50 +01:00
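
The message doesn't explain the algorithm, so for orientation: below is a plain bitwise CRC-32 over the reflected polynomial 0xEDB88320 (the checksum zlib computes), shown only as the reference result any optimized variant such as Chorba must reproduce. This sketch is not the Chorba implementation itself.

```c
#include <stddef.h>
#include <stdint.h>

/* Baseline bitwise CRC-32 over the reflected polynomial 0xEDB88320, the
 * checksum zlib computes. Any optimized variant (braid, Chorba, SIMD)
 * must produce the same result; this is the reference, nothing more. */
static uint32_t crc32_bitwise(uint32_t crc, const uint8_t *buf, size_t len) {
    crc = ~crc;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}
```
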
Hans Kristian Rosbach
bf05e882b8 Continued cleanup of old UNALIGNED_OK checks
- Remove obsolete checks
- Fix checks that are inconsistent
- Stop compiling compare256/longest_match variants that never get called
- Improve how the generic compare256 functions are handled
- Allow overriding OPTIMAL_CMP (see the sketch after this entry)

This simplifies the code and avoids having a lot of code in the compiled library that can never get executed.
2024-12-26 22:14:46 +01:00
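
A rough sketch of what an overridable OPTIMAL_CMP selection can look like; the arch list and width values here are assumptions for illustration, not zlib-ng's actual definitions.

```c
/* Illustrative sketch: one overridable OPTIMAL_CMP width per arch
 * replaces scattered UNALIGNED_OK checks. Values are assumptions. */
#ifndef OPTIMAL_CMP
#  if defined(__x86_64__) || defined(__aarch64__)
#    define OPTIMAL_CMP 64   /* compare 8 bytes per step */
#  elif defined(__i386__) || defined(__arm__)
#    define OPTIMAL_CMP 32   /* compare 4 bytes per step */
#  else
#    define OPTIMAL_CMP 8    /* byte-at-a-time fallback */
#  endif
#endif
```
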
Hans Kristian Rosbach
1aeb2915a0 Rename functions to get rid of old and now misleading "unaligned" naming 2024-12-26 22:14:46 +01:00
Adam Stylinski
04d1b75819 Make big endians first class citizens again
No longer does the big iron of yore, which lacks SIMD-optimized loads, need
to search strings a byte at a time like primitive machines of the VAX era.
This guard existed mostly because the string comparison searched with
"count trailing zeros", which assumes an endianness. We can just
conditionally use leading zeros on big endian and stop using the extremely
naive C implementation. This makes things a bit faster.
2024-12-21 13:16:08 +01:00
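
To make the endianness point concrete, here is a hedged sketch (GCC/Clang builtins assumed, helper name invented) of locating the first mismatching byte after an 8-byte XOR: trailing zeros on little endian, leading zeros on big endian.

```c
#include <stdint.h>
#include <string.h>

/* After XORing two 8-byte loads, the first differing byte sits at the
 * low end of the register on little endian and the high end on big
 * endian, so the bit-scan direction must flip. Illustrative sketch. */
static inline unsigned first_diff_byte(const unsigned char *a, const unsigned char *b) {
    uint64_t xa, xb;
    memcpy(&xa, a, 8);
    memcpy(&xb, b, 8);
    uint64_t diff = xa ^ xb;
    if (diff == 0)
        return 8; /* all eight bytes match; bit-scan of 0 is undefined */
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
    return (unsigned)(__builtin_clzll(diff) >> 3);
#else
    return (unsigned)(__builtin_ctzll(diff) >> 3);
#endif
}
```
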
Hans Kristian Rosbach
509f6b5818 Since we long ago made unaligned reads safe (by using memcpy or intrinsics),
it is time to replace the UNALIGNED_OK checks, which have since really only been
used to select the optimal comparison sizes for the arch instead.
2024-12-21 00:46:48 +01:00
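
A minimal sketch of the memcpy pattern the message refers to: a fixed-size load through memcpy is defined at any alignment and compiles to a single load instruction where the hardware allows it. The helper name is illustrative, not zlib-ng's.

```c
#include <stdint.h>
#include <string.h>

/* memcpy makes the unaligned read safe; modern compilers lower this to
 * one load on arches that permit unaligned access. Name is invented. */
static inline uint64_t load_u64(const void *p) {
    uint64_t v;
    memcpy(&v, p, sizeof(v));
    return v;
}
```
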
Hans Kristian Rosbach
037ab0fd35 Revert "Since we long ago made unaligned reads safe (by using memcpy or intrinsics),"
This reverts commit 80fffd72f3.
It was mistakenly pushed to develop instead of going through a PR and the appropriate reviews.
2024-12-17 23:09:31 +01:00
Hans Kristian Rosbach
80fffd72f3 Since we long ago made unaligned reads safe (by using memcpy or intrinsics),
it is time to replace the UNALIGNED_OK checks, which have since really only been
used to select the optimal comparison sizes for the arch instead.
2024-12-17 23:02:32 +01:00
Adam Stylinski
94aacd8bd6 Try to simplify the inflate loop by collapsing most cases to chunksets 2024-10-23 21:20:11 +02:00
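
A rough sketch of the chunkset idea, under the standard assumption for this technique (not taken from zlib-ng's code) that the output buffer has slack past the requested length: match copies move a whole chunk per iteration whenever the back-reference distance permits.

```c
#include <stddef.h>
#include <string.h>

#define CHUNK_SIZE 16  /* illustrative chunk width */

/* Copy len bytes from dist bytes back in the output window. Whole
 * chunks are stored, so the caller must leave CHUNK_SIZE-1 bytes of
 * slack past out+len. Names and layout are illustrative. */
static void chunk_copy(unsigned char *out, size_t dist, size_t len) {
    const unsigned char *from = out - dist;
    if (dist >= CHUNK_SIZE) {
        while (len > 0) {
            memcpy(out, from, CHUNK_SIZE);
            out += CHUNK_SIZE;
            from += CHUNK_SIZE;
            len = (len > CHUNK_SIZE) ? len - CHUNK_SIZE : 0;
        }
    } else {
        /* Source and destination overlap within a chunk: byte copy. */
        while (len--) *out++ = *from++;
    }
}
```
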
Vladislav Shchapov
c694bcdaf6 Add option to disable runtime CPU detection
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
2024-03-06 23:32:15 +01:00
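
The mechanics, sketched with assumed names throughout (the log does not show the actual build option or table layout): when runtime detection is compiled out, a call that would go through the dispatch table can bind at compile time instead.

```c
#include <stddef.h>
#include <stdint.h>

/* All names here are assumptions for illustration. */
uint32_t crc32_c(uint32_t crc, const uint8_t *buf, size_t len);

struct functable_s {
    uint32_t (*crc32)(uint32_t crc, const uint8_t *buf, size_t len);
};
extern struct functable_s functable;

/* With detection disabled the call binds at compile time; otherwise it
 * goes through the table, which is filled in once at startup. */
#ifdef DISABLE_RUNTIME_CPU_DETECTION
#  define crc32_dispatch crc32_c
#else
#  define crc32_dispatch (functable.crc32)
#endif
```
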
Hans Kristian Rosbach
9953f12e21 Move update_hash(), insert_string() and quick_insert_string() out of functable
and remove SSE4.2 and ACLE optimizations. The functable overhead is higher
than the benefit from using optimized functions.
2024-02-23 13:34:10 +01:00
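
Why moving tiny helpers out of the functable pays off, as a hedged sketch: a hash update is only a few instructions, so an indirect call through a function pointer can cost more than an SSE4.2 or ACLE variant saves, while a plain static function gets inlined into the match loop. The shift and mask constants below are assumptions, not zlib-ng's.

```c
#include <stdint.h>

#define HASH_SHIFT 5       /* illustrative values */
#define HASH_MASK  0x7FFFu

/* As a static inline function the compiler folds this into its caller;
 * behind a function pointer every call pays indirect-call overhead. */
static inline uint32_t update_hash(uint32_t h, uint32_t c) {
    return ((h << HASH_SHIFT) ^ c) & HASH_MASK;
}
```
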
Vladislav Shchapov
305b268b32 Move select for generic functions into generic_functions.h.
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
2024-02-22 20:11:46 +01:00
Vladislav Shchapov
ac25a2ea6a Split CPU features checks and CPU-specific function prototypes and reduce include-dependencies.
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
2024-02-22 20:11:46 +01:00