Cameron Cawley
d7e121e56b
Use GCC's may_alias attribute for unaligned memory access
2024-12-24 12:55:44 +01:00
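For context, a minimal sketch of the may_alias technique (the type and helper names here are illustrative assumptions, not the actual zlib-ng definitions): a packed struct carrying GCC's may_alias attribute makes a type-punned read both alignment-safe and exempt from strict-aliasing rules, with a memcpy fallback for other compilers.

```c
#include <stdint.h>
#include <string.h>

#if defined(__GNUC__)
/* packed => alignment of 1, may_alias => exempt from strict-aliasing rules,
 * so reading through a cast pointer is well-defined and the compiler can
 * still fold it into a single (possibly unaligned) load where the CPU
 * allows it. */
typedef struct { uint32_t v; } __attribute__((packed, may_alias)) unaligned_u32;

static inline uint32_t load_u32(const void *p) {
    return ((const unaligned_u32 *)p)->v;
}
#else
static inline uint32_t load_u32(const void *p) {
    uint32_t v;
    memcpy(&v, p, sizeof(v));   /* portable fallback, optimized to one load */
    return v;
}
#endif
```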
Hans Kristian Rosbach
509f6b5818
Since we long ago made unaligned reads safe (by using memcpy or intrinsics),
it is time to replace the UNALIGNED_OK checks, which have since only been
used to select the optimal comparison sizes for each architecture.
2024-12-21 00:46:48 +01:00
Hans Kristian Rosbach
037ab0fd35
Revert "Since we long ago make unaligned reads safe (by using memcpy or intrinsics),"
...
This reverts commit 80fffd72f3
.
It was mistakenly pushed to develop instead of going through a PR and the appropriate reviews.
2024-12-17 23:09:31 +01:00
Hans Kristian Rosbach
80fffd72f3
Since we long ago made unaligned reads safe (by using memcpy or intrinsics),
it is time to replace the UNALIGNED_OK checks, which have since only been
used to select the optimal comparison sizes for each architecture.
2024-12-17 23:02:32 +01:00
Cameron Cawley
c327d18131
Make use of unaligned loads on big endian in insert_string
2024-09-15 16:07:50 +02:00
Hans Kristian Rosbach
ef2f8d528c
Remove unused 's' parameter from HASH_CALC macro
2024-02-23 13:34:10 +01:00
Nathan Moinvaziri
a090529ece
Remove deflate_state parameter from update_hash functions.
2024-02-23 13:34:10 +01:00
Nathan Moinvaziri
fc63426372
Update copyright years in other source files.
2024-02-07 19:15:56 +01:00
Dimitri Papadopoulos
73bbb54cf6
Discard repeated words
2023-08-07 08:28:16 +02:00
Nathan Moinvaziri
e22195e5bc
Don't use unaligned access for memcpy instructions, since GCC 11 assumes the memory is aligned in certain instances.
2022-08-17 14:41:18 +02:00
Nathan Moinvaziri
363a95fb9b
Introduce zmemcpy to use unaligned access for architectures we know support unaligned access, otherwise use memcpy.
2022-02-10 16:10:48 +01:00
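A hypothetical sketch of the zmemcpy idea for a fixed 4-byte copy (the name, signature, and UNALIGNED_OK gate are assumptions for illustration, not the exact zlib-ng code):

```c
#include <stdint.h>
#include <string.h>

static inline void zmemcpy_4(void *dest, const void *src) {
#if defined(UNALIGNED_OK)
    /* On architectures known to tolerate unaligned access, a direct cast
     * lets the compiler emit a single word load/store. Note that later
     * commits in this history (e22195e5bc, d7e121e56b) revisit this path,
     * since GCC may assume alignment and aliasing for plain casts. */
    *(uint32_t *)dest = *(const uint32_t *)src;
#else
    memcpy(dest, src, 4);       /* always well-defined */
#endif
}
```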
Nathan Moinvaziri
5bc87f1581
Use memcpy for unaligned reads.
Co-authored-by: Matija Skala <mskala@gmx.com>
2022-01-08 14:33:19 +01:00
Nathan Moinvaziri
f77af71e77
Fixed trailing whitespaces and missing new lines.
2021-09-22 16:00:46 +02:00
Nathan Moinvaziri
5998d5b632
Added update_hash to build hash incrementally.
2021-06-25 20:09:14 +02:00
Nathan Moinvaziri
6948789969
Added rolling hash functions for hash table.
2021-06-25 20:09:14 +02:00
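For illustration, a sketch of a classic zlib-style rolling hash update; zlib-ng's actual update_hash/insert_string templates vary per architecture (for example, a hardware CRC32 variant where available), and the constants below are assumptions, not zlib-ng's.

```c
#include <stdint.h>

#define HASH_BITS  15
#define HASH_MASK  ((1u << HASH_BITS) - 1)
#define HASH_SHIFT 5    /* ~HASH_BITS / MIN_MATCH, so 3 bytes cover the hash */

static inline uint32_t update_hash(uint32_t h, uint8_t c) {
    /* Fold the next input byte into the running hash; after three calls the
     * value depends only on the three most recent bytes, so the hash can be
     * built incrementally as the window slides. */
    return ((h << HASH_SHIFT) ^ c) & HASH_MASK;
}
```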
Hans Kristian Rosbach
cf9127a231
Separate MIN_MATCH into STD_MIN_MATCH and WANT_MIN_MATCH
Rename MAX_MATCH to STD_MAX_MATCH
2021-06-13 20:55:01 +02:00
Nathan Moinvaziri
020b5be33e
Fixed str uint32_t to uint16_t casting warnings in insert_string_tpl.h
insert_string_tpl.h(50,26): warning C4244: '=': conversion from 'const uint32_t' to 'Pos', possible loss of data
insert_string_tpl.h(67,1): warning C4244: 'initializing': conversion from 'const uint32_t' to 'Pos', possible loss of data
2020-11-02 17:01:58 +01:00
Nathan Moinvaziri
7cffba4dd6
Rename ZLIB_INTERNAL to Z_INTERNAL for consistency.
2020-08-31 12:33:16 +02:00
Hans Kristian Rosbach
e7bb6db09a
Replace hash_bits, hash_size and hash_mask with defines.
2020-08-23 09:57:45 +02:00
Hans Kristian Rosbach
0cd1818e86
Remove return value from insert_string, since it is always ignored and
quick_insert_string is being used instead.
2020-08-21 09:46:03 +02:00
Hans Kristian Rosbach
5b5677abd3
Now that the check is out of the loop, it is also safe to remove it
and unconditionally return head.
2020-08-21 09:46:03 +02:00
Hans Kristian Rosbach
9b6af1519c
Minor optimization of insert_string template.
2020-08-21 09:46:03 +02:00
Nathan Moinvaziri
38e5e4b20c
Store hash_mask in local variable for insert_string loop.
2020-08-20 21:49:17 +02:00
Nathan Moinvaziri
dd753715a9
Move zero check for insert_string count to fill_window since it is the only place where count is ever passed as zero.
2020-08-20 21:49:17 +02:00
Nathan Moinvaziri
9ee4f8a100
Fixed many possible loss-of-data warnings where the insert_string and quick_insert_string functions are used on Windows.
2020-08-14 22:20:50 +02:00
Nathan Moinvaziri
7e3d9be44c
Change quick_insert_string memory access to be similar to insert_string.
2020-05-30 21:25:18 +02:00
Nathan Moinvaziri
9b7a52352c
Remove extra lines between functions and their comments.
2020-05-30 21:25:18 +02:00
Nathan Moinvaziri
0129e88cee
Removed TRIGGER_LEVEL byte masking from INSERT_STRING and UPDATE_HASH due to poor performance on levels 6 and 9 especially with optimized versions of UPDATE_HASH.
From commit d306c75d3b:
.. we hash 4 bytes, instead of 3, for certain levels. This shortens the hash chains, and also improves the quality
of each hash entry.
2020-04-30 10:01:46 +02:00
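A sketch of what hashing 4 bytes instead of 3 can look like, using a multiplicative hash over a single 32-bit read: more input bits per entry spread matches across more buckets, shortening the hash chains. The constant and HASH_BITS are assumptions, not necessarily zlib-ng's exact HASH_CALC parameters.

```c
#include <stdint.h>
#include <string.h>

#define HASH_BITS 16

static inline uint32_t hash4(const uint8_t *p) {
    uint32_t val;
    memcpy(&val, p, sizeof(val));               /* safe unaligned 4-byte read */
    /* Knuth's golden-ratio multiplier mixes all 32 input bits into the
     * top HASH_BITS bits of the product. */
    return (val * 2654435761u) >> (32 - HASH_BITS);
}
```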
Nathan Moinvaziri
69bbb0d823
Standardize insert_string functionality across architectures. Added unaligned conditionally compiled code for insert_string and quick_insert_string. Unify sse42 crc32 assembly between insert_string and quick_insert_string. Modified quick_insert_string to work across architectures.
2020-04-30 10:01:46 +02:00