Fix decode strategy when splitting a chunk by arthurstomp · Pull Request #3 · cfcosta/muhex · GitHub

Fix decode strategy when splitting a chunk #3


Open

wants to merge 1 commit into base: main
Conversation

arthurstomp
Contributor

This PR fixes the remaining failing test in CI.

The encode function creates its 2x-byte output by merging two vectors of x bytes, interleaving the values based on whether the index is even or odd. However, the decode function splits the same 2x-byte vector by simply cutting it in half. That difference in splitting strategy appears to be the cause of the failing test.

This PR adds a new function, nibble_chunk, that splits the larger vector into two smaller vectors using the even/odd strategy from the encode function.
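For illustration, here is a minimal sketch of the mismatch using plain byte slices rather than the crate's u8x16/u8x32 SIMD vectors (the helper names `interleave` and `deinterleave` are hypothetical, not from the crate): splitting the interleaved output in half does not recover the original vectors, while an even/odd split does.

```rust
/// Interleave two equal-length slices into [h0, l0, h1, l1, ...],
/// mirroring how encode merges its high and low vectors.
fn interleave(high: &[u8], low: &[u8]) -> Vec<u8> {
    high.iter().zip(low).flat_map(|(&h, &l)| [h, l]).collect()
}

/// De-interleave by even/odd index -- the strategy this PR adds to decode.
fn deinterleave(merged: &[u8]) -> (Vec<u8>, Vec<u8>) {
    let high = merged.iter().step_by(2).copied().collect();
    let low = merged.iter().skip(1).step_by(2).copied().collect();
    (high, low)
}

fn main() {
    let high = [b'4', b'6', b'6'];
    let low = [b'8', b'9', b'f'];
    let merged = interleave(&high, &low);

    // Cutting the merged vector in half does NOT give the halves back:
    let (a, b) = merged.split_at(merged.len() / 2);
    assert_ne!((a.to_vec(), b.to_vec()), (high.to_vec(), low.to_vec()));

    // De-interleaving by even/odd index recovers the original vectors:
    let (h, l) = deinterleave(&merged);
    assert_eq!(h, high);
    assert_eq!(l, low);
}
```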

@arthurstomp
Contributor Author

@cfcosta, is this build failure caused by a formatting issue?

Below is the log I get from nix-store when I try `nix flake check`.

WARN no formatter for path: LICENSE-APACHE
traversed 4 files
emitted 2 files for processing
formatted 2 files (0 changed) in 24ms
2025/02/13 20:43:58 INFO using config file: /nix/store/rpyryr563jjm9li5x8avljprd4haqpkm-treefmt.toml
ERRO file has changed path=src/lib.rs prev_size=7139 prev_mod_time="1970-01-01 00:00:01 +0000 UTC" current_size=7141 current_mod_time="2025-02-13 20:43:58 +0000 UTC"
traversed 1 files
emitted 1 files for processing
formatted 1 files (1 changed) in 30ms
Error: unexpected changes detected, --fail-on-change is enabled

diff --git a/src/lib.rs b/src/lib.rs
index c74f202..264fbe1 100644
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -145,19 +145,18 @@ fn decode_hex_nibble(n: u8x16) -> Result<u8x16, Error> {
 #[inline]
 fn nibble_chunck(chunk: &[u8]) -> (u8x16, u8x16) {
     let parsed_chunk = u8x32::from_slice(chunk);
-    let mut high_bytes_vec: Vec<u8> = vec![];
-    let mut low_bytes_vec: Vec<u8> = vec![];
-    for (index,piece) in parsed_chunk.to_array().iter().enumerate() {
+    let mut high_bytes_vec: Vec<u8> = vec![];
+    let mut low_bytes_vec: Vec<u8> = vec![];
+    for (index, piece) in parsed_chunk.to_array().iter().enumerate() {
         if index % 2 == 0 {
             high_bytes_vec.push(piece.clone())
         } else {
             low_bytes_vec.push(piece.clone())
         }
-
     }
     (
-      u8x16::from_slice(&high_bytes_vec),
-      u8x16::from_slice(&low_bytes_vec),
+        u8x16::from_slice(&high_bytes_vec),
+        u8x16::from_slice(&low_bytes_vec),
     )
 }

@arthurstomp
Contributor Author

Tests are passing, though.


Owner
@cfcosta cfcosta left a comment


Very nice! Just a few little nitpicks.

@@ -142,6 +142,22 @@ fn decode_hex_nibble(n: u8x16) -> Result<u8x16, Error> {
Ok(result)
}

#[inline]
fn nibble_chunck(chunk: &[u8]) -> (u8x16, u8x16) {
Owner


nibble_chunk

#[inline]
fn nibble_chunck(chunk: &[u8]) -> (u8x16, u8x16) {
let parsed_chunk = u8x32::from_slice(chunk);
let mut high_bytes_vec: Vec<u8> = vec![];
Owner


Instead of using `Vec` to allocate on the heap, can we just use `[u8; X]` directly? That way we don't need to allocate memory.
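A stack-allocated variant along those lines might look like the following sketch. It uses plain fixed-size arrays instead of the crate's u8x16/u8x32 SIMD types so it stands alone; the even/odd index arithmetic is the same as in the PR.

```rust
/// Split a 32-byte chunk into its even-indexed (high) and
/// odd-indexed (low) halves without any heap allocation.
fn nibble_chunk(chunk: &[u8; 32]) -> ([u8; 16], [u8; 16]) {
    let mut high = [0u8; 16];
    let mut low = [0u8; 16];
    for i in 0..16 {
        high[i] = chunk[2 * i];     // even indices
        low[i] = chunk[2 * i + 1];  // odd indices
    }
    (high, low)
}

fn main() {
    // Fill a chunk with 0..=31 so the split is easy to check.
    let mut chunk = [0u8; 32];
    for (i, b) in chunk.iter_mut().enumerate() {
        *b = i as u8;
    }
    let (high, low) = nibble_chunk(&chunk);
    assert_eq!(high[0], 0);
    assert_eq!(low[0], 1);
    assert_eq!(high[15], 30);
    assert_eq!(low[15], 31);
}
```

Because both array lengths are known at compile time, this avoids the heap entirely and lets the compiler elide the bounds checks in the loop.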
