Allocate memory for benchmarkPolyfill on the heap by isaacbrodsky · Pull Request #198 · uber/h3 · GitHub

Allocate memory for benchmarkPolyfill on the heap #198

Merged
merged 1 commit into uber:master from isaacbrodsky:benchmark-polyfill-heap on Feb 27, 2019

Conversation

isaacbrodsky
Collaborator

Fixes #197

It looks like it's failing while allocating on the stack (usually done for procedure entry, in this case for a stack array allocation), which points to a problem with STACK_ARRAY_CALLOC. I changed the benchmark to allocate on the heap instead. This does mean the benchmark now also covers allocating the memory on the heap; that could be removed from the benchmark by moving the call to maxPolyfillSize outside the benchmark.
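
For context, a minimal sketch of the shape of the change, assuming the unprefixed H3 3.x polyfill API; the wrapper function and variable names here are illustrative, not the PR's exact diff:

```c
#include <stdlib.h>
#include "h3api.h"

// Sketch of the benchmarked body after this change: the output buffer is
// now calloc'd on the heap instead of coming from STACK_ARRAY_CALLOC.
void runPolyfillOnHeap(const GeoPolygon* polygon, int res) {
    int numHexagons = maxPolyfillSize(polygon, res);
    H3Index* hexagons = calloc(numHexagons, sizeof(H3Index));
    polyfill(polygon, res, hexagons);
    free(hexagons);
}
```

As noted above, the maxPolyfillSize call and the allocation could instead be hoisted outside the benchmarked body so the timing covers only polyfill itself.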

@coveralls

Coverage Status

Coverage remained the same at 100.0% when pulling 67527cc on isaacbrodsky:benchmark-polyfill-heap into d610a22 on uber:master.

@apiszcz
apiszcz commented Feb 27, 2019

Great news on replicating the error and the change.

Separate but somewhat related question:
I left a question on the h3-py issues: is Windows support planned for the H3 Python wrapper?

uber/h3-py#27

Collaborator
@nrabinowitz left a comment

Fix looks fine - does this indicate a general problem with STACK_ARRAY_CALLOC that we need to address?

@dfellis
Collaborator
dfellis commented Feb 27, 2019

Fix looks fine - does this indicate a general problem with STACK_ARRAY_CALLOC that we need to address?

Allocating on the stack is always going to be a bit of a wild card -- you're eating into the same memory you use to make function calls, so the amount you can safely allocate varies with how deep in the call stack you are. (Apparently 64-bit Windows consumes more of the stack before your program runs than 32-bit Windows and Unix-like OSes do.) It's fine for small allocations, since it can be faster than the heap and is similar in a sense to just passing more arguments to your function, but for dynamically sized allocations it's not a great idea.
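
For illustration only (not H3 code), a minimal sketch of the trade-off being described, assuming a C99 compiler; per this thread, STACK_ARRAY_CALLOC boils down to a stack allocation like the first variant:

```c
#include <stdlib.h>

// Dynamically sized scratch buffer on the stack (C99 VLA). Fast, but a
// large n can blow the stack, and the safe limit shrinks the deeper the
// call stack already is.
void useStackBuffer(size_t n) {
    int scratch[n];
    scratch[0] = 42;
}

// The same buffer on the heap. Slower to obtain, but limited only by
// available memory, and allocation failure is detectable.
void useHeapBuffer(size_t n) {
    int* scratch = calloc(n, sizeof(int));
    if (scratch == NULL) return;
    scratch[0] = 42;
    free(scratch);
}
```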

We already removed usage of STACK_ARRAY_CALLOC from libh3, but still use it in some of the benchmarks and other apps.

@isaacbrodsky I wasn't going to bring this up, but does this change remove the last calls to STACK_ARRAY_CALLOC?

@dfellis
Collaborator
dfellis commented Feb 27, 2019

Great news on replicating the error and the change.

Separate but somewhat related question:
I left a question on the h3-py issues: is Windows support planned for the H3 Python wrapper?

uber/h3-py#27

That's more of a question for @TengHu, but it's definitely doable.

I recently added Windows support to h3-node, so if my free time permits I could tackle that.

@isaacbrodsky
Collaborator Author

@isaacbrodsky I wasn't going to bring this up, but does this change remove the last calls to STACK_ARRAY_CALLOC?

It does not: it's still used in some tests. I think we could entirely remove it (and supporting build code), since that may help eliminate this type of issue in the future.

@isaacbrodsky isaacbrodsky merged commit 5174df2 into uber:master Feb 27, 2019
@isaacbrodsky isaacbrodsky deleted the benchmark-polyfill-heap branch February 27, 2019 19:57
@nrabinowitz
Collaborator

I think we could entirely remove it (and supporting build code), since that may help eliminate this type of issue in the future.

👍

mrdvt92 pushed a commit to mrdvt92/h3 that referenced this pull request on Jun 19, 2022: Allocate memory for benchmarkPolyfill on the heap