Add ability to ignore some positions in the triangulation #128
Conversation
Hi @JimmyCushnie, Many thanks for your contribution to the project! As you mentioned in the PR, there are some issues to resolve:
Best,
  public NativeArray<bool> ignorePositions;
  [NativeDisableContainerSafetyRestriction]
- private NativeArray<int> ids;
+ private NativeList<int> ids;
You do not have to change the array into a list. See the following trick:
var a = new NativeArray<int>(64, Allocator.Persistent);
UnityEngine.Debug.Log(a.Length); // Length = 64
var count = 0;
for (int i = 0; i < 32; i++)
{
    a[count++] = i;
}
a = a.GetSubArray(0, count);
UnityEngine.Debug.Log(a.Length); // Length = 32
a.Dispose();

You can combine if (ignorePositions.IsCreated && ignorePositions[i]) with this approach to populate the array. Utilizing Burst may offer better optimization, although it's not strictly necessary in this case.
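For concreteness, here is a minimal sketch of that combination, assuming positions and ignorePositions fields like the ones touched by this PR (illustrative only, not the PR's actual code):

var ids = new NativeArray<int>(positions.Length, Allocator.Persistent);
var count = 0;
for (int i = 0; i < positions.Length; i++)
{
    // Skip any position flagged in the optional mask.
    if (ignorePositions.IsCreated && ignorePositions[i]) continue;
    ids[count++] = i;
}
// Shrink to the populated prefix without reallocating or copying.
ids = ids.GetSubArray(0, count);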
Neat. Is an array meaningfully faster here though? I thought NativeList was just a wrapper around NativeArray that does basically this?
NativeArray is slightly faster than NativeList because it involves fewer bounds checks, and one less pointer indirection. It can also be easier for LLVM to optimize NativeArrays using SIMD, which can have a large impact.
For many workloads, the difference is insignificant, though.
The project uses conventional commits; before merging, please clean up all commit messages, e.g.
Regarding the failing CI, I'll need to update the repository's secrets and workflows to enable secrets for forks. I'm hoping to find time for it over the weekend.
When creating a new triangulation job, you can now optionally pass a NativeArray<bool> called IgnorePositions. This array should be the same length as the Positions array. Any indices that are true in IgnorePositions will not be used in the triangulation. Example use case: regenerate a mesh's triangles using only a subset of its vertices, without having to change the mesh's vertices array.
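For illustration only, here is a rough sketch of the intended usage; the setup of positions/ignorePositions is assumed, and the actual triangulator call is left as a placeholder rather than a claim about the library's API:

// Four vertices of a quad; vertex 2 is flagged so it takes no part in the triangulation.
var positions = new NativeArray<float2>(4, Allocator.Persistent);
positions[0] = new float2(0, 0);
positions[1] = new float2(1, 0);
positions[2] = new float2(1, 1);
positions[3] = new float2(0, 1);

// Same length as positions; true means "ignore this vertex", and its index stays valid.
var ignorePositions = new NativeArray<bool>(positions.Length, Allocator.Persistent);
ignorePositions[2] = true;

// ... create and run the triangulation job, passing Positions = positions and IgnorePositions = ignorePositions ...

positions.Dispose();
ignorePositions.Dispose();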
451148a to 24b6ea8
@JimmyCushnie Best,
Nice PR. But I question whether this PR is necessary. Can't the input be a deferred array that you fill with only the positions you want to triangulate? It seems somewhat out of scope for this library to handle ignoring arbitrary points. Furthermore, it adds indirect lookups in several locations, which has the potential to reduce performance for the common case.
Hi @HalfVoxel,
I believe this could be useful in certain scenarios, as mentioned in the PR message. @JimmyCushnie says:
Certainly, one could utilize a deferred array, but this additional "mask" enables triangulation without the need for data preprocessing and postprocessing. Regarding performance, I think it should not influence performance when the mask is not provided. Additionally, I think this bool mask is a really cheap operation when it is provided.

Best,
@HalfVoxel, to add on to what Andrzej said: I'm using this to triangulate dozens of high-vertex-count meshes per second, and my project targets low-end mobile devices. The solution you describe would be too much performance overhead for my use case, not to mention it would significantly complicate my code compared to what I can do with this PR.

Just for clarity, the main benefit of this feature for me is that the indices of the positions do not change when some of the positions are ignored. I acknowledge it's a pretty niche feature, but if we can do it without compromising performance in the common case, I don't see why not to do it.
I very much doubt that a job to do the filtering before the triangulation would have any significant performance impact. Something like this should do the trick, I think (written without any testing):

[BurstCompile]
struct FilterPositions : IJob {
    public NativeArray<float2> inputPositions;
    public NativeList<float2> outputPositions;
    public NativeList<int> outputIndices;
    public NativeArray<bool> ignorePositions;

    public void Execute() {
        int k = 0;
        outputPositions.Resize(inputPositions.Length, NativeArrayOptions.UninitializedMemory);
        outputIndices.Resize(inputPositions.Length, NativeArrayOptions.UninitializedMemory);
        for (int i = 0; i < inputPositions.Length; i++) {
            if (!ignorePositions[i]) {
                outputIndices[k] = i;
                outputPositions[k] = inputPositions[i];
                k++;
            }
        }
        outputPositions.Length = k;
        outputIndices.Length = k;
    }
}

Then you just feed the outputPositions to the triangulator as a deferred NativeArray. I know full well how easily a library can end up with special cases for tons of niche use cases, so I just want to suggest an alternative that I think will perform just as well.
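As a hedged sketch of that last step (the variable setup and the triangulation scheduling are assumed, not shown in the original comment), the filter job could be chained roughly like this:

var outputPositions = new NativeList<float2>(Allocator.TempJob);
var outputIndices = new NativeList<int>(Allocator.TempJob);

var filterHandle = new FilterPositions
{
    inputPositions = positions,
    outputPositions = outputPositions,
    outputIndices = outputIndices,
    ignorePositions = ignorePositions,
}.Schedule();

// Resolves to the filtered contents once filterHandle has completed.
NativeArray<float2> deferredPositions = outputPositions.AsDeferredJobArray();
// ... schedule the triangulation with deferredPositions, depending on filterHandle ...
filterHandle.Complete();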
  min = math.min(min, p);
  max = math.max(max, p);
- ids[i] = i;
+ ids.Add(i);
As context. This loop was probably fully vectorized before the PR. But with the PR, you have introduced three additional branches (two for the IF, and one for the Add method to check if the list needs resizing). This likely makes this particular loop a lot slower. However, it is likely that this loop is not a significant fraction of the run time anyway.
In any case, I'd recommend profiling with/without the PR changes to see if there's any perf impact in the common case. Make sure to profile without burst safety checks.
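For what it's worth, a minimal timing sketch with Unity's ProfilerMarker; RunTriangulation() is a hypothetical stand-in for scheduling and completing the actual triangulation job:

var marker = new Unity.Profiling.ProfilerMarker("Triangulation");
using (marker.Auto())
{
    RunTriangulation(); // hypothetical stand-in for the real job schedule + Complete()
}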
Also, as it is right now, the optional pre-processor in this package will still try to handle all points in the input. The input validation code will also not ignore them. It's not trivial to make the pre-processor just ignore them, as there's also some post-processing code to transform positions that would need to be updated.
Thanks, @HalfVoxel, for mentioning preprocessors! I forgot about them! 👍
I agree, and I think that this feature should be introduced on the "path of least resistance." It should be targeted at advanced users who would benefit from it. However, users who want to use it are probably not interested in preprocessors (or, if they are, they should preprocess on their own, including ignoring positions). In my opinion, the best way to introduce it is with support only for the case without preprocessors.

Elaborating on my previous comment, the validation for this feature should include:
Hi @JimmyCushnie, how's it going? I've recently made huge progress with the package.

Best,
As per our discussion on my fork, I am opening a PR for this feature.
When creating a new triangulation job, you can now optionally pass a NativeArray<bool> called IgnorePositions. This array should be the same length as the Positions array. Any indices in Positions that are true in IgnorePositions will not be used in the triangulation.

I'm using this to regenerate the triangles of a mesh using a subset of its vertices without needing to update the mesh's vertices.
Things to address before merging:
If IgnorePositions is created, should we validate that it's the same length as Positions?

Thank you @andywiecko for making this excellent library, for encouraging me to contribute, and for your generous help with this code.