I am working on a dataset that contains a large chunk of duplicate/near-duplicate data. I am using HDBSCAN for clustering, but the duplicated data is being split across multiple clusters.
Ideally this shouldn't happen, and the whole chunk should end up in a single cluster. I understand this might be an effect of running UMAP on the BERT embeddings, but dimensionality reduction is required and can't be bypassed.
Deduplicating (dropping the duplicates, running HDBSCAN, then remapping the dropped rows to their designated clusters) works reasonably well, but I am required NOT to drop any datapoints for clustering.
I also came up with a method that remaps the duplicates to the right cluster without dropping them, but this feels like a workaround rather than a real fix.
Is there a proper fix for this issue, or is this expected behaviour given how HDBSCAN is designed?
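For reference, here is a minimal sketch of the dedup-then-remap idea I described, assuming exact duplicates and a generic clustering callable (the `toy_clusterer` below is a hypothetical stand-in for the real UMAP + HDBSCAN pipeline, just to keep the example self-contained):

```python
import numpy as np

def cluster_with_exact_duplicates(X, cluster_fn):
    """Collapse exact duplicate rows, cluster only the unique rows,
    then broadcast each unique row's label back to all of its copies."""
    unique_rows, inverse = np.unique(X, axis=0, return_inverse=True)
    unique_labels = np.asarray(cluster_fn(unique_rows))
    # inverse[i] is the index of X[i] within unique_rows, so this
    # assigns every duplicate the label of its unique representative.
    return unique_labels[inverse]

# Hypothetical stand-in clusterer: label points by the sign of the
# first coordinate (in practice this would be UMAP + HDBSCAN).
def toy_clusterer(X):
    return (X[:, 0] > 0).astype(int)

X = np.array([[1.0, 2.0],
              [1.0, 2.0],
              [-1.0, 0.5],
              [1.0, 2.0]])
labels = cluster_with_exact_duplicates(X, toy_clusterer)
# All three copies of [1.0, 2.0] are guaranteed the same label.
```

This guarantees identical rows share a label by construction, but as noted above it only sidesteps the symptom; near-duplicates (not byte-identical) would still need a tolerance-based grouping step.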
Thanks for your time and efforts.
Regards,
Sayyam