
Conversation

@eickenberg

Very much still a work in progress. This is as yet untested, but I am putting it online because I currently can't find recommendations on how this should be integrated and need some input.

Should I leave the torch code as an independent core_torch file and keep it minimal?

Another option is to operate with backends, since the files do not differ that much, but that would involve a lot more work and modifications to the remaining code.

Any input appreciated, since I currently don't have the big picture.

anwarnunez (Contributor) commented May 9, 2020

Sweet!
I think we can keep it simple for now. A separate core_torch would be fine for the time being. The pyramid class that ends up using it can be explicit and a bit ad-hoc at first (e.g. moten.pyramids.MotionEnergyPyramidGPU()).
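
For concreteness, here is a rough sketch of what such an ad-hoc class could look like. Everything below is hypothetical: the core_torch module, its project_stimulus function, and the constructor arguments are assumptions for illustration, not existing moten API.

from moten import core_torch  # hypothetical module proposed in this PR

class MotionEnergyPyramidGPU:
    """Ad-hoc GPU pyramid that delegates the heavy lifting to core_torch (sketch only)."""

    def __init__(self, stimulus_vhsize=(96, 96), stimulus_fps=24, device='cuda'):
        self.stimulus_vhsize = stimulus_vhsize
        self.stimulus_fps = stimulus_fps
        self.device = device

    def project_stimulus(self, stimulus):
        # Assumed core_torch entry point mirroring the CPU implementation
        return core_torch.project_stimulus(stimulus,
                                           vhsize=self.stimulus_vhsize,
                                           fps=self.stimulus_fps,
                                           device=self.device)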

Long term, I think you're right that operating with a backend would be better, and it will require a lot more work. That said, if we replace np.dot with the @ syntax, we'll be 80% of the way there.
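
For what it's worth, the @ operator already dispatches correctly for both libraries (np.matmul for arrays, torch.matmul for tensors), so code written with it is largely backend-agnostic. A minimal, purely illustrative example:

import numpy as np
import torch

def project(weights, data):
    # `@` resolves to np.matmul on ndarrays and torch.matmul on tensors,
    # so the same line works for either backend.
    return weights @ data

out_np = project(np.random.randn(3, 5), np.random.randn(5, 4))  # numpy ndarray
out_th = project(torch.randn(3, 5), torch.randn(5, 4))          # torch tensor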

An intermediate option is to do something hacky:

xp = torch if backend == 'cuda' else np  # assumes numpy imported as np and torch imported
if backend == 'cuda':
    # crude shim: alias dot to mm so the torch module mimics np.dot for 2D inputs
    xp.dot = xp.mm

But maybe let's not do that yet =)

I think it's better, as you suggest, to keep things independent and minimal. We can then refactor toward a backend later. On that front, any pointers on implementing a numeric backend would be useful.
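
One pattern that might work, similar in spirit to CuPy's get_array_module, is a small helper that hands back the array module and lets the rest of the code stay backend-agnostic. The names here are illustrative only, not a proposal for moten's actual API:

import numpy as np

def get_backend(name='numpy'):
    # Illustrative helper: return the array module for the requested backend.
    if name == 'torch':
        import torch
        return torch
    return np

xp = get_backend('numpy')
a = xp.zeros((4, 4))
b = xp.ones((4, 4))
c = a @ b  # identical code path if xp is torch: both modules define zeros/ones and support @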

Base automatically changed from master to main January 20, 2021 17:55
