Our bindings are becoming dependencies in more places. Each foreign language presents a unique opportunity for bugs and so must be held to some standard of quality assurance. Some combination of code quality, tests, and CI seems appropriate.
I asked ChatGPT 4.5 to do some deep research to inform this memo. It dives deep into how other efforts with similar products handle tests; I definitely suggest taking a look.
The common themes I'd like to emphasize are:

- Strong core testing (in our case, of the payjoin crate). E.g. fuzzing untrusted data at this layer can catch problems downstream (see the sketch after this list).
- Dedicated per-language tests in each language's usual testing infrastructure.
- Abstract and share test logic where feasible (ahead of the game with payjoin-test-utils already ⭐️).
- Minimize glue complexity: keep binding layers as thin and declarative as possible. You can see this getting hairy in our persistence update.
- Leverage generators and IDLs: I can't help but wonder if we're making our lives difficult with flutter_rust_bridge AND uniffi. Could we just use uniffi and wrap the native Swift/Kotlin bindings instead?
- Use CI to check builds at a minimum before declaring support, and minimize the jobs needed to test core functionality across languages.
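To make the fuzzing point concrete, here is a minimal cargo-fuzz sketch. `payjoin_fuzz::parse_untrusted_payload` is a hypothetical stand-in for whichever payjoin entry point first touches untrusted bytes (e.g. a v2 request parser); the only claim is that parsing arbitrary input should return an error rather than panic.

```rust
// fuzz/fuzz_targets/parse_untrusted.rs — minimal cargo-fuzz sketch.
// `payjoin_fuzz::parse_untrusted_payload` is a hypothetical stand-in for
// whichever payjoin API first handles untrusted bytes.
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Errors on garbage input are expected; panics are the bugs this
    // target exists to find.
    let _ = payjoin_fuzz::parse_untrusted_payload(data);
});
```

Running something like this in the core repo gives every downstream binding the same guarantee for free.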
For now, we've only actually shipped payjoin-flutter in a product, but we have a few more bindings coming.
## payjoin-flutter
Payjoin Flutter's QA is based around a smoke test of the happy path in the flutter demo payjoin app. We should automate this so that we can make updates more easily and verify more of the behavior.
The repo has generated code committed. The source from which the bindings are generated should be sufficient to define the API, and therefore is all that should be committed; generated code is not source. It would be more appropriate to document the generation steps and exercise them in CI than to commit generated files.
I would also like to see automated tests that demonstrate defense against the most common failure modes, like disrupted relay/directory communication and probing attacks. This should be possible by introducing payjoin-test-utils bindings, moving the common test inputs into that library, and running the same scenarios at each language's bindings layer (a sketch follows).
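As a rough sketch of what a shared failure-mode scenario could look like once payjoin-test-utils is exposed over the bindings: assume it grows a bindings-friendly facade that spins up a local relay and directory and can kill them mid-session. The names below (`TestServices`, `stop_relay`, `run_send_session`) are hypothetical placeholders, not the crate's current API.

```rust
// Hypothetical shared failure-mode scenario, written in Rust here but meant
// to be mirrored in each binding language's test suite. All helper names
// are placeholders for a future payjoin-test-utils bindings facade.
use payjoin_test_utils::TestServices;

#[tokio::test]
async fn sender_surfaces_error_when_relay_goes_away() {
    let services = TestServices::initialize().await.expect("start relay + directory");

    // Simulate disrupted relay/directory communication mid-session.
    services.stop_relay().await;

    // Drive the shared happy-path sender logic against the dead relay.
    let result = services.run_send_session().await;
    assert!(result.is_err(), "the sender must return an error, not hang or panic");
}
```

The same scenario, written once, could then run in Dart, Kotlin, Swift, and Python against the exact same test services.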
## python
Python is built and tested with uniffi; however, nobody uses it downstream yet, so it can be deprioritized.
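That said, if we do consolidate on uniffi (per the generators-and-IDLs point above), keeping Python from silently rotting is cheap, because the interface is declared once on the Rust side. A minimal illustration with uniffi's proc macros; the exported function here is an example, not part of the payjoin API:

```rust
// lib.rs of a uniffi bindings crate — illustrative only.
uniffi::setup_scaffolding!();

/// Trivial exported function. uniffi generates the Python (and Kotlin/Swift)
/// wrappers, so a per-language smoke test only needs to import the generated
/// module and call it.
#[uniffi::export]
pub fn version_string() -> String {
    // In the real crate this would delegate to payjoin core types.
    env!("CARGO_PKG_VERSION").to_string()
}
```

A one-line pytest that imports the generated module and calls `version_string()` would be enough of a build check until someone depends on the Python package.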