Request for non-cumulative prediction options #21642

@ErfanMowlaei

Description

The story:
I am facing an issue with large data: using model.predict plus a result-offloading callback (to write the results to disk) gives much better performance than looping over batches with model.predict_on_batch() or model(input, training=False) and offloading manually. However, model.predict accumulates all the results in memory, which causes an out-of-memory error.

The request
I was wondering if you could add an argument to model.predict that prevents it from accumulating the model predictions, allowing a callback to handle the results instead.
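For reference, the batch-looping workaround mentioned above can be sketched as follows. This is a minimal illustration of the streaming pattern, not Keras-specific code: `fake_predict` and `predict_streaming` are hypothetical stand-ins (in real code, `model.predict_on_batch(batch)` would replace `fake_predict`, and `offload` would write each batch's output to disk), used here so the pattern runs without TensorFlow installed.

```python
import numpy as np

def fake_predict(batch):
    # Hypothetical stand-in for model.predict_on_batch(batch):
    # returns one output row per input sample.
    return batch * 2.0

def predict_streaming(data, batch_size, offload):
    """Run prediction batch by batch, handing each batch's result to
    `offload` instead of accumulating everything in memory."""
    for start in range(0, len(data), batch_size):
        batch = data[start:start + batch_size]
        # Each result is consumed immediately; nothing is retained here.
        offload(start // batch_size, fake_predict(batch))

# Illustrative usage: collect (batch_index, outputs) pairs; a real
# offload function would instead write each result to disk.
written = []
predict_streaming(np.arange(10, dtype=np.float32), batch_size=4,
                  offload=lambda i, out: written.append((i, out)))
```

The requested predict argument would let the optimized predict loop drive this same pattern, with the callback playing the role of `offload`.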
