
chore(deps): Update dependency keras to v3#178

Open
renovate[bot] wants to merge 1 commit into master from renovate/major-tensorflow-group

Conversation


@renovate renovate bot commented Nov 28, 2023

ℹ️ Note

This PR body was truncated due to platform limits.

This PR contains the following updates:

Package Change
keras ==2.15.0 → ==3.13.2

Release Notes

keras-team/keras (keras)

v3.13.2

Compare Source

Security Fixes & Hardening

This release introduces critical security hardening for model loading and saving, alongside improvements to the JAX backend metadata handling.

  • Disallow TFSMLayer deserialization in safe_mode (#​22035)

    • Previously, TFSMLayer could load external TensorFlow SavedModels during deserialization without respecting Keras safe_mode. This could allow the execution of attacker-controlled graphs during model invocation.
    • TFSMLayer now enforces safe_mode by default. Deserialization via from_config() will raise a ValueError unless safe_mode=False is explicitly passed or keras.config.enable_unsafe_deserialization() is called.
  • Fix Denial of Service (DoS) in KerasFileEditor (#​21880)

    • Introduces validation for HDF5 dataset metadata to prevent "shape bomb" attacks.
    • Hardens the .keras file editor against malicious metadata that could cause dimension overflows or unbounded memory allocation (unbounded numpy allocation of multi-gigabyte tensors).
  • Block External Links in HDF5 files (#​22057)

    • Keras now explicitly disallows external links within HDF5 files during loading. This prevents potential security risks where a weight file could point to external system datasets.
    • Includes improved verification for H5 Groups and Datasets to ensure they are local and valid.
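The kind of metadata check described above can be pictured with a pure-Python sketch (hypothetical names and an illustrative budget, not the actual Keras implementation): validate the advertised shape before any memory is reserved.

```python
# Hypothetical sketch of "shape bomb" validation: reject dataset shapes
# whose total element count would force an enormous allocation, before
# anything is allocated. The budget is illustrative, not Keras's limit.
MAX_ELEMENTS = 10**9

def validate_shape(shape, max_elements=MAX_ELEMENTS):
    total = 1
    for dim in shape:
        if dim < 0:
            raise ValueError(f"invalid negative dimension: {dim}")
        total *= dim
        if total > max_elements:
            raise ValueError(f"shape {shape} exceeds {max_elements} elements")
    return total

validate_shape((1024, 1024))      # ~1e6 elements: accepted
# validate_shape((2**20, 2**20))  # ~1e12 elements: raises ValueError
```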

Backend-specific Improvements (JAX)

  • Set mutable=True by default in nnx_metadata (#​22074)
    • Updated the JAX backend logic to ensure that variables are treated as mutable by default in nnx_metadata.
    • This makes Keras 3.13.2 compatible with Flax 0.12.3 when the Keras NNX integration is enabled.

Saving & Serialization

  • Improved H5IOStore Integrity (#​22057)
    • Refactored H5IOStore and ShardedH5IOStore to remove unused, unverified methods.
    • Fixed key-ordering logic in sharded HDF5 stores to ensure consistent state loading across different environments.

Contributors

We would like to thank the following contributors for their security reports and code improvements:
@​0xManan, @​HyperPS, @​hertschuh, and @​divyashreepathihalli.

Full Changelog: keras-team/keras@v3.13.1...v3.13.2

v3.13.1

Compare Source

Bug Fixes & Improvements
  • General
    • Removed a persistent warning triggered during import keras when using NumPy 2.0 or higher. (#​21949)
  • Backends
    • JAX: Fixed an issue where CUDNN flash attention was broken when using JAX versions greater than 0.6.2. (#​21970)
  • Export & Serialization
    • Resolved a regression in the export pipeline that incorrectly forced batch sizes to be dynamic. The export process now correctly respects static batch sizes when defined. (#​21944)

Full Changelog: keras-team/keras@v3.13.0...v3.13.1

v3.13.0

Compare Source

BREAKING changes

Starting with version 3.13.0, Keras now requires Python 3.11 or higher. Please ensure your environment is updated to Python 3.11+ to install the latest version.

Highlights

LiteRT Export

You can now export Keras models directly to the LiteRT format (formerly TensorFlow Lite) for on-device inference.
This change comes with improvements to input signature handling and export utility documentation. LiteRT export is available only when TensorFlow is installed; the export API and documentation have been updated, and input signature inference has been enhanced for various model types.

Example:

import keras
import numpy as np

# 1. Define a simple model
model = keras.Sequential([
    keras.layers.Input(shape=(10,)),
    keras.layers.Dense(10, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid")
])

# 2. Compile and train (optional, but recommended before export)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(np.random.rand(100, 10), np.random.randint(0, 2, 100), epochs=1)

# 3. Export the model to LiteRT format
model.export("my_model.tflite", format="litert")

print("Model exported successfully to 'my_model.tflite' using LiteRT format.")

GPTQ Quantization
  • Introduced keras.quantizers.QuantizationConfig API that allows for customizable weight and activation quantizers, providing greater flexibility in defining quantization schemes.

  • Introduced a new filters argument to the Model.quantize method, allowing users to specify which layers should be quantized using regex strings, lists of regex strings, or a callable function. This provides fine-grained control over the quantization process.

  • Refactored the GPTQ quantization process to remove heuristic-based model structure detection. Instead, the model's quantization structure can now be explicitly provided via GPTQConfig or by overriding a new Model.get_quantization_layer_structure method, enhancing flexibility and robustness for diverse model architectures.

  • Core layers such as Dense, EinsumDense, Embedding, and ReversibleEmbedding have been updated to accept and utilize the new QuantizationConfig object, enabling fine-grained control over their quantization behavior.

  • Added a new method get_quantization_layer_structure to the Model class, intended for model authors to define the topology required for structure-aware quantization modes like GPTQ.

  • Introduced a new utility function should_quantize_layer to centralize the logic for determining if a layer should be quantized based on the provided filters.

  • Enabled the serialization and deserialization of QuantizationConfig objects within Keras layers, allowing quantized models to be saved and loaded correctly.

  • Modified the AbsMaxQuantizer to allow specifying the quantization axis dynamically during the __call__ method, rather than strictly defining it at initialization.

Example:

  1. Default Quantization (Int8)
    Applies the default AbsMaxQuantizer to both weights and activations.
model.quantize("int8")
  2. Weight-Only Quantization (Int8)
    Disable activation quantization by setting the activation quantizer to None.
from keras.quantizers import Int8QuantizationConfig, AbsMaxQuantizer

config = Int8QuantizationConfig(
    weight_quantizer=AbsMaxQuantizer(axis=0),
    activation_quantizer=None 
)

model.quantize(config=config)
  3. Custom Quantization Parameters
    Customize the value range or other parameters for specific quantizers.
config = Int8QuantizationConfig(
    # Restrict range for symmetric quantization
    weight_quantizer=AbsMaxQuantizer(axis=0, value_range=(-127, 127)),
    activation_quantizer=AbsMaxQuantizer(axis=-1, value_range=(-127, 127))
)

model.quantize(config=config)
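For intuition, the AbsMaxQuantizer used above performs symmetric abs-max quantization: scale by the largest magnitude so values map onto the signed integer grid. A pure-Python numeric sketch of the idea (illustrative, not the Keras implementation):

```python
def absmax_quantize(values, bits=8):
    # Symmetric abs-max quantization: the largest |value| maps to the
    # edge of the signed range (127 for int8); dequantization inverts
    # the scaling.
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in values) / qmax
    quantized = [round(v / scale) for v in values]
    dequantized = [q * scale for q in quantized]
    return quantized, dequantized

q, dq = absmax_quantize([0.4, -1.0, 0.25])  # q → [51, -127, 32]
```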
Adaptive Pooling layers

Added adaptive pooling operations keras.ops.nn.adaptive_average_pool and keras.ops.nn.adaptive_max_pool for 1D, 2D, and 3D inputs. These operations transform inputs of varying spatial dimensions into a fixed target shape defined by output_size by dynamically inferring the required kernel size and stride. Added corresponding layers:

  • keras.layers.AdaptiveAveragePooling1D
  • keras.layers.AdaptiveAveragePooling2D
  • keras.layers.AdaptiveAveragePooling3D
  • keras.layers.AdaptiveMaxPooling1D
  • keras.layers.AdaptiveMaxPooling2D
  • keras.layers.AdaptiveMaxPooling3D
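The dynamic window inference can be pictured with a pure-Python sketch of 1D adaptive average pooling using start/end binning (an illustration; Keras's exact windowing may differ):

```python
import math

def adaptive_avg_pool_1d(x, output_size):
    # Each output position i averages the input window
    # [floor(i*n/out), ceil((i+1)*n/out)), so an input of any length
    # maps to exactly output_size values.
    n = len(x)
    out = []
    for i in range(output_size):
        start = math.floor(i * n / output_size)
        end = math.ceil((i + 1) * n / output_size)
        window = x[start:end]
        out.append(sum(window) / len(window))
    return out

adaptive_avg_pool_1d([1, 2, 3, 4, 5, 6], 3)  # → [1.5, 3.5, 5.5]
```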

New features

  • Add keras.ops.numpy.array_split op, a fundamental building block for tensor parallelism.
  • Add keras.ops.numpy.empty_like op.
  • Add keras.ops.numpy.ldexp op.
  • Add keras.ops.numpy.vander op which constructs a Vandermonde matrix from a 1-D input tensor.
  • Add keras.distribution.get_device_count utility function for distribution API.
  • keras.layers.JaxLayer and keras.layers.FlaxLayer now support the TensorFlow backend in addition to the JAX backend. This allows you to embed flax.linen.Module instances or JAX functions in your model. The TensorFlow support is based on jax2tf.
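The array_split op follows NumPy's semantics: unlike split, it tolerates section counts that don't divide the axis length evenly, which is what makes it useful for sharding a tensor across an arbitrary device count. A pure-Python sketch of those semantics (not the Keras implementation):

```python
def array_split(seq, n):
    # NumPy-style array_split: the first (len % n) sections each get
    # one extra element, so any length divides across n sections.
    k, r = divmod(len(seq), n)
    sections, start = [], 0
    for i in range(n):
        size = k + (1 if i < r else 0)
        sections.append(seq[start:start + size])
        start += size
    return sections

array_split([1, 2, 3, 4, 5], 3)  # → [[1, 2], [3, 4], [5]]
```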

OpenVINO Backend Support:

  • Added numpy.digitize support.
  • Added numpy.diag support.
  • Added numpy.isin support.
  • Added numpy.vdot support.
  • Added numpy.floor_divide support.
  • Added numpy.roll support.
  • Added numpy.multi_hot support.
  • Added numpy.psnr support.
  • Added numpy.empty_like support.

Bug fixes and Improvements

  • NNX Support: Improved compatibility and fixed tests for the NNX library (JAX), ensuring better stability for NNX-based Keras models.
  • MultiHeadAttention: Fixed negative index handling in attention_axes for MultiHeadAttention layers.
  • Softmax: Improved mask handling in the Softmax layer to strengthen numerical robustness, based on an investigation led by Jaswanth Sreeram, who prototyped the solution with contributions from others.
  • PyDataset Support: The Normalization layer's adapt method now supports PyDataset objects, allowing for proper adaptation when using this data type.

TPU Test setup

Configured the TPU testing infrastructure to enforce unit test coverage across the entire codebase. This ensures that both existing logic and all future contributions are validated for functionality and correctness within the TPU environment.

New Contributors

Full Changelog: keras-team/keras@v3.12.0...v3.13.0

v3.12.1

Compare Source

Security Fixes & Hardening

This release introduces critical security hardening for model loading and saving, alongside improvements to the JAX backend metadata handling.

  • Disallow TFSMLayer deserialization in safe_mode (#​22035)

    • Previously, TFSMLayer could load external TensorFlow SavedModels during deserialization without respecting Keras safe_mode. This could allow the execution of attacker-controlled graphs during model invocation.
    • TFSMLayer now enforces safe_mode by default. Deserialization via from_config() will raise a ValueError unless safe_mode=False is explicitly passed or keras.config.enable_unsafe_deserialization() is called.
  • Fix Denial of Service (DoS) in KerasFileEditor (#​21880)

    • Introduces validation for HDF5 dataset metadata to prevent "shape bomb" attacks.
    • Hardens the .keras file editor against malicious metadata that could cause dimension overflows or unbounded memory allocation (unbounded numpy allocation of multi-gigabyte tensors).
  • Block External Links in HDF5 files (#​22057)

    • Keras now explicitly disallows external links within HDF5 files during loading. This prevents potential security risks where a weight file could point to external system datasets.
    • Includes improved verification for H5 Groups and Datasets to ensure they are local and valid.

Saving & Serialization

  • Improved H5IOStore Integrity (#​22057)
    • Refactored H5IOStore and ShardedH5IOStore to remove unused, unverified methods.
    • Fixed key-ordering logic in sharded HDF5 stores to ensure consistent state loading across different environments.

Acknowledgments

Special thanks to the security researchers and contributors who reported these vulnerabilities and helped implement the fixes: @​0xManan, @​HyperPS, and @​hertschuh.

Full Changelog: keras-team/keras@v3.12.0...v3.12.1

v3.12.0: Keras 3.12.0

Compare Source

Highlights

Keras has a new model distillation API!

You now have access to an easy-to-use API for distilling large models into small models while minimizing performance drop on a reference dataset -- compatible with all existing Keras models. You can specify a range of different distillation losses, or create your own losses. The API supports multiple concurrent distillation losses at the same time.

Example:

# Load a model to distill
teacher = ...

# This is the model we want to distill it into
student = ...

# Configure the process
distiller = Distiller(
    teacher=teacher,
    student=student,
    distillation_losses=LogitsDistillation(temperature=3.0),
)
distiller.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

# Train the distilled model
distiller.fit(x_train, y_train, epochs=10)
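Under the hood, logits distillation compares temperature-softened teacher and student distributions; raising the temperature flattens the softmax and exposes more of the teacher's relative class preferences. A pure-Python sketch of the temperature-scaled softmax at its core (illustrative, not the Keras implementation):

```python
import math

def softened_softmax(logits, temperature=1.0):
    # Divide logits by the temperature before a numerically stable
    # softmax; temperature > 1 flattens the resulting distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

softened_softmax([2.0, 1.0, 0.0], temperature=3.0)
```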
Keras supports GPTQ quantization!

GPTQ is now built into the Keras API. GPTQ is a post-training, weights-only quantization method that compresses a model to int4 layer by layer. For each layer, it uses a second-order method to update weights while minimizing the error on a calibration dataset.

Learn how to use it in this guide.

Example:

model = keras_hub.models.Gemma3CausalLM.from_preset("gemma3_1b")
gptq_config = keras.quantizers.GPTQConfig(
    dataset=calibration_dataset,
    tokenizer=model.preprocessor.tokenizer,
    weight_bits=4,
    group_size=128,
    num_samples=256,
    sequence_length=256,
    hessian_damping=0.01,
    symmetric=False,
    activation_order=False,
)
model.quantize("gptq", config=gptq_config)
outputs = model.generate(prompt, max_length=30)

Better support for Grain datasets!
  • Add Grain support to keras.utils.image_dataset_from_directory and keras.utils.text_dataset_from_directory. Specify format="grain" to return a Grain dataset instead of a TF dataset.
  • Make almost all Keras preprocessing layers compatible with Grain datasets.

New features

  • Add keras.layers.ReversibleEmbedding layer: an embedding layer that can also project backwards to the input space. Use it with the reverse argument in call().
  • Add argument opset_version in model.export(). Argument specific to format="onnx"; specifies the ONNX opset version.
  • Add keras.ops.isin op.
  • Add keras.ops.isneginf, keras.ops.isposinf ops.
  • Add keras.ops.isreal op.
  • Add keras.ops.cholesky_inverse op and add upper argument in keras.ops.cholesky.
  • Add keras.ops.image.scale_and_translate op.
  • Add keras.ops.hypot op.
  • Add keras.ops.gcd op.
  • Add keras.ops.kron op.
  • Add keras.ops.logaddexp2 op.
  • Add keras.ops.view op.
  • Add keras.ops.unfold op.
  • Add keras.ops.jvp op.
  • Add keras.ops.trapezoid op.
  • Add support for over 20 new ops with the OpenVINO backend.

Breaking changes

  • Layers StringLookup & IntegerLookup now save vocabulary loaded from file. Previously, when instantiating these layers from a vocabulary filepath, only the filepath would be saved when saving the layer. Now, the entire vocabulary is materialized and saved as part of the .keras archive.

Security fixes

New Contributors

Full Changelog: keras-team/keras@v3.11.0...v3.12.0

v3.11.3: Keras 3.11.3

Compare Source

What's Changed

Full Changelog: keras-team/keras@v3.11.2...v3.11.3

v3.11.2: Keras 3.11.2

Compare Source

What's Changed

New Contributors

Full Changelog: keras-team/keras@v3.11.1...v3.11.2

v3.11.1: Keras 3.11.1

Compare Source

What's Changed

Full Changelog: keras-team/keras@v3.11.0...v3.11.1

v3.11.0: Keras 3.11.0

Compare Source

What's Changed
  • Add int4 quantization support.
  • Support Grain data loaders in fit()/evaluate()/predict().
  • Add keras.ops.kaiser function.
  • Add keras.ops.hanning function.
  • Add keras.ops.cbrt function.
  • Add keras.ops.deg2rad function.
  • Add keras.ops.layer_normalization function to leverage backend-specific performance optimizations.
  • Various bug fixes and performance optimizations.
Backend-specific changes
JAX backend
  • Support NNX library. It is now possible to use Keras layers and models as NNX modules.
  • Support shape -1 for slice op.
TensorFlow backend
  • Add support for multiple dynamic dimensions in Flatten layer.
OpenVINO backend
  • Add support for over 30 new backend ops.
New Contributors

Full Changelog: keras-team/keras@v3.10.0...v3.11.0

v3.10.0: Keras 3.10.0

Compare Source

New features

  • Add support for weight sharding for saving very large models with model.save(). It is controlled via the max_shard_size argument. Specifying this argument will split your Keras model weight file into chunks of this size at most. Use load_model() to reload the sharded files.
  • Add optimizer keras.optimizers.Muon
  • Add image preprocessing layer keras.layers.RandomElasticTransform
  • Add loss function keras.losses.CategoricalGeneralizedCrossEntropy (with functional version keras.losses.categorical_generalized_cross_entropy)
  • Add axis argument to SparseCategoricalCrossentropy
  • Add lora_alpha to all LoRA-enabled layers. If set, this parameter scales the low-rank adaptation delta during the forward pass.
  • Add activation function keras.activations.sparse_sigmoid
  • Add op keras.ops.image.elastic_transform
  • Add op keras.ops.angle
  • Add op keras.ops.bartlett
  • Add op keras.ops.blackman
  • Add op keras.ops.hamming
  • Add ops keras.ops.view_as_complex, keras.ops.view_as_real
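Conceptually, the max_shard_size option above splits the weight payload into size-capped chunks that load_model() reassembles; a pure-Python sketch of the chunking idea (not the actual .keras sharding logic):

```python
def shard_bytes(blob, max_shard_size):
    # Split a byte blob into consecutive chunks of at most
    # max_shard_size bytes; concatenating them restores the original.
    return [blob[i:i + max_shard_size]
            for i in range(0, len(blob), max_shard_size)]

shard_bytes(b"x" * 10, 4)  # → [b'xxxx', b'xxxx', b'xx']
```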
PyTorch backend
  • Add cuDNN support for LSTM with the PyTorch backend
TensorFlow backend
  • Add tf.RaggedTensor support to Embedding layer
  • Add variable-level support for synchronization argument
OpenVINO backend
  • Add support for over 50 additional Keras ops in the OpenVINO inference backend!

New Contributors

Full Changelog: keras-team/keras@v3.9.0...v3.10.0

v3.9.2: Keras 3.9.2

Compare Source

What's Changed
  • Fix Remat error when called with a model.

Full Changelog: keras-team/keras@v3.9.1...v3.9.2

v3.9.1: Keras 3.9.1

Compare Source

What's Changed
  • Fix flash attention TPU error
  • Fix incorrect argument in JAX flash attention.

Full Changelog: keras-team/keras@v3.9.0...v3.9.1

v3.9.0: Keras 3.9.0

Compare Source

New features
  • Add new Keras rematerialization API: keras.RematScope and keras.remat. It can be used to turn on rematerialization for certain layers in a fine-grained manner, e.g. only for layers larger than a certain size, for a specific set of layers, or only for activations.
  • Increase op coverage for OpenVINO backend.
  • New operations:
    • keras.ops.rot90
    • keras.ops.rearrange (Einops-style)
    • keras.ops.signbit
    • keras.ops.polar
    • keras.ops.image.perspective_transform
    • keras.ops.image.gaussian_blur
  • New layers:
    • keras.layers.RMSNormalization
    • keras.layers.AugMix
    • keras.layers.CutMix
    • keras.layers.RandomInvert
    • keras.layers.RandomErasing
    • keras.layers.RandomGaussianBlur
    • keras.layers.RandomPerspective
  • Minor additions:
    • Add support for dtype argument to JaxLayer and FlaxLayer layers
    • Add boolean input support to BinaryAccuracy metric
    • Add antialias argument to keras.layers.Resizing layer.
  • Security fix: disallow object pickling in saved npz model files (numpy format). Thanks to Peng Zhou for reporting the vulnerability.
New Contributors

Full Changelog: keras-team/keras@v3.8.0...v3.9.0

v3.8.0: Keras 3.8.0

Compare Source

New: OpenVINO backend

OpenVINO is now available as an inference-only Keras backend. You can start using it by setting the backend field to "openvino" in your keras.json config file.

OpenVINO is a deep learning inference-only framework tailored for CPU (x86, ARM), certain GPUs (OpenCL capable, integrated and discrete) and certain AI accelerators (Intel NPU).

Because OpenVINO does not support gradients, you cannot use it for training (e.g. model.fit()) -- only inference. You can train your models with the JAX, TensorFlow, or PyTorch backends and, once trained, reload them with the OpenVINO backend for inference on a target device supported by OpenVINO.
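Besides editing keras.json, the backend can also be selected with the KERAS_BACKEND environment variable, which must be set before keras is first imported (the import is commented out below, since OpenVINO may not be installed in your environment):

```python
import os

# KERAS_BACKEND overrides the "backend" field in keras.json; it must
# be set before the first `import keras`.
os.environ["KERAS_BACKEND"] = "openvino"

# import keras  # would now initialize Keras with the OpenVINO backend
```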

New: ONNX model export

You can now export your Keras models to the ONNX format from the JAX, TensorFlow, and PyTorch backends.

Just pass format="onnx" in your model.export() call:

# Export the model as an ONNX artifact
model.export("path/to/location", format="onnx")

# Load the artifact in a different process/environment
ort_session = onnxruntime.InferenceSession("path/to/location")

# Run inference
ort_inputs = {
    k.name: v for k, v in zip(ort_session.get_inputs(), input_data)
}
predictions = ort_session.run(None, ort_inputs)

New: Scikit-Learn API compatibility interface

It's now possible to easily integrate Keras models into Scikit-Learn pipelines! The following wrapper classes are available:

  • keras.wrappers.SKLearnClassifier: implements the sklearn Classifier API
  • keras.wrappers.SKLearnRegressor: implements the sklearn Regressor API
  • keras.wrappers.SKLearnTransformer: implements the sklearn Transformer API
Other feature additions
  • Add new ops:
    • Add keras.ops.diagflat
    • Add keras.ops.unravel_index
  • Add new activations:
    • Add sparse_plus activation
    • Add sparsemax activation
  • Add new image augmentation and preprocessing layers:
    • Add keras.layers.RandAugment
    • Add keras.layers.Equalization
    • Add keras.layers.MixUp
    • Add keras.layers.RandomHue
    • Add keras.layers.RandomGrayscale
    • Add keras.layers.RandomSaturation
    • Add keras.layers.RandomColorJitter
    • Add keras.layers.RandomColorDegeneration
    • Add keras.layers.RandomSharpness
    • Add keras.layers.RandomShear
  • Add argument axis to tversky loss
JAX specific changes
  • Add support for JAX named scope
TensorFlow specific changes
  • Make keras.random.shuffle XLA compilable
PyTorch specific changes
  • Add support for model.export() and keras.export.ExportArchive with the PyTorch backend, supporting both the TF SavedModel format and the ONNX format.
New Contributors

Full Changelog: keras-team/keras@v3.7.0...v3.8.0

v3.7.0: Keras 3.7.0

Compare Source

API changes
  • Add flash_attention argument to keras.ops.dot_product_attention and to keras.layers.MultiHeadAttention.
  • Add keras.layers.STFTSpectrogram layer (to extract STFT spectrograms from inputs as a preprocessing step) as well as its initializer keras.initializers.STFTInitializer.
  • Add celu, glu, log_sigmoid, hard_tanh, hard_shrink, squareplus activations.
  • Add keras.losses.Circle loss.
  • Add image visualization utilities keras.visualization.draw_bounding_boxes, keras.visualization.draw_segmentation_masks, keras.visualization.plot_image_gallery, keras.visualization.plot_segmentation_mask_gallery.
  • Add double_checkpoint argument to BackupAndRestore to save a fallback checkpoint in case the first checkpoint gets corrupted.
  • Add bounding box preprocessing support to image augmentation layers CenterCrop, RandomFlip, RandomZoom, RandomTranslation, RandomCrop.
  • Add keras.ops.exp2, keras.ops.inner operations.
Performance improvements
  • JAX backend: add native Flash Attention support for GPU (via cuDNN) and TPU (via a Pallas kernel). Flash Attention is now used automatically when the hardware supports it.
  • PyTorch backend: add native Flash Attention support for GPU (via cuDNN). It is currently opt-in.
  • TensorFlow backend: enable more kernel fusion via bias_add.
  • PyTorch backend: add support for Intel XPU devices.
New Contributors

Full Changelog: keras-team/keras@v3.6.0...v3.7.0

v3.6.0: Keras 3.6.0

Compare Source

Highlights
  • New file editor utility: keras.saving.KerasFileEditor. Use it to inspect, diff, modify and resave Keras weights files. See basic workflow here.
  • New keras.utils.Config class for managing experiment config parameters.
BREAKING changes
  • When using keras.utils.get_file, with extract=True or untar=True, the return value will be the path of the extracted directory, rather than the path of the archive.
Other changes and additions
  • Logging is now asynchronous in fit(), evaluate(), predict(). This enables 100% compact stacking of train_step calls on accelerators (e.g. when running small models on TPU).
    • If you are using custom callbacks that rely on on_batch_end, this will disable async logging. You can force it back by adding self.async_safe = True to your callbacks. Note that the TensorBoard callback isn't considered async safe by default. Default callbacks like the progress bar are async safe.
  • Added keras.saving.KerasFileEditor utility to inspect, diff, modify and resave Keras weights file.
  • Added keras.utils.Config class. It behaves like a dictionary, with a few nice features:
    • All entries are accessible and settable as attributes, in addition to dict-style (e.g. config.foo = 2 or config["foo"] are both valid)
    • You can easily serialize it to JSON via config.to_json().
    • You can easily freeze it, preventing future changes, via config.freeze().
  • Added bitwise numpy ops:
    • bitwise_and
    • bitwise_invert
    • bitwise_left_shift
    • bitwise_not
    • bitwise_or
    • bitwise_right_shift
    • bitwise_xor
  • Added math op keras.ops.logdet.
  • Added numpy op keras.ops.trunc.
  • Added keras.ops.dot_product_attention.
  • Added keras.ops.histogram.
  • Allow infinite PyDataset instances to use multithreading.
  • Added argument verbose in keras.saving.ExportArchive.write_out() method for exporting TF SavedModel.
  • Added epsilon argument in keras.ops.normalize.
  • Added Model.get_state_tree() method for retrieving a nested dict mapping variable paths to variable values (either as numpy arrays or, by default, backend tensors). This is useful for rolling out custom JAX training loops.
  • Added image augmentation/preprocessing layers keras.layers.AutoContrast, keras.layers.Solarization.
  • Added keras.layers.Pipeline class, to apply a sequence of layers to an input. This class is useful to build a preprocessing pipeline. Compared to a Sequential model, Pipeline features a few important differences:
    • It's not a Model, just a plain layer.
    • When the layers in the pipeline are compatible with tf.data, the pipeline will also remain tf.data compatible, independently of the backend you use.
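The keras.utils.Config behavior described above (dict-style and attribute-style access plus JSON serialization) can be sketched in a few lines of plain Python; this is an illustration of the described behavior, not the actual keras.utils.Config implementation (freeze() omitted for brevity):

```python
import json

class Config(dict):
    # Entries are reachable both as attributes and as dict keys.
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError as exc:
            raise AttributeError(key) from exc

    def __setattr__(self, key, value):
        self[key] = value

    def to_json(self):
        return json.dumps(self)

config = Config()
config.foo = 2        # attribute-style set
config["foo"]         # dict-style get → 2
config.to_json()      # → '{"foo": 2}'
```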
New Contributors

Full Changelog: keras-team/keras@v3.5.0...v3.6.0

v3.5.0: Keras 3.5.0

Compare Source

What's Changed
  • Add integration with the Hugging Face Hub. You can now save models to Hugging Face Hub directly from keras.Model.save() and load .keras models directly from Hugging Face Hub with keras.saving.load_model().
  • Ensure compatibility with NumPy 2.0.
  • Add keras.optimizers.Lamb optimizer.
  • Improve keras.distribution API support for very large models.
  • Add keras.ops.associative_scan op.
  • Add keras.ops.searchsorted op.
  • Add keras.utils.PyDataset.on_epoch_begin() method.
  • Add data_format argument to keras.layers.ZeroPadding1D layer.
  • Bug fixes and performance improvements.

Full Changelog: keras-team/keras@v3.4.1...v3.5.0

v3.4.1: Keras 3.4.1

Compare Source

This is a minor bugfix release.

v3.4.0: Keras 3.4.0

Compare Source

Highlights
  • Add support for arbitrary, deeply nested input/output structures in Functional models (e.g. dicts of dicts of lists of inputs or outputs...)
  • Add support for optional Functional inputs.
  • Introduce keras.dtype_policies.DTypePolicyMap for easy configuration of dtype policies of nested sublayers of a subclassed layer/model.
  • New ops:
    • keras.ops.argpartition
    • keras.ops.scan
    • keras.ops.lstsq
    • keras.ops.switch
    • keras.ops.dtype
    • keras.ops.map
    • keras.ops.image.rgb_to_hsv
    • keras.ops.image.hsv_to_rgb
What's changed
  • Add support for float8 inference for Dense and EinsumDense layers.
  • Add custom name argument in all Keras Applications models.
  • Add axis argument in keras.losses.Dice.
  • Enable keras.utils.FeatureSpace to be used in a tf.data pipeline even when the backend isn't TensorFlow.
  • StringLookup layer can now take tf.SparseTensor as input.
  • Metric.variables is now recursive.
  • Add training argument to Model.compute_loss().
  • Add dtype argument to all losses.
  • keras.utils.split_dataset now supports nested structures in dataset.
  • Bugs fixes and performance improvements.

Full Changelog: keras-team/keras@v3.3.3...v3.4.0

v3.3.3: Keras 3.3.3

Compare Source

This is a minor bugfix release.

v3.3.2: Keras 3.3.2

Compare Source

This is a simple fix release that re-surfaces legacy Keras 2 APIs that aren't part of Keras package proper, but that are still featured in tf.keras. No other content has changed.

v3.3.1: Keras 3.3.1

Compare Source

This is a simple fix release that moves the legacy _tf_keras API directory to the root of the Keras pip package. This is done in order to preserve import paths like from tensorflow.keras import layers without making any changes to the TensorFlow API files.

No other content has changed.

v3.3.0: Keras 3.3.0

Compare Source

What's Changed
  • Introduce float8 training.
  • Add LoRA to ConvND layers.
  • Add keras.ops.ctc_decode for JAX and TensorFlow.
  • Add keras.ops.vectorize, keras.ops.select.
  • Add keras.ops.image.rgb_to_grayscale.
  • Add keras.losses.Tversky loss.
  • Add full bincount and digitize sparse support.
  • Models and layers now return owned metrics recursively.
  • Add pickling support for Keras models. Note that pickling is not recommended, prefer using Keras saving APIs.
  • Bug fixes and performance improvements.

In addition, the codebase structure has evolved:

  • All source files are now in keras/src/.
  • All API files are now in keras/api/.
  • The codebase structure stays unchanged when building the Keras pip package. This means you can pip install Keras directly from the GitHub sources.
New Contributors

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch from 33964c0 to 8e3d120 Compare December 6, 2023 21:35
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch from 8e3d120 to b85553d Compare December 21, 2023 19:52
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch 2 times, most recently from dd5fd91 to 1cfe049 Compare January 20, 2024 21:50
@mura-kisukirurira mura-kisukirurira self-assigned this Feb 9, 2024
@mura-kisukirurira mura-kisukirurira self-requested a review February 9, 2024 04:36
ozaki0150
ozaki0150 previously approved these changes Feb 9, 2024
@renovate renovate bot dismissed stale reviews from ozaki0150 and mura-kisukirurira via a3d4b6c February 15, 2024 00:49
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch from 1cfe049 to a3d4b6c Compare February 15, 2024 00:49
ozaki0150
ozaki0150 previously approved these changes Feb 15, 2024
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch 2 times, most recently from 6aa1910 to 3f29ed5 Compare March 19, 2024 20:10
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch 2 times, most recently from aeb8dc0 to ed18add Compare April 10, 2024 23:23
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch 3 times, most recently from 064fdb2 to 4bd05b1 Compare April 27, 2024 01:31
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch 2 times, most recently from 8afb5de to a9aedf1 Compare June 26, 2024 16:48
ozaki0150
ozaki0150 previously approved these changes Jul 17, 2024
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch from a9aedf1 to ce7fe8d Compare August 12, 2024 22:50
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch 2 times, most recently from c2855bd to 2a09a25 Compare October 3, 2024 20:02
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch from 2a09a25 to e1961f5 Compare November 7, 2024 04:37
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch from e1961f5 to a497bb1 Compare November 26, 2024 19:55
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch from a497bb1 to a8c1e23 Compare January 7, 2025 20:13
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch from a8c1e23 to 5f8b43c Compare March 5, 2025 02:32
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch 2 times, most recently from de52122 to 2eefbdc Compare April 2, 2025 23:10
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch from 2eefbdc to 4c2d3bc Compare May 20, 2025 00:03
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch 2 times, most recently from 50b3d16 to 8b3d108 Compare August 12, 2025 02:39
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch from 8b3d108 to 18fa0b8 Compare August 21, 2025 22:43
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch from 18fa0b8 to 1819890 Compare October 27, 2025 21:04
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch from 1819890 to ac439ea Compare December 18, 2025 01:00
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch from ac439ea to e4dfc31 Compare January 14, 2026 21:35
@renovate renovate bot force-pushed the renovate/major-tensorflow-group branch from e4dfc31 to 214a5fa Compare January 30, 2026 01:28
