Failed

#797 (Dec 10, 2020, 11:08:00 AM)

Took 11 min on powerci-docker1

Started by timer

Revision: 9ca469ee6d2d716eb4c03fdee4d5d21d40b9a775
Repository: https://github.com/tensorflow/tensorflow.git
  • refs/remotes/origin/master
Changes
  1. Add GPU kernel for SparseApplyFtrl (details)
  2. Add comment to SparseApplyFtrl about scalar checks (details)
  3. [ROCm] Re-enabling unit-tests that are now passing on ROCm platform (details)
  4. Add/fix comments in SparseApplyFtrl kernel (details)
  5. TFLu: Fix bug in PPD op (details)
  6. Build for TARGET_ARCH=fusion_f1 via reference kernel fallbacks. (details)
  7. Downgrade Pip<20.3 and rebuild Dockerfiles (details)
  8. Add casts to std::min/std::max when comparing mismatched types (details)
  9. clang-format + fix missing header that caused a build error after clang-formatting. (details)
  10. Add bitwise_and operation definition to kernel generator. (details)
  11. Add StringLength to Flex delegate (details)
  12. Add unranked kernel definition for bitwise or operation. (details)
  13. [XLA:GPU] Update and improve documentation of the FusionMerger HLO pass. (details)
  14. Add BitwiseXor unranked kernel definition. (details)
  15. [XLA/GPU] Migrate all unnested elementwise emitters. (details)
  16. Handle return values in MergeControlFlow pass. (details)
  17. Rollback changelist 338246477 (details)
  18. [XLA/GPU] Add a debug flag to override compilation parallelism. (details)
  19. Add LogicalAnd and LogicalOr kernel definitions. (details)
  20. Change the TF Lite Java API to use the shim layer. (details)
  21. [XLA/GPU] Migrate all unnested elementwise emitters. (details)
  22. [xprof:gpu] Make GPU occupancy percentage values in the range of 0 to 100 instead of 0.0 to 1.0. (details)
  23. Add C++ loop memory leak test for MemoryChecker (details)
  24. Correct benchmark parameters for inputs. (details)
  25. Polish TensorRT static linking a little. (details)
  26. TFLGpuDelegateSetCommandEncoder replaced by TFLGpuDelegateSetCommandBuffer. (details)
  27. Generate and test kernels for Greater(Equal), Less(Equal) and NotEqual (details)
  28. Support inferring dynamism of reduce that shows up multiple times in a kSelect operand list. (details)
  29. Apply correct quantization schemes for LSTM inputs (details)
  30. Update Strip strings to also clear signatures in the flatbuffer. (details)
  31. Add a no-copy SparseToDense method. (details)
  32. Integrate LLVM at llvm/llvm-project@6883042528d0 (details)
  33. Change the TF Lite Java API to use the shim layer. (details)
  34. Add the Apache header to new files. (details)
  35. Support sparse CNN inference by default through XNNPACK delegate (details)
  36. Clean up quantize test. Adjust scales to work correctly with xtensa kernels. (details)
  37. Security advisories for 2.4 releases. (details)
  38. Look for an existing bfloat16 type and prefer its use. (details)
  39. [XLA/GPU] Migrate all unnested elementwise emitters. (details)
  40. Extract a PjRtBuffer interface. (details)
  41. Add custom builder for tf_device.replicate that accepts a DictionaryAttr instead of a llvm::SmallDenseMap<StringRef, llvm::SmallVector<StringRef, 4>> for devices, and expose WrapsSingleOp, similar to tf_device.launch. (details)
  42. [tf.data] Fix some comments in the gradient descent algorithm. (details)
  43. Create a fuzzer for `AreAttrValuesEqual` and `FastAreAttrValuesEqual`. (details)
  44. Add inliner pass to `CompileSerializedMlirToXlaHlo`. (details)
  45. [XLA] NFC: Remove unnecessary `ParticipantImplOutput` type from `Rendezvous`. (details)
  46. Support dynamic sample size in categorical op. (details)
  47. Mark tfl.EqualOp with NoSideEffect (details)
  48. Using MetalSpatialTensors for tensors with preallocated ids. (details)
  49. Correct implementation of TFLGpuDelegateSetCommandBuffer. (details)
  50. [XLA/GPU] Migrate all unnested elementwise emitters. (details)
  51. Switch from explicit argument for inlining functions post TF -> HLO legalization to checking device type (NFC). (details)
  52. Remove changes made to support TFRT-based OpKernel classes in Conv3d kernel. (details)
  53. Fix backward reference error in tflite_custom_android_library (details)
  54. Add lowering from tf.RiscAdd op to HLO_Add. (details)
  55. Temporarily disable a failing test. (details)
  56. Change data accessor to return const ref to avoid copy. (details)
  57. Add hybrid BatchMatMul kernel that supports legacy symmetric_quantize_inputs. (details)
  58. [XLA:GPU] Eliminate tuple population from batch norm thunks (details)
  59. Small doc-string fixes for tf.constant. (details)
  60. Add CompileGraphToXlaBuilder to compile_mlir_util.h. Refactor internal functions to support both the new CompileGraphToXlaBuilder and the existing CompileGraphToHlo. (details)
  61. [tf.data] Turn off the `map_parallelization` experiment for now. (details)
  62. Provide a way to keep temporary tensors (details)
  63. Remove unused include. (details)
  64. [XLA] [Docs] Minor update to jit_compile=True docs (details)
  65. [NFC] [TF2XLA] Reduce log spam: no need to say every time that we are not creating XLA devices (details)
  66. [XLA] [Docs] Further clarify nesting jit_compile behavior (details)
  67. Add `GetLastUserFrame` functionality to tf_stack (details)
  68. [TF2XLA] Show Python stack traces for failed XLA compilations (details)
  69. Add virtual memory management function wrappers to GpuDriver. (details)
  70. [XLA/GPU] Migrate all unnested elementwise emitters. (details)
  71. [HLO] Add a pattern for HLO ConstOp to HLO -> Linalg conversion. (details)
  72. Use OpState::operator->() to get to member functions in Operation so we can remove the corresponding methods from OpState. (details)
  73. [XLA/GPU] Migrate all unnested elementwise emitters. (details)
  74. Print node name in GatherNdSlice error msg. (details)
  75. Update GraphDef version to 611. (details)
  76. compat: Update forward compatibility horizon to 2020-12-10 (details)
  77. Bug fix for binary op kernel_gen testing (details)
  78. Add MaxUnpooling2D custom op to TFLite (details)
  79. Use better test names for gpu_binary_ops_test. (details)
  80. Add a config modifier hook to xla::RunAndCompare for internal use. (details)
  81. [XLA:GPU] Document the GpuMultiOutputFusion in its header file. (details)