In the 5G era, various industries (service providers, enterprises, OTTs, and the public sector) are pursuing open innovation based on open source in many areas. While some 5G mobile software vendors are implementing the 5G UPF with FPGAs on OpenShift/Kubernetes using the Device Plugin, a new networking start-up, Kaloom, announced a Cloud Edge switch fabric that can integrate the UPF into a P4-enabled Software Defined Fabric (SDF) connected to the OpenShift container platform. Against this backdrop of competing ideas, this session introduced the latest SDF trends in OpenShift-native infrastructure and discussed the future of the data plane and the 5G UPF.
DLLAB Engineer Days: Windows ML x ONNX in Practice as an Inference Environment - Daiyu Hatakeyama
Windows ML is an option you cannot leave out as one way to dramatically simplify building an inference environment, and it works with your models via ONNX. In this session we take a model built individually end to end, convert it to ONNX, and then hack it into a Windows ML application, using existing sample code as the starting material.
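As a rough illustration of the "convert your own model to ONNX" step mentioned above, here is a minimal sketch assuming a PyTorch model; the model architecture, input shape, file name, and opset level are illustrative assumptions and are not taken from the session material. The resulting .onnx file is what a Windows ML application would then load.

```python
# Minimal sketch: exporting a (hypothetical) trained PyTorch model to ONNX.
# Model, input shape, file name, and opset are assumptions for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in model
model.eval()

dummy_input = torch.randn(1, 1, 28, 28)  # example input used to trace the graph
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",            # file a Windows ML app would later load
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
```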
NTT Communications has adopted Azure Stack Hub with GPU ahead of others and is evaluating it. In this material, speaking from the standpoint of an actual user and including demos, we present use cases for Azure Stack Hub with GPU and share performance comparison results against other vendors' clouds, including GPU benchmarks.
This document discusses 5G and multi-access edge computing (MEC). The key points are: 1) 5G can achieve latency of 100ms while 4G is 300ms, and 5G bandwidth is 20Gbps compared to 4G's 1.29Gbps; 2) MEC deployed close to users on 5G can achieve even lower latency of under 10ms; 3) MEC integrated with 5G can enable new applications for IoT, VR/AR with high speed and low latency.
NTT Docomo's Challenge looking ahead the world of 5G × OpenStack - OpenStack最... - VirtualTech Japan Inc.
Title: NTT Docomo's Challenge looking ahead the world of 5G × OpenStack
Agenda:
- Current Challenge
-- DOCOMO Cloud Platform
-- BizDevOps
- Challenge for the future
-- DOCOMO 5G Open Cloud
-- Next Challenge
Here are the key points from the AT&T presentation on their "Network AI" framework:
- AT&T is developing an open source framework called "Network AI" to drive their software-defined network transformation.
- The goal is to apply AI/machine learning techniques to continuously optimize their network performance. This will be done by collecting massive amounts of network data and using it to train ML models.
- As part of this effort, AT&T is contributing several open source projects to the Linux Foundation like Airship, Akraino, and Acumos. Airship provides tools for deploying OpenStack and Kubernetes on the edge, while Akraino is an edge computing framework. Acumos allows for developing and sharing machine learning models and applications through a common marketplace.
Juju is a tool for deploying applications on public clouds, private clouds, and bare metal servers. It uses models to deploy applications across machines, with each model representing a separate environment. Juju charms define how to deploy and configure applications, and bundles define full multi-machine application topologies to deploy with Juju.
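Purely as an illustration of the model-and-charm workflow described above (not taken from the deck itself), the following is a minimal sketch that drives Juju from Python via the python-libjuju library; the charm name and the assumption that a controller is already bootstrapped and a model selected are mine, and the presentation would typically use the juju CLI instead.

```python
# Minimal sketch, assuming python-libjuju is installed, a controller is
# bootstrapped, and a model is currently selected; "ubuntu" is just an
# example charm name.
import asyncio
from juju.model import Model

async def main():
    model = Model()
    await model.connect()             # attach to the currently selected model
    await model.deploy("ubuntu")      # deploy a charm into that model
    print(model.applications.keys())  # applications now present in the model
    await model.disconnect()

asyncio.run(main())
```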
This document discusses using Juju and Kubernetes to deploy containerized applications on GPU-enabled infrastructure. It provides YAML examples for creating Kubernetes pods that utilize NVIDIA GPU resources and deploying Chainer and TensorFlow containers with GPU support. Commands are given for interacting with the Kubernetes cluster through kubectl to view nodes, create and delete pods, and execute commands on pods.
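The document above refers to raw YAML manifests and kubectl commands; as a rough, hedged equivalent, the sketch below requests a single NVIDIA GPU for a pod through the official Kubernetes Python client. The pod name, container image, and namespace are illustrative assumptions, not values from the original slides.

```python
# Minimal sketch using the official Kubernetes Python client; pod name,
# container image, and namespace are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # reuse the local kubeconfig, as kubectl does

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvidia/cuda:12.2.0-base-ubuntu22.04",
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # ask the scheduler for one GPU
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```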
This study aims to develop an interactive idea-generation support system that enables users to consider the potential side effects of realizing new ideas.
In idea generation, confirmation bias often leads to an excessive focus on "convenience," which can result in the oversight of unintended consequences, referred to as the "side effects of convenience."
To address this, we explored methods to alleviate user biases and expand perspectives through system-supported dialogue, facilitating a broader consideration of potential side effects.
The proposed system employs a stepwise idea-generation process supported by large language models (LLMs), enabling users to refine their ideas interactively.
By dividing the ideation process into distinct stages, the system mitigates biases at each stage while promoting the concretization of ideas and the identification of side effects through visually supported dialogue.
Preliminary evaluation suggests that engaging with the proposed system fosters awareness of diverse perspectives on potential side effects and facilitates the generation of ideas that proactively address these issues.
Paper introduction: "Amodal Completion via Progressive Mixed Context Diffusion", "Amodal Insta... - Toru Tamaki
Katherine Xu, Lingzhi Zhang, and Jianbo Shi, "Amodal Completion via Progressive Mixed Context Diffusion," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
https://openaccess.thecvf.com/content/CVPR2024/html/Xu_Amodal_Completion_via_Progressive_Mixed_Context_Diffusion_CVPR_2024_paper.html
Minh Tran, Khoa Vo, Tri Nguyen, and Ngan Le, "Amodal Instance Segmentation with Diffusion Shape Prior Estimation," ACCV 2024.
https://uark-aicv.github.io/AISDiff/