
LoCoML: A Framework for Real-World ML Inference Pipelines

Kritin Maddireddy†, Santhosh Kotekal Methukula†, Chandrasekar Sridhar†, Karthik Vaidhyanathan†
Software Engineering Research Center, IIIT Hyderabad, India
[email protected], [email protected], [email protected], [email protected]

†These authors contributed equally to this work.

arXiv:2501.14165v1 [cs.SE] 24 Jan 2025

Abstract—The widespread adoption of machine learning (ML) has brought forth diverse models with varying architectures and data requirements, introducing new challenges in integrating these systems into real-world applications. Traditional solutions often struggle to manage the complexities of connecting heterogeneous models, especially when dealing with varied technical specifications. These limitations are amplified in large-scale, collaborative projects where stakeholders contribute models with different technical specifications. To address these challenges, we developed LoCoML, a low-code framework designed to simplify the integration of diverse ML models within the context of the Bhashini Project, a large-scale initiative aimed at integrating AI-driven language technologies such as automatic speech recognition, machine translation, text-to-speech, and optical character recognition to support seamless communication across more than 20 languages. Initial evaluations show that LoCoML adds only a small amount of computational load, making it efficient and effective for large-scale ML integration. Our practical insights show that a low-code approach can be a practical solution for connecting multiple ML models in a collaborative environment.

Index Terms—Low Code for ML systems, Inference Pipelines, Low code Pipelines, MDE4ML

Fig. 1: Overview of LoCoML Framework
I. INTRODUCTION

Integrating machine learning (ML) systems into complex, real-world applications brings engineering challenges that go beyond those of traditional software engineering [1], [2]. ML-based systems require continuous data management, frequent model updates, and robust workflows to link various ML components, which are often sourced from different providers with varying technical compatibility [3]–[6]. These challenges are especially significant in large-scale, collaborative projects. We were faced with one such challenge in the Bhashini Project (https://bhashini.gov.in/), which had multiple academic and industrial partners contribute models with distinct architectures and formats towards building a nationwide AI platform. Ensuring that different models work smoothly as part of a unified, high-quality system demands considerable engineering effort [7]–[9].

While some ML platforms provide pipeline management and model deployment tools, they often lack the flexibility required for projects with the scale and diversity of the Bhashini Project. Many existing platforms also require high levels of coding expertise, creating a barrier for teams with varied technical backgrounds [9]. This restricts accessibility and slows development, as users must rely on expert developers for even minor adjustments. In the Bhashini Project, the need to coordinate multiple, heterogeneous models to achieve accurate, aligned inferences has highlighted gaps in current frameworks, necessitating a customized solution. Many existing solutions impose limitations in compatibility and scalability, making it challenging to support workflows where independently developed models must work together within a single pipeline [5], [6], [10].

The need for such a flexible, accessible solution is particularly evident in workflows involving a series of interdependent models, such as speech-to-text processing. In this domain, one model might transcribe speech to text, another handles language translation, and further models manage additional language-specific nuances. Integrating these models into a single, reliable pipeline that ensures high-quality output at each stage is a complex engineering task, further complicated by the lack of frameworks that support seamless integration across diverse models with specialized functions. To address these real-world demands, LoCoML was developed as a low-code ML engineering framework, providing flexibility and modularity in creating and managing ML inference pipelines. To address the challenges of managing and orchestrating ML models in complex workflows, LoCoML (as shown in Figure 1) is organized around two main components: the Model Hub and the Pipeline Orchestrator. Together, these parts handle everything from storing and retrieving models to executing pipelines, with different user roles involved to keep things running smoothly. Inspired by model-driven engineering (MDE) [11] practices and supported by low-code principles, LoCoML abstracts technical complexities, allowing users to focus on the application logic rather than the underlying engineering details. Additionally, the framework's low-code design invites users with diverse technical backgrounds to contribute to ML workflows, improving accessibility and reducing the time and expertise required to develop robust ML systems [4], [9], [12].
This paper presents LoCoML's role within the Bhashini Project, demonstrating practical solutions for the complex requirements of large-scale, multi-component ML systems and providing insights that can inform the development of similar applications. Through this experience, we aim to share actionable strategies for addressing complex ML engineering challenges and illustrate the effectiveness of MDE and low-code approaches in demanding environments. The remainder of the paper is structured as follows. Section II provides background on the Bhashini Project and explains its role as a case study. Section III describes the LoCoML framework and its core components. Section IV presents the preliminary results achieved with the framework. A review of related work is in Section V, and Section VI concludes with a summary and future research directions.

II. BHASHINI PROJECT

The Bhashini Project is a large-scale initiative focused on breaking down language barriers by enabling digital services in multiple languages. The project integrates various AI-driven language technologies, including Text-to-Speech (TTS), Automatic Speech Recognition (ASR), Machine Translation (MT), and Optical Character Recognition (OCR). By combining these tools, the Bhashini Project facilitates seamless communication across more than 20 languages, providing a unified platform that allows people to access and interact with content in their preferred language. A range of stakeholders support this initiative, contributing resources, expertise, and data to expand its capabilities. Small and medium enterprises (SMEs) and private organizations with substantial digital reach offer technical support and data to enhance the project's language resources. Additionally, local language organizations and individual users contribute through a crowdsourcing platform, enriching the language data and making the platform more representative. Together, these efforts create a collaborative system that addresses the diverse linguistic needs of the public and supports the integration of advanced language models into real-world applications.

The diversity and scale of the Bhashini Project created unique challenges in integrating multiple AI-driven language models into a single cohesive system. The need to coordinate between different model architectures, manage dynamic data flows, and ensure high-quality results across various languages required a flexible and robust framework that could adapt to evolving requirements. Motivated by these challenges, we developed LoCoML to streamline the integration and management of these diverse models. LoCoML's low-code design and modular approach address the complexity of combining technologies like TTS, ASR, MT, and OCR, making it possible for the Bhashini Project to deliver consistent, reliable, and scalable language services. This framework enables seamless collaboration across stakeholders, allowing them to contribute and refine models effectively while enhancing the platform's accessibility and usability for a broader audience.
III. LOCOML APPROACH

As depicted in Figure 1, the LoCoML system comprises two primary subsystems: the Model Hub and the Pipeline Orchestrator. Within this system, we have identified three distinct user roles. The first is the Model Developer, responsible for creating the models utilized within the pipelines, which are stored in and retrieved from the Model Hub. The other two roles, the Pipeline Designer and the Pipeline User, engage directly with the Pipeline Orchestrator.

A. The Model Hub

Fig. 2: The Model Hub

The Model Hub supplies the system with all the necessary ML models. The Model Developer uses this subsystem in one of two ways: either by training an ML model from scratch via the Model Trainer and saving the trained model in the Model Repository, or by providing an API to an externally deployed model along with a usage mechanism, stored as part of Model APIs. Together, the Model Repository and Model APIs form the Saved Models database, which is essential for providing models to the Pipeline Orchestrator. In the Bhashini Project, all the necessary models have already been developed and have been made available for use via APIs. We are leveraging these APIs to build our inference pipelines.
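To illustrate the two registration paths, the following is a minimal Python sketch of what a Saved Models entry might look like; the paper does not describe the actual schema, so the class and field names are assumptions made purely for illustration.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SavedModel:
        # Hypothetical record in the Saved Models database.
        model_id: str
        task: str                                # e.g. "ASR", "MT", "TTS", "OCR"
        source_language: Optional[str] = None    # e.g. "hi"
        target_language: Optional[str] = None    # e.g. "en" (relevant for MT)
        api_endpoint: Optional[str] = None       # set when the model is served externally (Model APIs)
        repository_path: Optional[str] = None    # set when a trained artifact lives in the Model Repository

    # In the Bhashini setting, models are consumed through APIs, so an entry
    # would carry an endpoint rather than a repository path (URL is illustrative):
    mt_hi_en = SavedModel(model_id="mt-hi-en", task="MT", source_language="hi",
                          target_language="en", api_endpoint="https://example.org/mt/hi-en")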
B. The Pipeline Orchestrator

Fig. 3: The Pipeline Orchestrator

The Pipeline Orchestrator is responsible for managing all inference-related processes within the system. At the core of this subsystem is the Pipeline Designer, who uses the Pipeline Builder component to construct pipelines. A pipeline is essentially a sequence of processing steps, each of which is represented by a node n_i ∈ N for all i ∈ {1, ..., k} in a pipeline with k steps. These nodes may include data pre-processing components, various types of machine learning (ML) models, and specialized components known as adapters. The sequence in which steps are to be executed is determined by the set of directed edges E, where each edge e_{n_i → n_j}, with i ≠ j, denotes that a directed edge from node n_i to node n_j exists in the pipeline.
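To make the graph model concrete, here is a minimal Python sketch of nodes, edges, and a pipeline. It is a sketch under assumed names, not LoCoML's actual data structures; the later sketches in this section reuse these classes.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        node_id: str
        kind: str                  # "input", "preprocess", "ASR", "MT", "TTS", "OCR", "adapter", ...
        properties: dict = field(default_factory=dict)   # P_{n_i}: properties identifying the node

    @dataclass
    class Edge:
        source: Node               # n_i
        target: Node               # n_j, with i != j

    @dataclass
    class Pipeline:
        nodes: list[Node] = field(default_factory=list)  # the set N
        edges: list[Edge] = field(default_factory=list)  # the set E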
An adapter is a specialized node in the pipeline designed to ensure compatibility between nodes, especially when the output format of one model doesn't align with the input requirements of the next. Adapters play a critical role when models handle different data types or structures. For instance, an OCR model might output raw text that contains unrecognized or misinterpreted characters when processing an image, whereas a downstream MT model requires clean, structured text as input. The adapter acts as an intermediary, cleaning and reformatting the OCR output to meet the MT model's requirements. This bridging function of adapters is essential for maintaining smooth data flow and reliable integration among various models within the pipeline, ensuring that each model receives compatible, usable data.
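As an illustration of the adapter idea, the sketch below shows a hypothetical adapter step that normalises raw OCR text before it reaches an MT model. The cleaning rules are placeholders chosen for the example, not LoCoML's actual adapter logic.

    import re

    def ocr_to_mt_adapter(raw_ocr_text: str) -> str:
        # Drop characters the OCR stage could not recognise (often emitted as the
        # Unicode replacement character).
        text = raw_ocr_text.replace("\ufffd", "")
        # Collapse whitespace and stray line breaks introduced by the image layout.
        text = re.sub(r"\s+", " ", text).strip()
        return text  # clean, structured text the downstream MT model expects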
Fig. 4: The Pipeline Builder

The Pipeline Builder consists of a Graph Builder, which the Pipeline Designer uses to create the graph, that is, the set of all nodes N and the set of edges E that define the pipeline's structure. Model information, sourced from the Model Hub, is parsed by the Model Loader, which loads the model and makes it available for use as a node within the pipeline.

For instance, in the case of speech-to-text processing as mentioned in Section I, the Pipeline Designer utilizes the Graph Builder to construct the pipeline step by step. The process begins by connecting an input node, a specialized node that serves as the entry point for data (the user can either upload a dataset or provide a link to an external dataset), to an ASR model. The ASR model processes the input audio to generate textual data, which is then linked to the input of an MT model. If a direct MT model translating from the input language to the desired output language is unavailable in the Model Hub, multiple MT models can be chained together in sequence. For example, an intermediate translation step might first convert the input text to a bridge language (e.g., English) before translating it into the target language. This chaining capability ensures flexibility in constructing pipelines for complex language translation tasks.
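Continuing the speech-to-text example, such a graph could be assembled roughly as follows, reusing the illustrative Node, Edge, and Pipeline classes sketched earlier. The model identifiers and language codes are hypothetical.

    inp      = Node("in-1",  "input", {"source": "uploaded_dataset"})
    asr      = Node("asr-1", "ASR",   {"language": "hi"})
    mt_hi_en = Node("mt-1",  "MT",    {"source_language": "hi", "target_language": "en"})
    mt_en_ta = Node("mt-2",  "MT",    {"source_language": "en", "target_language": "ta"})  # bridge via English

    pipeline = Pipeline(
        nodes=[inp, asr, mt_hi_en, mt_en_ta],
        edges=[Edge(inp, asr), Edge(asr, mt_hi_en), Edge(mt_hi_en, mt_en_ta)],
    )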
During pipeline construction, validation is managed by the Pipeline Validator, a component within the Pipeline Builder. Each node n_i ∈ N has a set of properties P_{n_i} associated with it, which are used to uniquely identify the node. We define a rule r to be a construct whose purpose is to check whether some properties of nodes n_i and n_j satisfy some boolean constraint, as defined in Algorithm 1. Note that the validateRuleConstraint function is a boolean function that is specific to whichever rule r is utilizing it.

Algorithm 1 Rule evaluation procedure
Require: Source node n_i, destination node n_j
Ensure: Boolean indicating whether this rule has been satisfied by nodes n_i and n_j
1: procedure EVALUATE(n_i, n_j)
2:   if property p_x ∈ P_{n_i} and property p_y ∈ P_{n_j} then
3:     return validateRuleConstraint(n_i → p_x, n_j → p_y)
4:   else
5:     return false
6:   end if
7: end procedure
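Algorithm 1 translates naturally into a small rule class. The sketch below is one possible Python rendering built on the illustrative Node class above, with the rule-specific constraint supplied as a callable; names and the exact signature are assumptions, not LoCoML's implementation.

    class Rule:
        def __init__(self, prop_x, prop_y, validate_rule_constraint):
            self.prop_x = prop_x   # property expected on the source node n_i
            self.prop_y = prop_y   # property expected on the destination node n_j
            # Rule-specific boolean check; here it receives both nodes so it can
            # inspect their types and property values.
            self.validate_rule_constraint = validate_rule_constraint

        def evaluate(self, ni, nj) -> bool:
            # Algorithm 1: the rule is satisfied only if both required properties
            # exist and the rule-specific constraint holds for this pair of nodes.
            if self.prop_x in ni.properties and self.prop_y in nj.properties:
                return self.validate_rule_constraint(ni, nj)
            return False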
As defined in Algorithm 2, the Pipeline Validator takes the set of all these rules r ∈ R as the RuleSet, along with the source node n_i and the destination node n_j, and verifies whether all the rules present in the RuleSet are satisfied for a potential edge between these nodes. If they are, then an edge e_{n_i → n_j} can exist between them. Otherwise, the user is notified about the invalid edge.

Algorithm 2 Pipeline Validator's edge validation procedure
Require: Source node n_i, destination node n_j, RuleSet R
Ensure: Boolean indicating if an edge e_{n_i → n_j} can exist
1: procedure CANEDGEEXIST(n_i, n_j, R)
2:   for each rule r ∈ R do
3:     if not r.evaluate(n_i, n_j) then
4:       return false
5:     end if
6:   end for
7:   return true
8: end procedure
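A direct Python rendering of Algorithm 2 might look like the following sketch; LoCoML's actual implementation may differ.

    def can_edge_exist(ni, nj, rule_set) -> bool:
        # Algorithm 2: an edge e_{n_i -> n_j} may exist only if every rule in the
        # RuleSet is satisfied for this pair of nodes.
        for rule in rule_set:
            if not rule.evaluate(ni, nj):
                return False
        return True

If can_edge_exist returns false for an attempted connection, the Pipeline Builder would notify the user that the edge is invalid.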
In the Bhashini Project, the Pipeline Validator enforces compatibility rules between model connections to ensure a logical workflow. Specifically, it verifies whether the output of an ASR model is linked to an MT or TTS model's input, an OCR model's output is connected to an MT or TTS model's input, the output of an MT model is connected to another MT or TTS model's input, and the output of a TTS model is connected to an ASR model's input. Furthermore, it also checks whether the model supports the chosen target-source language combination as selected in the nodes.

For example, the validateRuleConstraint function, when verifying whether an OCR model can be connected to an MT or TTS model, first checks if the source node (n_i) is an OCR model. If this condition is met, the function examines the type of the target node (n_j). If the target node is identified as either an MT or TTS model, the function further validates the compatibility of the output language from the OCR model and the input language required by the MT or TTS model. Only if this language pairing is valid for the chosen MT/TTS model does the function return true, indicating the edge is valid. Otherwise, it returns false, marking the edge as invalid. If the source node is not an OCR model, this rule does not apply, and the function defaults to returning true, since the Pipeline Validator requires every rule to be satisfied for a given edge; any rules that are not applicable must therefore default to true.
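That OCR example could be expressed as one such constraint function. The sketch below assumes the illustrative Node class from earlier, with model type in kind and languages stored as node properties; it is one way to encode the described behaviour, not the framework's actual code.

    MT_OR_TTS = {"MT", "TTS"}

    def ocr_language_constraint(ni, nj) -> bool:
        # The rule applies only when the source node is an OCR model.
        if ni.kind != "OCR":
            return True                  # non-applicable rules default to true
        if nj.kind not in MT_OR_TTS:
            return False                 # OCR output may only feed an MT or TTS model
        # The language produced by the OCR model must match the language
        # expected by the downstream MT/TTS model.
        return ni.properties.get("output_language") == nj.properties.get("input_language")

Registered with prop_x = "output_language" and prop_y = "input_language", this function would plug into the Rule sketch shown after Algorithm 1.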
After a pipeline has been created and validated, the Pipeline Designer can request the Pipeline Builder to save it in the Pipeline Repository, which stores all completed pipelines. Alternatively, the Pipeline Designer can choose to test the pipeline without saving by directing the Pipeline Builder to send it directly to the Pipeline Executor. In this case, the Pipeline Executor executes the pipeline without retrieving its details from the Pipeline Repository.
Fig. 5: A sequence diagram of the pipeline execution process

The Pipeline User initiates a pipeline execution request (refer to Figure 5), which can be done through various methods, such as sending an API request to a designated endpoint or using a graphical interface. Regardless of the method, the specified pipeline is retrieved from the Pipeline Repository, and the model details are loaded from the Model Hub.

After all the nodes and edges of the pipeline have been loaded and are in place, the Pipeline Executor plays a central role in handling runtime operations: it receives the validated pipeline, either from the Pipeline Repository or directly from the Pipeline Builder as mentioned earlier, for inference tasks. After pipeline initiation, the Pipeline Executor manages data flow across nodes, orchestrating each component's execution based on the defined workflow. During inference, the Pipeline Executor processes data inputs through each node, including pre-processing stages, ML models, and adapters, ensuring seamless transitions from one stage to the next. Upon completing the entire pipeline, the final inference output is returned by the Pipeline Executor to the Pipeline User, providing them with the results of the execution.

In the Bhashini Project, the Inference Zoo serves as the frontend interface for the Pipeline Repository, showcasing all available saved pipelines. Users can browse the Inference Zoo, select a desired pipeline for execution, and obtain its corresponding API endpoint (generated at the time the pipeline is saved). By sending input data to this endpoint, the Pipeline Executor processes the request, executes the selected pipeline, and returns the output to the user, streamlining the interaction between users and the system.
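As an illustration of that interaction, a Pipeline User could invoke a saved pipeline through its generated endpoint roughly as follows. The URL and payload shape are hypothetical and stand in for whatever the Inference Zoo actually exposes.

    import requests

    # Hypothetical endpoint generated when the pipeline was saved in the Pipeline Repository.
    ENDPOINT = "https://example.org/pipelines/asr-mt-hi-ta/run"

    response = requests.post(ENDPOINT, json={"input": "https://example.org/data/sample.wav"})
    response.raise_for_status()
    print(response.json())   # final inference output returned by the Pipeline Executor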
IV. PRELIMINARY EVALUATION

As a preliminary step, experiments were conducted across two distinct machine learning pipeline scenarios. We aimed to measure the additional overhead in running the pipeline that is introduced by the LoCoML platform (https://anonymous.4open.science/r/XYZ-Project), on top of the model execution time, in order to demonstrate the overhead's relative impact.

A. Experimental Setup

We deploy the backend of the LoCoML framework locally in a Docker container, using the python:3.12-slim base image for the execution environment, on a laptop with a Ryzen 7 5800H CPU running at a 3.2 GHz base clock and 16 GB of DDR4 RAM clocked at 3200 MHz. Further, we evaluate LoCoML using two different scenarios:
1) Machine Translation Pipeline: We tested pipelines containing 1 to 16 MT models.
2) Speech Processing Pipeline: Pipelines cannot consist of only ASR or only TTS nodes because of input-output mismatches: ASR models take audio as input and produce text, while TTS models take text as input and produce audio. Therefore, we tested pipelines with 1 to 8 pairs of ASR and TTS models chained together.

B. Performance Analysis

TABLE I: Performance Comparison of MT and ASR + TTS Tasks

(a) Performance Analysis for the MT Task

# Models | Total Runtime (ms) | Model Runtime (ms) | Additional Overhead (ms)
1        | 3019.999           | 2967.429           | 52.571
2        | 6537.504           | 6431.963           | 105.541
3        | 8805.692           | 8641.812           | 163.880
4        | 11249.154          | 11045.455          | 203.698
6        | 17639.823          | 17354.786          | 285.037
8        | 23706.186          | 23286.671          | 419.514
12       | 38774.544          | 38170.085          | 604.459
16       | 46882.215          | 46036.834          | 845.381

(b) Performance Analysis for the ASR + TTS Task

# Pairs  | Total Runtime (ms) | Model Runtime (ms) | Additional Overhead (ms)
1        | 21250.959          | 21144.014          | 106.945
2        | 36543.591          | 36350.016          | 193.576
4        | 71924.239          | 71503.728          | 420.511
6        | 105187.744         | 104544.243         | 643.502
8        | 141696.802         | 140738.753         | 958.050

Figure 6 demonstrates a pipeline constructed using the drag-and-drop interface of the LoCoML platform integrated into the Bhashini project. Further, Table I(a) shows the performance analysis for the MT task, while Table I(b) shows the performance analysis for the ASR + TTS task and how it scales as the number of models increases.
Fig. 6: An inference pipeline built using the LoCoML framework

Our initial results demonstrate that LoCoML's overhead increases linearly with the number of models, scaling from 52.57 ms for a single MT model to 845.38 ms for 16 models, while remaining negligible at only 1.8% of the total runtime. This linear overhead growth, coupled with the fact that 98.2% of the execution time is spent on model inference, indicates that our platform introduces minimal performance impact while effectively managing complex ML pipelines.
V. RELATED WORK

Recent advances in MDE have highlighted the role of low-code platforms in simplifying ML development and deployment. Naveed et al. [5] recommend that researchers and practitioners develop low-code platforms for systems with ML components to make ML capabilities more accessible to non-experts, as these platforms can significantly reduce development complexity and time to deployment. Similarly, Iyer et al. [9] introduced Trinity, a no-code platform specifically designed to handle complex spatial datasets, highlighting the versatility and scalability low-code solutions bring to ML applications. In addition, Esposito et al. [13] emphasize the importance of user-configurable controls within AI systems, proposing that a balance between automation and manual adjustments can address diverse user needs effectively. Sahay et al. [14] provide a detailed survey of various low-code development platforms, identifying key features such as graphical interfaces, interoperability, and scalability as critical for decision-makers evaluating such platforms.

LoCoML builds on these principles, offering a flexible, user-friendly platform that enables non-expert users to construct and customize ML pipelines, facilitating tasks like data preprocessing, model training, and inference without extensive coding knowledge [15]. This approach empowers users to iteratively reconfigure workflows, bridging adaptability gaps noted in previous studies and enhancing both accessibility and control [16], [17]. Unlike traditional ML systems, LoCoML allows users to adjust pipelines dynamically, aligning with recommendations for integrating user-centric features and configurable controls within ML platforms [18], [19].

Existing platforms like Azure Machine Learning Designer [20] and AWS SageMaker Pipelines [21] provide robust solutions for building and managing machine learning workflows. Azure Machine Learning Designer enables users to create pipelines via a drag-and-drop interface, integrating seamlessly within Azure's ecosystem. Similarly, AWS SageMaker Pipelines offers a complete suite for creating, deploying, and managing workflows, tightly coupled with AWS-native services. However, these platforms often face challenges when dealing with custom models from external sources, primarily because they are largely designed to operate within their respective ecosystems [22], [23]. Porting custom models to meet the specific input-output constraints prescribed by these platforms is tedious. In contrast, our framework addresses this gap by providing a unified and adaptable solution capable of accommodating diverse, partner-contributed models, ensuring compatibility and seamless integration, capabilities that are currently absent in these existing platforms.

VI. CONCLUSION AND FUTURE WORK

To conclude, we have introduced LoCoML, a low-code framework designed to streamline the development of ML inference pipelines. LoCoML has been successfully integrated into the Bhashini Project with a drag-and-drop interface to create pipelines, where it operates in a real-world production environment, supporting users in building and managing inference pipelines efficiently. The framework's simple interface allows users, including those without extensive coding skills, to connect and control various ML models seamlessly. Our evaluation across multiple scenarios, including TTS, MT, and ASR, indicates that LoCoML has significantly simplified the process of constructing complex, multimodel workflows, making ML pipeline development more accessible and practical for a diverse range of users.

Looking ahead, we aim to expand LoCoML's capabilities in response to evolving requirements within the Bhashini Project. Stakeholders of this project are exploring the potential of extending the framework to support model training, thus creating an end-to-end solution covering both training and inference within the same pipeline. Additionally, we plan to conduct further studies to assess LoCoML's effectiveness in terms of user experience, usability, and performance. These future enhancements will ensure that LoCoML continues to evolve as a versatile and robust framework, meeting the growing demands of ML practitioners and researchers within the Bhashini Project and beyond.
ACKNOWLEDGMENT

The authors acknowledge the anonymous reviewers for their valuable feedback. The authors thank Ayush Agrawal, Harshit Karwal, Mukta Chanda, Rohan Chowdary, Shashwat Dash, Siddharth Mavani, and Supreeeth S Karan for their assistance in developing the code artifacts necessary to build this framework. The authors would also like to acknowledge the Bhashini Engineering Unit (https://bhashini.gov.in/sahyogi/anushandhan-mitra/15) team for the support.

REFERENCES
[1] G. A. Lewis, H. Muccini, I. Ozkaya, K. Vaidhyanathan, R. Weiss, and
L. Zhu, “Software architecture and machine learning (dagstuhl seminar
23302),” 2024.
[2] S. Amershi, A. Begel, C. Bird, R. Deline, H. Gall, E. Kamar, N. Nagappan,
B. Nushi, and T. Zimmermann, “Software engineering for machine
learning: A case study,” pp. 291–300, 05 2019.
[3] H. Muccini and K. Vaidhyanathan, “Software architecture for ml-based
systems: What exists and what lies ahead,” in 2021 IEEE/ACM 1st
Workshop on AI Engineering-Software Engineering for AI (WAIN),
pp. 121–128, IEEE, 2021.
[4] M. Brambilla, J. Cabot, and M. Wimmer, Model-driven software
engineering in practice. Morgan & Claypool Publishers, 2017.
[5] H. Naveed, C. Arora, H. Khalajzadeh, J. Grundy, and O. Haggag, “Model
driven engineering for machine learning components: A systematic
literature review,” Information and Software Technology, p. 107423, 2024.
[6] D. Kreuzberger, N. Kühl, and S. Hirschl, “Machine learning operations
(mlops): Overview, definition, and architecture,” 2022.
[7] A. C. Bock and U. Frank, “Low-code platform,” Business & Information
Systems Engineering, vol. 63, pp. 733–740, 2021.
[8] J. Cabot, “Positioning of the low-code movement within the field of
model-driven engineering,” in Proceedings of the 23rd ACM/IEEE
International Conference on Model Driven Engineering Languages and
Systems: Companion Proceedings, pp. 1–3, 2020.
[9] C. V. K. Iyer, F. Hou, H. Wang, Y. Wang, K. Oh, S. Ganguli, and V. Pandey,
“Trinity: A no-code ai platform for complex spatial datasets,” 2021.
[10] G. A. Lewis, I. Ozkaya, and X. Xu, “Software architecture challenges
for ml systems,” in 2021 IEEE International Conference on Software
Maintenance and Evolution (ICSME), pp. 634–638, IEEE, 2021.
[11] D. C. Schmidt et al., “Model-driven engineering,” Computer-IEEE
Computer Society-, vol. 39, no. 2, p. 25, 2006.
[12] C. Di Sipio, D. Di Ruscio, and P. T. Nguyen, “Democratizing the develop-
ment of recommender systems by means of low-code platforms,” in Pro-
ceedings of the 23rd ACM/IEEE international conference on model driven
engineering languages and systems: companion proceedings, pp. 1–9, 2020.
[13] A. Esposito, M. Calvano, A. Curci, G. Desolda, R. Lanzilotti, C. Lorusso,
and A. Piccinno, “End-user development for artificial intelligence: A
systematic literature review,” in International Symposium on End User
Development, pp. 19–34, Springer, 2023.
[14] A. Sahay, A. Indamutsa, D. Di Ruscio, and A. Pierantonio, “Supporting
the understanding and comparison of low-code development platforms,” in
2020 46th Euromicro Conference on Software Engineering and Advanced
Applications (SEAA), pp. 171–178, IEEE, 2020.
[15] J. Bosch, H. H. Olsson, and I. Crnkovic, “Engineering ai systems: A
research agenda,” Artificial intelligence paradigms for smart cyber-physical
systems, pp. 1–19, 2021.
[16] D. De Silva and D. Alahakoon, “An artificial intelligence life cycle: From
conception to production,” Patterns, vol. 3, no. 6, 2022.
[17] G. Giray, “A software engineering perspective on engineering machine
learning systems: State of the art and challenges,” Journal of Systems
and Software, vol. 180, p. 111031, 2021.
[18] M. Steidl, M. Felderer, and R. Ramler, “The pipeline for the continuous
development of artificial intelligence models—current state of research
and practice,” Journal of Systems and Software, vol. 199, p. 111615, 2023.
[19] D. Xin, E. Y. Wu, D. J.-L. Lee, N. Salehi, and A. Parameswaran, “Whither
automl? understanding the role of automation in machine learning
workflows,” in Proceedings of the 2021 CHI Conference on Human
Factors in Computing Systems, pp. 1–16, 2021.
[20] Microsoft, “Azure Machine Learning Designer (v2) Introduction.” https://learn.microsoft.com/en-us/azure/machine-learning/concept-designer?view=azureml-api-2, 2024. Accessed: 2024-11-16.
[21] Amazon, “Amazon SageMaker Introduction.” https://docs.aws.amazon.com/sagemaker/latest/dg/use-auto-ml.html?icmpid=docs_sagemaker_lp/index.html, 2024. Accessed: 2024-11-16.
[22] Microsoft, “How to Deploy Models in Azure Machine Learning Designer.” https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-model-designer?view=azureml-api-1, 2024. Accessed: 2024-11-16.
[23] Amazon, “Bring Your Own Model with Amazon SageMaker Script Mode.” https://aws.amazon.com/blogs/machine-learning/bring-your-own-model-with-amazon-sagemaker-script-mode/, 2021. Accessed: 2024-11-16.
