Madhusudan Bokku
Email Id [email protected]
Key Skills: Data Science, Python, SQL, Predictive Models, Prescriptive Models
Communication Skills: Good
Actively Interviewing Elsewhere? Yes
Professional Summary
13+ years of IT experience; Senior Data Scientist with 5+ years of broad-based experience building data-intensive
applications and overcoming complex architectural and scalability issues in diverse industries.
Proficient in predictive modeling, data processing, and data mining algorithms, as well as coding languages
including Python and Java. Capable of creating, developing, testing, and deploying highly adaptive, diverse
services to translate business and functional requirements into substantial deliverables.
Work History
• Develop action plans to mitigate risks in decision making while increasing profitability through data
science.
• Drive interaction and partnership with managers to ensure active cooperation in identifying and
defining analytical needs, and generate pull-through of insights with the business.
• Develop algorithms using natural language processing and deep learning models for predictive maintenance.
• Design algorithms to track and detect anomalies in data from multiple sensors for the integrity industry.
• Demonstrate knowledge and execution of application programming interface development and test
automation.
• Worked with several standard Python packages such as boto3, pandas, NumPy, SciPy, wxPython, and PyTables.
• Expertise in working with databases such as Microsoft SQL Server, Oracle, MySQL, and PostgreSQL, with good
knowledge of NoSQL databases such as MongoDB.
• Excellent working knowledge of UNIX and Linux shell environments and command-line utilities.
• Good experience working with Jenkins and Artifactory for continuous integration and deployment.
• Expert in version control systems such as Git, GitHub, and SVN; migrated repositories from SVN to GitHub.
• Knowledge of WAMP (Windows, Apache, MySQL, and Python) and LAMP (Linux, Apache, MySQL, and Python)
architectures.
• Experience setting up Airflow environments and building DAGs.
• Experience creating Jenkins jobs for CI/CD pipelines and using Artifactory.
• Experienced in working with Spark DataFrames and optimizing jobs to meet SLAs.
• Worked on the back end using Scala 2.12.0 and Spark 2.0.2 to implement several pieces of aggregation logic.
• Good experience handling errors/exceptions and debugging issues in large-scale applications.
• Extensive use of IDEs such as PyCharm and Eclipse, and development tools such as Git, Bitbucket, and SVN.
• Hands-on experience in data mining and data warehousing using ETL tools.
• Experience with Agile, Scrum, and Waterfall methodologies. Used ticketing systems such as Jira, ServiceNow, and
other proprietary tools.
PROFESSIONAL EXPERIENCE
Company Name: Adactin From: Apr 2022 To: Till Date
Duration: 14 months
Role: Data Scientist Team Size: 5
Responsibilities:
• Report on and review defects posted by team members; work with stakeholders to
determine how to use business data for valuable business solutions
• Created 2 apps that classified multi-million transaction volumes as genuine or fraudulent; fit the models and
plotted performance curves
• Used R with algorithms such as Decision Trees, Logistic Regression, Artificial Neural
Networks, and Gradient Boosting Classifiers
• Created an application in R against 6 transaction databases
• Identify new data sources and assess their accuracy
• Browse and analyze enterprise databases to simplify and improve product development,
marketing techniques, and business processes
• Create custom data models and algorithms
• Use predictive models to improve customer experience, ad targeting, revenue generation,
and more
• Develop the organization’s model-quality testing and A/B testing framework
• Coordinate with various technical/functional teams to implement models and monitor
results
• Develop processes, techniques, and tools to analyze and monitor model performance while
ensuring data accuracy
• Developed Airflow big data jobs to load large volumes of data into the S3 data lake and then into
Snowflake.
• Working with the AWS stack: S3, EC2, EMR, Athena, Glue, Redshift, DynamoDB, IAM, and
Lambda.
• Responsible for the design and development of Spark SQL code based on functional
specifications
• Involved in migrating objects from Teradata and SQL Server to S3 and then to Snowflake.
• Developed highly optimized Spark applications to perform data cleansing, validation,
transformation, and summarization activities
• Experienced in building and architecting multiple data pipelines and end-to-end ETL and ELT
processes for data ingestion and transformation in AWS and Spark.
• Fine-tuned Spark applications/jobs to improve the efficiency and overall processing time
of the pipelines.
• Involved in continuous integration of applications using Jenkins.
• Built a pipeline to ingest data from Google Sheets into AWS S3 using the Python API and AWS boto3
• Working on migrating data pipelines from a legacy Airflow cluster to the in-house MAP-AIRFLOW
• Created DDL using TQL to build pinboards for executives and SMEs in ThoughtSpot.
• Heavily involved in testing Snowflake to determine the most effective use of cloud
resources.
• Strong experience writing code using the Python, PySpark, and Spark APIs to
analyze data.
• Communicate with peers and supervisors routinely; document work, meetings, and
decisions.
Company Name: Atos - Syntel Pvt Ltd From: Jan 2018 To: Dec 2018
Company Name: Atos - Syntel Pvt Ltd From: Nov 2015 To: Jan 2018
Company Name: CSC India Pvt Ltd From: Feb 2012 To: Nov 2014
Company Name: CSC India Pvt Ltd From: Jul 2008 To: Jan 2010
Education
MCA, Computers, 2006
B.Sc., Computers, 2000
Personal Details
Gender Male
Nationality Indian
Passport No P7373708
Mailing Address Hyderabad
Contact No. 8639815114
I hereby declare that the information I have furnished is authentic and true to the best of my
knowledge.