
“Applications Dev & Test - Data Scientist - 132731-1”

Candidate Name [First & Last Legal Name as per Govt. Records] Madhusudanreddy Bokku

Contact Number 8639815114

Email Id [email protected]

Candidate LinkedIn Profile URL (Link) NA


Previous Employer Name & Designation JPMC (Payroll: Adactin), Data Scientist

Notice Period 30 days, negotiable down to 20 days

Reason for the change Looking for a better opportunity

Daily Bill Rate (INR)


Relevant/Significant Experience 8 Years

List top three key skills as per the role Data Science, Python, SQL, Predictive Models, Prescriptive Models.

Communication skills [Good/Average] Good
Actively Interviewing Elsewhere? Yes

Holding Offer [YES/NO] No


Candidate Current Location Hyderabad

Willing to relocate Yes

Previous Microsoft experience? No


If Yes,
please select if the candidate is a Fulltime
Employee/Contingent Staff/Outsourced Staff?
Share previous work history records like –
• Microsoft Alias (email ID):
• Microsoft Personnel ID (Employee Number)
• Manager Details
• Microsoft Building Location & Corp-net Access
Details

Educational Degree, year of completion MCA, Computers, 2006

Additional Information/Comments
1. Madhusudan has excellent experience in Data Science.
2. He has 8 years of experience in Python and 5+ years of experience in SQL Server.
3. He has excellent experience in predictive and prescriptive models.
4. He has good experience in Machine Learning, Deep Learning, Artificial Intelligence, and algorithms.
Madhusudanreddy Bokku
[email protected]
Sr Consultant
Mobile: +91 8639815114
linkedin.com/in/madhu-bokku-56988551

Professional Summary

13+ years of IT experience; Senior Data Scientist with 5+ years of broad-based experience in building data-
intensive applications and overcoming complex architectural and scalability issues in diverse industries.
Proficient in predictive modeling, data processing, and data mining algorithms, as well as coding languages
including Python and Java. Capable of creating, developing, testing, and deploying highly adaptive, diverse
services that translate business and functional requirements into substantial deliverables.

Work History
• Develop action plans to mitigate risks in decision making while increasing profitability through data science.
• Drive interaction and partnership among managers to ensure active cooperation in identifying and defining
analytical needs and generating pull-through of insights with the business.
• Develop algorithms using natural language processing and deep learning models for predictive maintenance.
• Design algorithms to track and detect anomalies in multi-sensor data for the integrity industry.
• Demonstrate knowledge and execution of application programming interface development and test
automation.
• Worked on standard Python packages such as boto3, Pandas, NumPy, SciPy, wxPython, and PyTables.
• Expertise working with databases including Microsoft SQL Server, Oracle, MySQL, and PostgreSQL, with good
knowledge of the NoSQL database MongoDB.
• Excellent working knowledge of UNIX and Linux shell environments and command-line utilities.
• Good experience working with Jenkins and Artifactory for continuous integration and deployment.
• Expert in version control systems like Git, GitHub, and SVN; migrated repos from SVN to GitHub.
• Knowledge of WAMP (Windows, Apache, MySQL, Python) and LAMP (Linux, Apache, MySQL, Python)
architectures.
• Experience setting up Airflow environments and building DAGs.
• Experience creating Jenkins jobs for CI/CD pipelines and using Artifactory.
• Experienced working with Spark DataFrames and optimizing SLAs.
• Worked on the back end using Scala 2.12.0 and Spark 2.0.2 to implement several aggregation routines.
• Good experience handling errors/exceptions and debugging issues in large-scale applications.
• Extensive use of IDE tools like PyCharm and Eclipse, and development tools like Git, Bitbucket, and SVN.
• Hands on Experience in Data mining and Data warehousing using ETL Tools.
• Experience with Agile, Scrum and Waterfall methodologies. Used ticketing systems like Jira, Service Now and
other proprietary tools.
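The sensor-anomaly detection mentioned above can be illustrated with a minimal z-score sketch. This is illustrative only; the function name, readings, and threshold are hypothetical and not taken from the actual project:

```python
from statistics import mean, pstdev

def zscore_anomalies(readings, threshold=3.0):
    """Flag indices whose reading deviates more than `threshold`
    standard deviations from the mean of the series."""
    mu = mean(readings)
    sigma = pstdev(readings)
    if sigma == 0:
        return []  # a constant series has no outliers
    return [i for i, x in enumerate(readings)
            if abs(x - mu) / sigma > threshold]

# A sensor stream with one obvious spike at index 5.
stream = [10.1, 10.0, 9.9, 10.2, 10.0, 42.0, 10.1, 9.8, 10.0, 10.1]
print(zscore_anomalies(stream, threshold=2.0))  # → [5]
```

Production pipelines would typically use rolling windows and per-sensor baselines rather than a single global mean, but the thresholding idea is the same.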

Python & Java Skills:

• Good experience with both Python and Java.


TECHNICAL SKILLS

Tools Tableau | SAS | MATLAB | Jupyter | NLTK | Microsoft Excel (Pivot Tables) | Power BI | Jira
Languages J2SE | Python | C#
Programming SAS (Base SAS and Macros), SQL
Programming Languages Python | R | SQL | HTML | C | C++ | Java
Python Pandas | NumPy | Scikit-learn | Matplotlib | PyTorch | SciPy | NLTK | Keras
R Libraries ggplot | dplyr | tidyr | ggmap | mlr
Data Analysis Data Mining | Data Visualization | Exploratory Data Analysis
Machine Learning Linear and Logistic Regression | K-means Clustering | Decision Trees | Naive Bayes | Classification | ANOVA | NLP | Text Analysis | TF-IDF | RNN
Supervised Learning Linear and Logistic Regression | Decision Trees | Support Vector Machines (SVM)
Unsupervised Learning K-means Clustering | Principal Component Analysis (PCA)
Data Visualization Excel | Google Sheets
Development Tools Eclipse | IntelliJ | PyCharm IDE
Domain Banking | Insurance | Health Care
Python Libraries Boto3 | Flask | OS | SQLAlchemy | Bootstrap | jQuery | Pandas | NumPy | PySide | SciPy | wxPython | PyTables
Databases Microsoft SQL Server | Oracle | MySQL | MS Access | MongoDB
Cloud Technologies AWS (S3 | CloudWatch | API Gateway | Step Functions | RDS | CloudFormation | Lambda | EC2 | SES | SNS | Batch)
Big Data Hive | Pig | PySpark | Spark
Version Control Git | GitHub | Bitbucket
Methodologies Agile | Scrum | Waterfall

PROFESSIONAL EXPERIENCE
Company Name: Adactin From: Apr-2022 - To: Till Date

Project Name Payment Connect


Client Name JPMC
Technology Python|Tableau|SAS|Matplotlib|Pyspark|NumPy|Jupyter|GitHub|Kafka
| Scala| ML Algorithms| Supervised, unsupervised learning techniques|

Duration 14 months
Role Data Scientist Team Size 5
Responsibilities:
• Reporting and reviewing defects posted by team members; work with stakeholders to
determine how to use business data for valuable business solutions
• Created 2 apps that classified multi-million transactions as genuine or fraudulent, fit the
models, and plotted performance curves
• Used R with algorithms such as Decision Trees, Logistic Regression, Artificial Neural
Networks, and Gradient Boosting Classifier
• Created an application in R against 6 transaction databases
• Search for ways to get new data sources and assess their accuracy
• Browse and analyze enterprise databases to simplify and improve product development,
marketing techniques, and business processes
• Create custom data models and algorithms
• Use predictive models to improve customer experience, ad targeting, revenue generation,
and more
• Develop the organization’s test model quality and A/B testing framework
• Coordinate with various technical/functional teams to implement models and monitor
results
• Develop processes, techniques, and tools to analyze and monitor model performance while
ensuring data accuracy
• Developed Airflow big data jobs to load large volumes of data into the S3 data lake and then
into Snowflake.
• Working with AWS stack S3, EC2, EMR, Athena, Glue, Redshift, DynamoDB, IAM, and
Lambda.
• Responsible for design and development of Spark SQL code based on Functional
Specifications
• Involved in Migrating Objects from Teradata, SQL Server to S3 and then on Snowflake.
• Developed highly optimized Spark applications to perform data cleansing, validation,
transformation, and summarization activities
• Experienced in building and architecting multiple Data pipelines, end to end ETL and ELT
process for Data Ingestion and transformation in AWS and Spark.
• Fine-tuning spark applications/jobs to improve the efficiency and overall processing time
for the pipelines.
• Involved in continuous Integration of applications using Jenkins.
• Built a pipeline to ingest data from Google Sheets into AWS S3 using the Python API and AWS Boto3
• Working on migrating data pipelines from a legacy Airflow cluster to in-house MAP-AIRFLOW
• Created DDL using TQL to create pinboards for executives and SMEs on ThoughtSpot.
• Heavily involved in testing Snowflake to understand best possible way to use the cloud
resources.
• Strong experience in writing code using Python API, PySpark API, and Spark API for
analyzing the data.
• Communicate with peers and supervisors routinely, document work, meetings, and
decisions.
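The Google Sheets-to-S3 ingestion bullet above could look roughly like the sketch below. Bucket, key, and helper names are hypothetical; the boto3 upload requires AWS credentials, so it is kept separate from the CSV serializer, which works offline:

```python
import csv
import io

def rows_to_csv_bytes(rows):
    """Serialize a list of rows (as pulled from a Google Sheets API
    response) into UTF-8 CSV bytes suitable for an S3 upload."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue().encode("utf-8")

def upload_rows_to_s3(rows, bucket, key):
    """Upload sheet rows to S3 as a single CSV object (needs AWS creds)."""
    import boto3  # imported lazily so rows_to_csv_bytes stays testable offline
    boto3.client("s3").put_object(
        Bucket=bucket, Key=key, Body=rows_to_csv_bytes(rows)
    )

payload = rows_to_csv_bytes([["id", "amount"], ["1", "250"]])
# payload == b"id,amount\r\n1,250\r\n" (csv.writer defaults to \r\n line endings)
```

In practice the rows would come from the Google Sheets API and the object key would encode a date partition, but the serialize-then-`put_object` shape is the core of such a pipeline.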

Company Name: Infosys (IDC). From: Jun-2021 -To: Apr-2022

Project Name Helios Application.


Client Name HSBC.
Technology Python|Tableau|SAS|Matlab|Jupyter|GitHub|SNOWFLAKE|
Duration 10 months.
Role Data Scientist. Team Size 3
Responsibilities:
• Deployed a recommendation engine to production to conditionally recommend other menu
items based on past order history, increasing average order size by 7%.
• Implemented various time series forecasting techniques to predict surge in orders,
lowering customer wait by 10 minutes
• Designed a model in a pilot to increase incentives for drivers during peak hours, increasing
driver availability by 22%
• Led a team of 3 data scientists to model the ordering process 5 unique ways, reported
results, and made recommendations to increase order output by 9%
• Search for ways to get new data sources and assess their accuracy
• Browse and analyze enterprise databases to simplify and improve product development,
marketing techniques, and business processes
• Create custom data models and algorithms.
• Use predictive models to improve customer experience, ad targeting, revenue generation,
and more
• Develop the organization’s test model quality and A/B testing framework
• Coordinate with various technical/functional teams to implement models and monitor
results
• Develop processes, techniques, and tools to analyze and monitor model performance while
ensuring data accuracy
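As a rough illustration of the order-surge forecasting above, here is a naive moving-average baseline. This is not the production model, and the numbers are made up; real surge prediction would use seasonality-aware methods:

```python
def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window`
    observations -- a naive baseline for order-volume series."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    recent = series[-window:]
    return sum(recent) / window

# Hypothetical hourly order counts; forecast the next hour.
orders = [120, 132, 128, 140, 151, 149]
next_hour = moving_average_forecast(orders, window=3)
```

A baseline like this is useful mainly as the benchmark that a seasonal or ML-based forecaster has to beat.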

Company Name: Wells-Fargo. From: Dec-2019 - To: Apr-2021

Project Name Retirement.


Client Name Wells Fargo.
Technology Python|Tableau|SAS|Matlab|Jupyter|GitHub|
Duration 16 months
Role Data Scientist. Team Size 5
Responsibilities:
• Data mining or extracting usable data from valuable data sources
• Using machine learning tools to select features, create and optimize classifiers
• Carrying out preprocessing of structured and unstructured data
• Enhancing data collection procedures to include all relevant information for developing
analytic systems
• Processing, cleansing, and validating the integrity of data to be used for analysis
• Analyzing large amounts of information to find patterns and solutions
• Developing prediction systems and machine learning algorithms
• Presenting results in a clear manner
• Propose solutions and strategies to tackle business challenges
• Collaborate with Business and IT teams

Company Name: Atos-Syntel Pvt Ltd. From: Jan 2018 - To: Dec 2018

Project Name Humana Healthcare.


Client Name Humana Health Care Inc.
Technology Python|Tableau|SAS|Matlab|Jupyter|GitHub|
Duration 12 months
Role Data Scientist. Team Size 6
Responsibilities:
• Data mining or extracting usable data from valuable data sources
• Using machine learning tools to select features, create and optimize classifiers
• Carrying out preprocessing of structured and unstructured data
• Enhancing data collection procedures to include all relevant information for developing
analytic systems
• Processing, cleansing, and validating the integrity of data to be used for analysis
• Analyzing large amounts of information to find patterns and solutions
• Developing prediction systems and machine learning algorithms
• Presenting results in a clear manner
• Propose solutions and strategies to tackle business challenges
• Collaborate with Business and IT teams

Company Name: Atos-Syntel Pvt Ltd From: Nov 2015 - To: Jan 2018

Project Name iGO iPipeline


Client Name Transamerica
Technology Python|Git|GitHub|SNOWFLAKE|MySQL|AWS|Jenkins|Agile|Jira|
Duration 30 months
Role Data Engineer Team Size 6
Responsibilities:
• Data mining or extracting usable data from valuable data sources
• Using machine learning tools to select features, create and optimize classifiers
• Carrying out preprocessing of structured and unstructured data
• Enhancing data collection procedures to include all relevant information for developing
analytic systems
• Processing, cleansing, and validating the integrity of data to be used for analysis
• Analyzing large amounts of information to find patterns and solutions
• Developing prediction systems and machine learning algorithms
• Presenting results in a clear manner
• Propose solutions and strategies to tackle business challenges
• Collaborate with Business and IT teams

Company Name: CSC India Pvt Ltd. From: Feb 2012 -To: Nov 2014

Project Name Payments


Client Name Citi Bank
Technology Selenium WebDriver with Java and Python | SQL | BDD Cucumber | GitHub |
Git |
Duration 2.9 years
Role QA Automation Developer Team Size 5
Responsibilities:
• Understanding the functional and Non-functional requirements.
• Designed the BDD framework.
• Debugging and execution. Created the whole test framework using Selenium for further
test creation and execution.
• Regression Test cases were written and automated using Selenium Web Driver.
• Documented the entire build and deployment process including detailed step-by-step
instructions
• Supported multiple parallel projects by creating processes & procedures for reusing
existing code
• Extracted and loaded data using Python code and PL/SQL packages
• Developed exhaustive SQL queries to find differences in datasets and determine whether
rolled-out software had fixed issues.
Company Name: CSC India Pvt Ltd. From: Mar 2010 -To: Jan 2012

Project Name UQA (Underwriter)


Client Name Zurich Insurance
Technology Selenium WebDriver with Java | SQL | BDD Cucumber | Git | GitHub | Jira |
Duration 1.9 yrs
Role QA Automation Developer Team Size 5
Responsibilities:
• Understanding the functional and Non-functional requirements.
• Designed the BDD framework.
• Debugging and execution. Created the whole test framework using Selenium for further
test creation and execution.
• Regression Test cases were written and automated using Selenium Web Driver.
• Documented the entire build and deployment process including detailed step-by-step
instructions
• Supported multiple parallel projects by creating processes & procedures for reusing
existing code
• Developed exhaustive SQL queries to find differences in datasets and determine whether
rolled-out software had fixed issues.

Company Name: CSC India Pvt Ltd. From: Jul 2008 -To: Jan 2010

Project Name Assets Management System


Client Name Hnetworth Inc, NJ.
Technology Selenium WebDriver with Java | SQL | BDD Cucumber | Git | GitHub | Jira |
Duration 1.6 years
Role QA Automation Developer Team Size 5
Responsibilities:
• Understanding the functional and Non-functional requirements.
• Designed the BDD framework.
• Debugging and execution. Created the whole test framework using Selenium for further
test creation and execution.
• Regression Test cases were written and automated using Selenium Web Driver.
• Documented the entire build and deployment process including detailed step-by-step
instructions
• Supported multiple parallel projects by creating processes & procedures for reusing
existing code
• Developed exhaustive SQL queries to find differences in datasets and determine whether
rolled-out software had fixed issues.

Education
MCA, Computers, 2006
B.Sc., Computers, 2000

Personal Details

Gender Male
Nationality Indian
Passport No P7373708
Mailing Address Hyderabad
Contact No. 8639815114

I hereby declare that the information I have furnished is authentic and true to the best of my
knowledge.

Madhusudan Reddy Bokku.
