All modeling projects start with data munging, followed by exploratory analysis. Good visualizations help verify data quality and confirm the patterns found during analysis. Insights from this stage then feed into feature engineering and suggest which modeling strategies to attempt. A pipeline with grid search is set up to find the best hyperparameters, and the resulting analysis identifies the best model for production. For demo purposes, a Flask interface can be built. Depending on the size of the dataset, the necessary compute can be provisioned on AWS using containers. All work is coded in Python in well-documented Jupyter notebooks. Completed projects include natural-language processing and recurrent neural networks. Links to my GitHub portfolio are available upon request.
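The pipeline-plus-grid-search step described above can be sketched as follows. This is a minimal illustration assuming scikit-learn (the source does not name a library); the dataset, estimators, and parameter grid are placeholders, not the actual project code.

```python
# Minimal sketch: a preprocessing + model pipeline tuned with grid search.
# Library (scikit-learn), dataset, and grid values are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Scale features, then classify; grid search tunes the regularization strength.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
param_grid = {"clf__C": [0.01, 0.1, 1.0, 10.0]}

# 5-fold cross-validation over every combination in param_grid.
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```

Wrapping the scaler and classifier in one `Pipeline` keeps the grid search honest: each cross-validation fold refits the scaler on its own training split, avoiding leakage.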
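As an example of the Flask demo interface mentioned above, a single prediction endpoint might look like the sketch below. The route name and stubbed response are hypothetical; a real demo would load the fitted model instead of returning a constant.

```python
# Minimal sketch of a Flask demo endpoint for serving predictions.
# The /predict route and the stubbed model output are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # A real demo would load the trained model and call model.predict here;
    # this stub echoes the input to show the request/response plumbing.
    features = request.get_json()
    return jsonify({"input": features, "prediction": 0})

# To serve the demo locally: app.run(debug=True)
```

The endpoint can be exercised without a running server via Flask's built-in test client, e.g. `app.test_client().post("/predict", json={...})`.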