Randy Murrish


Summary

Senior Cloud Architect and DevOps professional with extensive experience in AWS and Kubernetes deployments. I specialize in deploying new and existing applications to the cloud using the unique capabilities of AWS and Kubernetes, with a particular focus on securing the CI/CD pipeline. I build scalable, microservice-based architectures that balance the competing requirements of low cost and high availability.

Patents

  • 7,680,799 - Autonomic control of a distributed computing system in accordance with a hierarchical model
  • 7,788,544 - Autonomous system state tolerance adjustment for autonomous management systems
  • 9,854,037 - Identifying workload and sizing of buffers for the purpose of volume replication

Cloud Portfolio

Cassatt Corporation

My work at Cassatt Corporation in the mid-2000s was my first experience with cloud environments. Cassatt built a software-defined data center environment running on commodity hardware, an early version of what has become the standard for cloud environments.

I was responsible for the monitoring and management portion of the Cassatt product, and that work earned me the first two of my awarded patents. The system monitored deployed applications and responded to relevant events by automatically scaling workloads without operator intervention.

Quantum Corporation

I have worked at Quantum three different times, each engagement focused on extending the primary storage of customers in the Media and Entertainment industry into the cloud. The primary product developed was Q-Cloud, a suite of products that resold provisioned cloud storage to Quantum StorNext Filesystem customers.

The Q-Cloud product was deployed to a multi-region virtual data center in AWS, where it processed millions of storage requests from Quantum customers to S3, Glacier, and Azure Block Storage. The product provisioned storage dynamically and analyzed the associated costs, both to identify issues and to generate billing information for Quantum customers.

In my role as Architect, I designed the multi-region architecture and deployment strategy for the Q-Cloud data center, including continuous integration and continuous deployment pipelines built with Jenkins. In this role I used CloudFormation, CloudTrail, IAM, EC2, AMIs, ELB, RDS, and other AWS technologies.
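As a rough illustration of that deployment work, the sketch below shows the kind of scripted multi-region CloudFormation step a Jenkins pipeline might drive; the region list, stack name, and template are placeholders, not the actual Q-Cloud configuration.

```python
"""Minimal sketch of a multi-region CloudFormation deploy step, assuming one
template shared across regions; names and regions here are placeholders."""

import boto3

REGIONS = ["us-east-1", "us-west-2"]      # hypothetical target regions
STACK_NAME = "qcloud-example-stack"       # hypothetical stack name


def deploy_stack(template_body: str) -> None:
    for region in REGIONS:
        cfn = boto3.client("cloudformation", region_name=region)
        try:
            # Create the stack if it does not exist in this region yet.
            cfn.create_stack(
                StackName=STACK_NAME,
                TemplateBody=template_body,
                Capabilities=["CAPABILITY_IAM"],
            )
        except cfn.exceptions.AlreadyExistsException:
            # Otherwise roll out the new template revision.
            cfn.update_stack(
                StackName=STACK_NAME,
                TemplateBody=template_body,
                Capabilities=["CAPABILITY_IAM"],
            )
```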

Throughout my time at Quantum I was tasked with presenting designs and architectures to C-level staff.

Hitachi Data Systems

At HDS I was a Technical Evangelist and Senior Architect working with a new hyper-converged environment. It was based on VMware vCenter and provided a software-defined data center environment similar to my work at Cassatt.

Bottles Waiting

Bottles Waiting is a bottle service startup based in Denver. I was tasked with architecting a client-server application deployed to AWS to satisfy investor requirements. The product had a classic three-tiered architecture with the RESTful web service deployed to a single-region data center. The application had iOS, Android, and web-based interfaces, and the web interface was delivered through the Amazon CloudFront CDN.

To date, the architecture and application have handled millions of requests for bottle service.

WOW

WOW is a cable company based in Denver. Here I was tasked with transforming their legacy development architecture to support a CI/CD pipeline modeled in AWS CodePipeline. The application development team was responsible for breaking apart their monolithic application into microservices deployed in AWS. Docker containers were deployed into Elastic Beanstalk as well as Docker clusters running on EC2 instances.

I also performed security analysis of existing AWS products used by the business intelligence group, automated remediation of security group changes based on whitelists, and analyzed Amazon Redshift usage patterns to recommend performance improvements.
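A minimal sketch of that whitelist-based remediation idea follows, assuming a Lambda function subscribed to AuthorizeSecurityGroupIngress events via CloudWatch Events; the allowed CIDR blocks are placeholders, not the actual whitelist used at WOW.

```python
"""Sketch of whitelist-based security group remediation, assuming a Lambda
triggered by EC2 AuthorizeSecurityGroupIngress CloudTrail events."""

import boto3

# Hypothetical whitelist of CIDR blocks allowed in ingress rules.
ALLOWED_CIDRS = {"10.0.0.0/8", "203.0.113.0/24"}

ec2 = boto3.client("ec2")


def handler(event, context):
    # The modified group id comes from the CloudTrail record in the event.
    group_id = event["detail"]["requestParameters"]["groupId"]
    group = ec2.describe_security_groups(GroupIds=[group_id])["SecurityGroups"][0]

    # Collect ingress rules whose source CIDR is not on the whitelist.
    offending = []
    for perm in group["IpPermissions"]:
        bad_ranges = [r for r in perm.get("IpRanges", [])
                      if r["CidrIp"] not in ALLOWED_CIDRS]
        if not bad_ranges:
            continue
        rule = {"IpProtocol": perm["IpProtocol"], "IpRanges": bad_ranges}
        if "FromPort" in perm:
            rule["FromPort"] = perm["FromPort"]
            rule["ToPort"] = perm["ToPort"]
        offending.append(rule)

    # Revoke non-whitelisted rules so the group returns to a known-good state.
    if offending:
        ec2.revoke_security_group_ingress(GroupId=group_id,
                                          IpPermissions=offending)
```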

Yipee.io

At Yipee, I was Engineering Manager for the local Denver office, in charge of architecting and building a graphical user interface for Docker Compose and Kubernetes manifest files. Yipee was first deployed to a kops-managed Kubernetes cluster on EC2. Yipee had early access to EKS and eventually transitioned to deploying on EKS.

During my time at Yipee I attended a number of KubeCon, DockerCon, and ContainerWorld events to evangelize Yipee and give presentations about the product.

Building Yipee gave me a deep understanding of the moving parts of a Kubernetes cluster, along with Kubernetes in general. I continue to contribute to the open source version of Yipee.

Charter

At Charter I developed the SecDevOps plan to secure data contained within their Hadoop cluster running in AWS. The plan included handling of personally identifiable information (PII) as well as compartmentalization of the data to prevent large data leaks.

HPE

HPE is transitioning its current supply chain applications to a hybrid cloud infrastructure based on Apache Mesos (DC/OS). I helped HPE develop the CI/CD pipeline that carried work from their legacy development teams through to the deployment of microservices in the DC/OS environment.

Bank of America

At Bank of America I was responsible for writing and implementing the OpenShift/Kubernetes security strategy for the Bank's internal and external Kubernetes clusters. Using the Center for Internet Security (CIS) Benchmarks, I secured existing and net-new deployments of OpenShift, both on-premises on bare-metal hardware and off-premises on AWS and Azure.

In addition, I was a DevSecOps Subject Matter Expert for the Bank and implemented a comprehensive security process in the Bank's standard DevOps pipeline, including source code and artifact scanning. I also participated in a number of working groups to select appropriate products for a secure DevSecOps pipeline.