CIOReview | SEPTEMBER 2024

become overwhelming to migrate an application directly into Kubernetes. What can make this even more difficult is that much of the prevailing philosophy around architecting a Kubernetes cluster is to make the cluster cloud agnostic. It is usually best to think of cloud platform design as an ongoing journey.

Like many companies, FragranceNet's infrastructure resides within a Windows Domain environment, with SQL Server as a key technology. SQL Server management has been handled by an assortment of PowerShell scripts manually installed on a job server. There may not be enough logging around these scripts, since writing to the console from the Task Scheduler does not redirect standard output to the Windows Event Log. Software deployment is also an issue, since a script may need to be modified and then manually uploaded to the job server. The script writer needs to be cognizant of how to handle job failures: they might build a logging system, or simply wrap the script in an exception handler that catches errors and emails the maintainer about the issue. This can quickly become convoluted, as some jobs have expected failures and should only notify after failures have occurred several times consecutively. There is also the matter of when notifications should be sent, since not every failure is urgent. An entire database cluster coming offline should be treated differently from a secondary in a cluster being promoted to primary as a regular part of server maintenance. Determining these nuances takes time.

A solution that was implemented to containerize these workloads uses several technologies within AWS and GitHub. The code is stored in GitHub, so GitHub Actions was used as the pipeline solution, deploying the application directly into ECS (Elastic Container Service) as a container. GitHub Actions is integrated with AWS using OpenID Connect.
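The "notify only after several consecutive failures" rule described above can be sketched in a few lines. This is a minimal illustration, not code from FragranceNet's actual scripts; the class name, the threshold of 3, and the `send` callable are all illustrative assumptions.

```python
# Sketch of a notification policy that alerts only after N consecutive
# failures, so jobs with expected intermittent errors stay quiet.
# Class name, threshold, and send hook are illustrative assumptions.

class FailureNotifier:
    def __init__(self, threshold=3, send=print):
        self.threshold = threshold   # consecutive failures before alerting
        self.streak = 0              # current run of failures
        self.send = send             # e.g. an email- or Slack-sending callable

    def record(self, job_name, succeeded):
        """Record one job run; notify only when the streak hits the threshold."""
        if succeeded:
            self.streak = 0          # any success resets the count
            return False
        self.streak += 1
        if self.streak == self.threshold:
            self.send(f"{job_name} has failed {self.streak} times in a row")
            return True
        return False
```

A job wrapper would call `record` after each run; swapping the `send` callable for an SMTP or Slack client turns this into the email notification described above.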
When connecting different platforms together, be cognizant of the access one platform's account is granted to modify the other. In this scenario, be mindful of situations where a local developer's machine could be compromised and used to deploy applications into production. The benefit of using a pipeline is that review requirements can be enforced, ensuring multiple supervisors sign off on deployments. Consider what level of access the automated user should have and what damage could be done if a worker's device were compromised.

Speaking to the lack of logging above, containers deployed in ECS can be configured to send their console output to CloudWatch Logs, a telemetry offering from AWS. Using CloudWatch Logs as a logging repository also gives the option to create metric filters: specific keywords can trigger a metric filter, which can in turn be used to create an alarm. Alarms can publish to SNS (Simple Notification Service) topics. A topic can either trigger PagerDuty alerts, for critical issues that need to be handled 24/7, or send an email to a Slack channel to remind staff that an outstanding issue should be looked at when the next employee starts their shift. There will be costs associated with vendor-specific offerings, and the journey to improve the system needs to continue.

The previous example used AWS and GitHub technologies, but there are many substitutes for the systems mentioned. Google Cloud has Google Kubernetes Engine, and Azure has Azure Kubernetes Service for container management. Google has Cloud Logging, and Azure has Monitor Logs. Cloudflare also offers log aggregation and monitoring. For designing a pipeline and maintaining code, AWS, Google, and Azure each have coding tools; non-cloud code and pipeline tooling includes Bitbucket, GitLab, and JetBrains. Splunk and Datadog also have options for log monitoring and alerting.
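The keyword-to-alarm chain described above can be sketched without any AWS calls: count log lines matching a keyword, and move an alarm into the ALARM state when the count crosses a threshold. The keyword, threshold, and function names here are illustrative assumptions; in AWS these would be a metric filter pattern and a CloudWatch alarm configuration.

```python
# Sketch of CloudWatch metric filter + alarm behavior: a filter counts
# log lines containing a keyword, and an alarm fires when that count
# crosses a threshold within a period. Names and values are illustrative.

def metric_filter(log_lines, keyword="ERROR"):
    """Count log lines matching the keyword, as a metric filter would."""
    return sum(1 for line in log_lines if keyword in line)

def alarm_state(count, threshold=1):
    """Return 'ALARM' when the metric meets the threshold, else 'OK'."""
    return "ALARM" if count >= threshold else "OK"
```

An alarm entering the ALARM state is what would then publish to an SNS topic, which routes the notification to PagerDuty for critical issues or to a Slack channel's email address for the non-urgent ones.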
The takeaway here is to determine which cloud environment your application is migrating to and potentially use that cloud provider's tools to cut time to market. There is always time later to make improvements. The end goal of cloud migration efforts should be to move application architecture to current-day best practices.