Spinnaker is an open-source, multi-cloud continuous delivery platform that helps teams release software changes with high velocity and confidence.
Open-sourced by Netflix and heavily contributed to by Google, it supports all the major cloud providers (Azure, AWS, OpenStack, App Engine, etc.) as well as Kubernetes.
Let's look at the basic concepts in Spinnaker and build a continuous delivery pipeline using Kubernetes Engine, Cloud Source Repositories, Container Builder, Resource Manager, and Spinnaker. After creating a sample application, we will configure these services to automatically build, test, and deploy it. When the application code changes, the change triggers the continuous delivery pipeline to automatically rebuild, retest, and redeploy the new version.
What Does Spinnaker Provide?
The two core capabilities of Spinnaker are application management and application deployment.
Spinnaker's application management features are used to view and manage your cloud resources.
Modern tech organizations operate collections of services, sometimes referred to as "applications" or "microservices". A Spinnaker application models this concept.
Applications, clusters, and server groups are the key concepts Spinnaker uses to describe your services. Load balancers and firewalls describe how services are exposed to users.
- An Application in Spinnaker is a collection of clusters, which in turn are collections of server groups. The application also includes its firewalls and load balancers. An application represents the service to be deployed with Spinnaker, all configuration for that service, and all the infrastructure it will run on. Typically you configure a separate application for each service, though Spinnaker does not enforce this.
- Clusters are logical groupings of server groups in Spinnaker.
- Note: a Spinnaker Cluster does not map to a Kubernetes cluster. It is simply a collection of server groups, irrespective of any Kubernetes clusters in your underlying infrastructure.
- The Server Group is the base resource. It identifies the deployable artifact (a VM image, Docker image, or source location) and basic configuration settings such as the number of instances, metadata, and autoscaling policies. It can optionally be associated with a load balancer and a firewall. When deployed, a server group is the collection of instances of the running software (VM instances, Kubernetes pods).
- A Load Balancer is associated with an ingress protocol and a port range. It balances traffic among instances in its server groups. You can optionally enable health checks for a load balancer, with the flexibility to define the health criteria and the health check endpoint.
- A Firewall defines network traffic access. It is a set of firewall rules, each defined by an IP range (CIDR), a communication protocol (e.g., TCP), and a port range.
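The hierarchy above can be sketched in a few lines of code. This is an illustrative model only, not Spinnaker's actual API; all class and field names here are invented for the example:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServerGroup:
    artifact: str        # deployable artifact, e.g. a Docker or VM image
    instance_count: int  # running instances (VM instances, Kubernetes pods)

@dataclass
class Cluster:
    # A logical grouping of server groups -- not a Kubernetes cluster.
    name: str
    server_groups: List[ServerGroup] = field(default_factory=list)

@dataclass
class Application:
    # The top-level unit: clusters plus load balancers and firewalls.
    name: str
    clusters: List[Cluster] = field(default_factory=list)
    load_balancers: List[str] = field(default_factory=list)
    firewalls: List[str] = field(default_factory=list)

    def total_instances(self) -> int:
        # Sum instances across every server group in every cluster.
        return sum(sg.instance_count
                   for c in self.clusters
                   for sg in c.server_groups)

# Hypothetical application: one cluster running two versions side by side.
app = Application(
    name="frontend",
    clusters=[Cluster("frontend-prod", [
        ServerGroup("gcr.io/example/frontend:v1", 3),
        ServerGroup("gcr.io/example/frontend:v2", 2),
    ])],
    load_balancers=["frontend-lb"],
)
print(app.total_instances())  # 5
```

Note how the two server groups in one cluster naturally model two versions of the same service running at once, which is exactly what deployment strategies like red/black rely on.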
- The Pipeline is Spinnaker's key deployment-management construct. It consists of a sequence of actions known as stages. You can pass parameters from stage to stage along the pipeline.
- A pipeline can be started manually, or configured to be triggered automatically by an event, such as a Jenkins job completing, a new Docker image appearing in your registry, a cron schedule, or a stage in another pipeline.
- A pipeline can be configured to emit notifications to interested parties, by email or SMS, at various points during its execution (such as on pipeline start, completion, or failure).
- A Stage is an atomic building block of a pipeline, describing an action the pipeline will perform. Stages can be sequenced in any order, though some sequences are more common than others. Spinnaker provides a number of built-in stages such as Deploy, Resize, Disable, and Manual Judgment.
- Spinnaker supports all the common cloud-native deployment strategies, including red/black (a.k.a. blue/green), rolling red/black, and canary deployments.
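Spinnaker stores each pipeline as a JSON document. The sketch below shows roughly what a triggered, multi-stage pipeline with notifications looks like; the top-level keys (`triggers`, `stages`, `notifications`, `refId`, `requisiteStageRefIds`) follow Spinnaker's pipeline JSON format as commonly documented, while the registry, repository, and address values are made up for illustration:

```python
import json

# Illustrative pipeline definition: a Docker trigger, a deploy stage,
# a manual-judgment gate, then a production deploy. Values are hypothetical.
pipeline = {
    "application": "sampleapp",
    "name": "Deploy on new image",
    "triggers": [{
        "type": "docker",                  # fires when a new tag is pushed
        "registry": "gcr.io",
        "repository": "example/sampleapp",
        "enabled": True,
    }],
    "stages": [
        {"refId": "1", "type": "deploy", "name": "Deploy to staging",
         "requisiteStageRefIds": []},
        {"refId": "2", "type": "manualJudgment", "name": "Promote to prod?",
         "requisiteStageRefIds": ["1"]},    # waits for stage 1
        {"refId": "3", "type": "deploy", "name": "Deploy to production",
         "requisiteStageRefIds": ["2"]},    # waits for the judgment
    ],
    "notifications": [{
        "type": "email",
        "address": "team@example.com",
        "when": ["pipeline.starting", "pipeline.complete", "pipeline.failed"],
    }],
}

# The refId graph, not list order, defines the execution sequence.
order = [s["name"] for s in pipeline["stages"]]
print(json.dumps(order))
```

The `requisiteStageRefIds` lists are what let stages fan out and join: a stage runs once all the stages it names have finished, so arbitrary sequences (and parallel branches) can be expressed.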
What is Spinnaker Made Of?
Spinnaker is made up of a number of independent microservices:
- Deck is the browser-based UI.
- Gate is the API gateway. The Spinnaker UI and all API callers communicate with Spinnaker through Gate.
- Orca is the orchestration engine. It handles all ad-hoc operations and pipelines.
- Clouddriver is responsible for all mutating calls to the cloud providers and for indexing and caching all deployed resources.
- Front50 holds the metadata of applications, pipelines, projects, and notifications.
- Rosco is the bakery. It produces machine images (for example GCE images, AWS AMIs, Azure VM images). It currently wraps Packer, but can be extended to support additional mechanisms for producing images.
- Igor triggers pipelines from continuous integration jobs in systems like Jenkins and Travis CI, and allows Jenkins/Travis stages to be used in pipelines.
- Echo is Spinnaker's eventing bus. It supports sending notifications (e.g., Slack, email, Hipchat, SMS) and acts on incoming webhooks from services such as GitHub.
- Fiat is Spinnaker's authorization service. It is queried for a user's access permissions for accounts, applications, and service accounts.
- Kayenta provides automated canary analysis.
- Halyard is Spinnaker's configuration service, which manages the lifecycle of each of the above services. It interacts with these services only during Spinnaker startup, updates, and rollbacks.
By default, Spinnaker binds a port for each of the microservices above. For us, the UI (Deck) will be exposed on port 9000.
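Because every API caller goes through Gate, you can script against Spinnaker with plain HTTP. A minimal sketch, assuming a local install with Gate on 8084 (its usual default; `/applications` is Gate's application-listing route, and both the host and the lack of authentication here are assumptions for the example):

```python
import json
import urllib.request

# Assumed Gate address for a local, unauthenticated install.
GATE_URL = "http://localhost:8084"

def applications_endpoint(gate_url: str) -> str:
    # All API traffic flows through Gate, so listing applications
    # is a single GET against the gateway.
    return gate_url.rstrip("/") + "/applications"

def list_applications(gate_url: str = GATE_URL):
    # Returns the parsed JSON list of applications known to Spinnaker.
    with urllib.request.urlopen(applications_endpoint(gate_url)) as resp:
        return json.load(resp)

print(applications_endpoint(GATE_URL))  # http://localhost:8084/applications
```

If Fiat is enabled, calls like this additionally need an authenticated session, since Gate consults Fiat for the caller's permissions before serving the request.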