A few days ago I came across the following question from one of my clients:
"As a part of our CI/CD process we use AWS AMI's to provision servers dynamically to run our tests. However we realize that some of those AMI's are larger than we need, causing unnecessary expenses and increasing the provisioning time. How can we reduce the existent AMI's sizes?"
Larger AMI volume sizes
Production AMIs used in several production processes
Creating new AMIs from scratch is not an option
Create an EC2 instance from the AMI (the target instance)
Create an EC2 Linux instance (the worker instance)
Stop the target...
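The first steps above can be sketched with the AWS CLI; the AMI ID, instance ID, and instance type below are placeholders, not values from the original post:

```shell
# Launch the target instance from the oversized AMI
# (ami-0123456789abcdef0 and t3.micro are example values).
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=target}]'

# Launch a plain Linux worker instance the same way, then stop the
# target so its root volume can be detached and worked on safely:
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
```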
By the end of this post we will have a Grafana dashboard with all the metrics related to our servers.
Prometheus is a tool for storing time-series data and managing alerts. It works as a pull-based system: the Prometheus server periodically fetches the metric values from the target servers. To expose the metrics on each server we will use node_exporter, which basically exposes the server metrics on port 9100 (by default). Grafana is a tool for querying, visualizing, and understanding your metrics; we will use it to create our dashboard.
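As a sketch of the pull model described above, a minimal Prometheus scrape configuration for two node_exporter targets could look like this (the hostnames are placeholders):

```yaml
# prometheus.yml (fragment)
scrape_configs:
  - job_name: 'node'
    scrape_interval: 15s
    static_configs:
      - targets: ['server1:9100', 'server2:9100']  # node_exporter's default port
```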
Understand your Infrastructure
The first thing we need...
After receiving several questions about my previous post, Meeting the TFS Aggregator, I decided to go deeper into this topic with a new post.
The idea is to present some common use cases and their implementations for those who are getting started.
Before reading the post I highly recommend reading the official documentation to understand the syntax basics.
Let's implement the following use cases:
Update PBI state to "Committed" when any child gets moved to "In Progress"
Update PBI state to "Done" when all childrens get moved to "Done" or "Removed"
Set a "calculated" field in a Task
Create new work items and links
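As an illustration of the first use case, a TFS Aggregator 2.x policy rule might look roughly like the following sketch. The rule body is C#; the rule name, state values, and exact API calls are my assumptions, so check them against the official documentation before using them:

```xml
<!-- Illustrative sketch only: move the parent PBI to "Committed"
     when a child Task transitions to "In Progress". -->
<rule name="PbiToCommitted" appliesTo="Task">
  <![CDATA[
  if (self.Parent != null &&
      (string)self["System.State"] == "In Progress")
  {
      self.Parent.TransitionToState("Committed", "A child task is in progress");
  }
  ]]>
</rule>
```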
A couple of months ago I started writing a PowerShell module to use the TFS REST API from PowerShell scripts. You can find and download it from GitHub.
I personally use it in builds, releases, and integration/automation tools. The module helps me create much cleaner, shorter, and simpler scripts to interact with TFS/VSTS. Most of the functions work for TFS 2015.3 and above (including VSTS), but a couple of functions (locking a Git branch, for example) were introduced in later versions.
The module is thoroughly documented and each function has a detailed explanation about...
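Under the hood the module wraps plain REST calls. For comparison, the same kind of request can be issued directly with curl; the collection URL is a placeholder, the PAT comes from an environment variable, and the endpoint shown is the standard team-projects list:

```shell
# List team projects via the TFS REST API using a personal access token.
# "https://my-tfs:8080/tfs/DefaultCollection" is a placeholder collection URL.
curl -s -u ":$TFS_PAT" \
  "https://my-tfs:8080/tfs/DefaultCollection/_apis/projects?api-version=2.0"
```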
A few days ago I came across the following question: "how to execute Git commands from a Java application?". After a little research I found a library called JGit, which met my needs. In this post I will explain how to use this library to execute the basic commands.
To import the library, add the following dependency to your pom file:
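The dependency itself is not shown in this excerpt. For reference, the JGit artifact on Maven Central is `org.eclipse.jgit:org.eclipse.jgit`; the version property below is a placeholder, so substitute a current release:

```xml
<dependency>
    <groupId>org.eclipse.jgit</groupId>
    <artifactId>org.eclipse.jgit</artifactId>
    <version>${jgit.version}</version> <!-- placeholder: use a current version -->
</dependency>
```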
Then you will be able to import and use the JGit classes.
To clone a repository you can use the following method:
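The method itself is cut off in this excerpt. A minimal clone helper using JGit's fluent `CloneCommand` API might look like the sketch below; the class name, URL, and path are placeholders of my own:

```java
import java.io.File;
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.api.errors.GitAPIException;

public class GitHelper {
    // Clones the repository at remoteUrl into the localPath directory.
    public static void cloneRepository(String remoteUrl, String localPath)
            throws GitAPIException {
        try (Git git = Git.cloneRepository()
                .setURI(remoteUrl)
                .setDirectory(new File(localPath))
                .call()) {
            // The Git handle is released automatically by try-with-resources.
        }
    }
}
```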
Last Wednesday, 09/01/18, I gave a talk about the new features in TFS 2018 at Microsoft Raanana as part of the ALM User Group.
In this talk we reviewed the new features and improvements introduced in TFS 2018, upgrade considerations and general Q&A.
Many thanks to all for coming and I hope you enjoyed it as much as I did. If you have any questions, you know how to contact me.
See you next time,
Note: This is the sixth part in a series of posts Getting Started with Google Container Builder
Add build notifications using Cloud Functions
In this part we will use Cloud Functions to configure notifications for failed builds. In this example we will send the email notifications through Mailgun.
• Create a Google Storage bucket to store the function source code
gsutil mb gs://
• Create a folder to store the function files
• Create the function files below:
Note: This is the fifth part in a series of posts Getting Started with Google Container Builder
Create a custom builder and add a build step to run the tests
In this part we will create a custom builder and use it to add a build step to test the application. For this purpose we will create a custom build step to run npm using the jasmine module.
• Open the Cloud Shell and create a folder to store the custom builder source code
• Create a Dockerfile to use the npm builder as the base image, install the...
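The Dockerfile is cut off in this excerpt. Assuming the goal stated above (the npm builder as the base image with the jasmine module available), a sketch could be:

```dockerfile
# Custom builder: npm base image with the jasmine test runner preinstalled.
FROM gcr.io/cloud-builders/npm
RUN npm install -g jasmine
ENTRYPOINT ["npm"]
```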
Note: This is the fourth part in a series of posts Getting Started with Google Container Builder
Configure CD to Kubernetes
In this part we will configure the build to deploy the application to our Kubernetes cluster after each commit (CD).
• Add a new kubectl step in the cloudbuild.yaml file to replace the image used by Kubernetes to run the application:
- name: 'gcr.io/cloud-builders/npm'
- name: 'gcr.io/cloud-builders/docker'
- name: 'gcr.io/cloud-builders/kubectl'
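Filled out, those three steps might look like the following cloudbuild.yaml sketch. The deployment name, container name, and cluster settings are placeholders of my own; note that the kubectl builder reads the target cluster from the `CLOUDSDK_*` environment variables, and the image is pushed explicitly before the deploy step so the cluster can pull it:

```yaml
steps:
- name: 'gcr.io/cloud-builders/npm'
  args: ['test']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app:$COMMIT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/app:$COMMIT_SHA']
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/app', 'app=gcr.io/$PROJECT_ID/app:$COMMIT_SHA']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=demo-cluster'
```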
• Change the application message...
Note: This is the third part in a series of posts Getting Started with Google Container Builder
Configure Google Kubernetes Engine and deploy the application
In this part we will configure a simple Kubernetes cluster and use it to deploy the application.
Variables that will be used in this demo:
• Open the shell console and set the following variables:
gcloud config set project
gcloud config set compute/zone
gcloud config set compute/region
• Confirm that the variables were set
gcloud config list
• Create a Kubernetes cluster
gcloud container clusters create --num-nodes 3
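With example values substituted for the elided arguments (the project, zone, region, and cluster name below are placeholders, not values from the post), the whole sequence looks like:

```shell
# Example values only -- substitute your own project, zone, and region.
gcloud config set project my-demo-project
gcloud config set compute/zone us-central1-a
gcloud config set compute/region us-central1

# Confirm that the variables were set
gcloud config list

# Create a three-node cluster and fetch credentials so kubectl can reach it
gcloud container clusters create demo-cluster --num-nodes 3
gcloud container clusters get-credentials demo-cluster
```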