Thursday, June 25, 2020

Observability for Testers : #Session 2

It was exciting to be starting the second session of learning "Observability for Testers" 😎. We were all super excited as we had our AWS instances ready from the previous session. Now that we had an instance, it was ready for setting up the DIMA app, which was one of the main goals for this session.

The DIMA app is a web app built on a microservices architecture. It allows users to upload, display, manipulate and delete images. The stack also includes monitoring and observability tools like Kibana, Grafana, Prometheus and Honeycomb. Now that we know a little bit about the app, let's start to understand what a microservices architecture is before we dive deep into the DIMA app architecture.


I had to give everyone a short introduction to microservices architecture. I picked an easy analogy as an example to introduce it, and a blogpost I had come across was very helpful for explaining the term to someone completely new to the terminology. When I started to learn about microservices, I read a lot of blogs by Martin Fowler.

"Microservices architecture is an architectural style that structures an application as a collection of services. That are highly maintainable, testable, independently deployable and loosely coupled."
With that definition in mind, let's consider the example of a university portal that has different sections for undergraduate study, postgraduate study, international students, jobs and courses, each serving its own purpose. We can think of each of these sections as a simple microservice that serves its own business logic and functionality. When we think of building a new feature related to courses, jobs, or even international students, it becomes easier to reason about each service and build the functionality for that specific service. Of course, this definitely introduces complexity when we test each piece as a single service and test the integration of all these services, because whether it's a monolith, a microservice, or any other type of architecture, for the users it's a single application which they want to use with ease.
A few examples of companies that use microservices are Netflix, Amazon and eBay.


Now that we have a basic understanding of microservices, here's what the structure of the DIMA app looks like. Here are the architecture and infrastructure images, taken from Abby's GitHub repo.


We can see here that there are different services, including the GUI and the database:

  • GUI
  • MongoDB
  • Image Orchestrator
  • Image Holder
  • Image Thumbnail
  • Image flip
  • Image Grayscale
  • Image Size
  • Image Rotator

With all these different services, we first need to find out where the problem is before we can figure out what the problem is. Having monitoring and observability tools in place helps anyone debug an issue.
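With many services in play, a first debugging step is often just finding which service is unhappy. As a rough illustration (the service names and log paths below are hypothetical, not DIMA's actual layout), a quick pass over per-service logs might look like:

```shell
# A first-pass way to locate a failing service: search each service's logs
# for errors. The log paths and service names here are hypothetical.
for svc in gui image-orchestrator image-holder; do
  echo "== $svc =="
  grep -i 'error' "/var/log/dima/$svc.log" 2>/dev/null || echo "no errors logged (or no log file)"
done
```

In practice, this is exactly the gap that tools like Kibana (log search) and tracing fill, without having to ssh into a box and grep by hand.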

After getting a little exposure to the architecture and stack, we followed the instructions to set up the DIMA app stack on our instances, so we could then trigger requests by adding/deleting/manipulating images and explore the resulting logs and traces.

It was really helpful to have an understanding of the app's architecture, as it comes in handy while we are looking at the traces or logs and can see the requests from different services.
Really looking forward to the next session, where we will get to explore more about logging, tracing and metrics.

Monday, February 24, 2020

Testing Tour Stop #10 : Pairing up on Observability with Abby Bangser

I already had my 10th session on my Testing Tour, and it was with the awesome Abby Bangser. I have a very special thing to share about what Abby has helped me with. Back in 2018, when I got interested in public speaking for the first time, I didn't know how or where to start. I came across TechVoices (formerly Speakeasy) and straightaway approached them. I had to pick a mentor from the list, so I picked one and mentioned that I am a huge follower of Angie Jones and was influenced to get into public speaking after watching her talks. I didn't expect anything by saying that, I just wanted to share it. Within a week I got an email from Abby that Angie was going to be my mentor, and I was on top of the world. I can never thank Abby enough for this help.

I was following all the tweets and information Abby would share about observability, which slowly caught my interest to find out what it is. I started to read about it in different articles and went through Honeycomb resources. I have also been following Charity Majors' tweets and blog. I reached out to Abby and she agreed to join my Testing Tour journey; I was super happy, yayyy!!

Abby suggested we have a 15-minute prep call to plan our Testing Tour session, which was an awesome idea. On this prep call I shared what I was interested in learning, and Abby gave me two options, from which I picked building the app Abby used at her Agile Testing Days Germany 2019 workshop. Abby sent me all the details about what I had to prepare before our actual pairing session. I followed all the details and instructions, and there were a lot of DMs where Abby was super, super helpful.

Getting Ready: 

1) Create a free account on Azure

The first step was to sign up for an account on Azure so I could create my own computer/virtual machine.

2) Create a Virtual Machine in Azure

Now that I had my account, my next step was to create a virtual machine in the Azure cloud, one big enough to run the application on. The reason this is important is that the application runs both a lot of microservices (~8 services and a database) as well as all the infrastructure to make it observable (6 full applications and a bunch of tooling). Abby suggested the one mentioned in the document: an Ubuntu machine with at least 16 GiB of RAM.

First I created an SSH key using Cloud Shell Bash and saved it. Then I followed the instructions from the link Abby shared. I added all the details required and selected "Standard D4s v3 (4 vcpus, 16 GiB memory)".

3) Connect to the virtual machine

Now that my virtual machine was created successfully, the next step was to start and connect to it. When I clicked the 'Start' button, the virtual machine started. Then I clicked on 'Connect' and got 3 different options: RDP, SSH and BASTION. I selected SSH as the option to connect, copied the ssh command and pasted it into my Bash terminal. I had a few challenges here with the ssh keys and using them to connect to the VM, but finally, after a few challenges, I got it connected.

4) Installing and running the application

I followed the link with the instructions for installing and running the application. I just wanted to try a couple of commands as pre-prep for the session and didn't want to run all of them, as I wanted to do that during the session. First, I executed the command sudo -i, which means "log in as root", so I can run as many commands as I like until I exit that login; otherwise I would have to add "sudo" in front of each and every command. I really like the way Abby explains things by making them so simple to understand. This was the first time I came across this command. I need admin rights on the virtual machine I created in the cloud so I can install the application. I then tried one command which, at the time, I had no clue what it was doing -

# install docker prerequisites
apt-get update
apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

And the rest I left for our session.


Finally, the wait was over and it was showtime. We started off our session with great planning and a lot of enthusiasm. Abby already knew that I had run a few commands, so the first thing she wanted me to do was check what packages I had already installed. The command Abby gave me for doing this was

apt list --installed
This command gets you the list of installed packages. The output from running it was a huge list, so Abby gave me another command to run.

apt list --installed | wc -l

When the output of apt list --installed is piped into the wc command with the -l option, wc counts the lines, which here gives the number of installed packages. Wow, I was learning about commands and not just typing them in and seeing the results. Abby explained what each command was doing, which was amazing. While doing this, Abby mentioned yet another new term: we were Yak Shaving. I was curious to find out what this is, and I found that -

Yak shaving is what you are doing when you're doing some stupid, fiddly little task that bears no obvious relationship to what you're supposed to be working on, but yet a chain of twelve causal relations links what you're doing to the original meta-task.
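Back to wc -l for a moment: the counting works on any stream of text, not just apt's output. A quick way to convince yourself, using printf as a stand-in for a command's output:

```shell
# wc -l counts newline-terminated lines on its standard input
printf 'pkg-one\npkg-two\npkg-three\n' | wc -l   # prints 3
```

The same pattern counts anything line-shaped: log entries, files from ls, matches from grep.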

We were trying to run different commands and Abby was explaining each of them to me in detail, so I think we were Yak Shaving, which was good, as I was able to understand each command. Next, we wanted to search whether transport-https was already installed, by using the command:

apt list --installed | grep transport-https
Grep is a Linux/Unix command-line tool used to search for a string of characters in a specified file or, as here, in piped input. The grep command is handy when searching through a massive log file. And if we want to run a multiline command, we need to end each line with '\'. Then we started to install Docker, which was part of the instructions for Abby's application -
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io
Here Abby explained to me that the curl command downloads Docker's signing key, and apt-key add tells apt that packages signed with it (and so the repo) can be trusted. The way Abby explained what each command is and does was super helpful and an interesting way to learn, as I was not just copying and pasting the commands but understanding the context of each one. The commands above install Docker. What do you expect when two testers are trying to do this? Of course, we wanted to confirm that Docker was installed, so we checked by running the docker command. The next step was to download Docker Compose -
# the Compose release number in the URL is shown as an example; substitute the version you want
curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

So, time again to test whether it was installed, by navigating to /usr/local/bin and checking if we could see docker-compose there. We used the command -
ls /usr/local/bin
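Both of those checks (is docker on the PATH, did docker-compose land in /usr/local/bin as an executable) can also be scripted rather than eyeballed; a sketch of the idea:

```shell
# is docker on the PATH?
if command -v docker >/dev/null 2>&1; then
  echo "docker found at: $(command -v docker)"
else
  echo "docker not found on PATH"
fi

# is docker-compose present in /usr/local/bin as an executable file?
if [ -x /usr/local/bin/docker-compose ]; then
  echo "docker-compose is installed and executable"
else
  echo "docker-compose missing or not executable"
fi
```

The -x test also catches the easy-to-miss case where the file downloaded fine but chmod +x was forgotten.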
The next step was to create a user, create a folder and pass the ssh key so we could clone the repo into that folder.
# create user
useradd -m -G docker -s /bin/bash olly
echo "olly ALL=(ALL:ALL) NOPASSWD:ALL" >> /etc/sudoers
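One detail worth calling out in the sudoers line is the '>>': it appends to the file instead of overwriting it, which matters a lot for a file like /etc/sudoers. A harmless demonstration using a temporary file (not the real sudoers):

```shell
# a scratch file stands in for /etc/sudoers here
tmp=$(mktemp)
echo "first line" > "$tmp"    # '>' truncates the file before writing
echo "second line" >> "$tmp"  # '>>' appends, keeping what was already there
cat "$tmp"                    # shows both lines, in order
rm -f "$tmp"
```

Had the sudoers line used a single '>', it would have wiped every existing sudo rule on the machine.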

# add ssh pub key
runuser -l olly -c 'mkdir /home/olly/.ssh'
runuser -l olly -c 'echo "" > /home/olly/.ssh/authorized_keys' 

# checkout repo
runuser -l olly -c 'git clone $HOME/observability-workshop'
Finally, we had our repository cloned and the next step was to run the stack by using the following command -
runuser -l olly -c '$HOME/observability-workshop/ 9'

It took quite a while to run the stack, and it was already 11 pm for both of us, as we were both located in London. We finally got the entire application up and running on my VM in the Azure cloud, which was such an awesome feeling. I was able to access the frontend and all the other tools: Grafana, Kibana, Prometheus and Zipkin. Now that I had the application running, I could use and explore it to try different levels of observability.


Key takeaways from this session:
  • Got out of my comfort zone and built my own VM in the Azure cloud, with all the help from Abby.
  • Learned new Linux commands and practically tried each of them.
  • Used all-new tools for the first time, but instead of fearing the unknown, I thoroughly enjoyed trying something new.
  • Planning and pairing are so powerful that so much could be achieved in just a couple of hours.
  • A testing mindset can be applied anywhere and everywhere, like how we used it while running different commands and testing them too.
This is just a short summary of my session; the actual session was even more fun, with a lot more hands-on stuff. It was really an amazing session and I thoroughly enjoyed it. In fact, I didn't stop my learning at this session: I continued learning more commands, playing around with the VM, creating more new VMs (as Abby mentioned to me, it's a great way to build some muscle memory) and using the application to learn more about different logging and metrics tools. It's just the beginning for me, and I hope to keep working on this and learning more about the topic.