
Hazelcast is a robust in-memory data grid that enables distributed data processing and caching across a cluster of nodes. It offers high availability, fault tolerance, and low-latency data access, making it an excellent choice for applications that require real-time processing. Hazelcast supports various distributed data structures such as maps, queues, sets, and lists, making it versatile for different use cases.

In this blog post, we will walk you through deploying a Hazelcast cluster using Docker Compose and setting up Hazelcast Management Center (Mancenter) to monitor and manage your cluster.

Deploying Hazelcast with Docker Compose

Deploying Hazelcast using Docker Compose simplifies the process of setting up a distributed cluster. Below is the Docker Compose configuration that you can use to deploy Hazelcast and Management Center.

Step 1: Prepare the docker-compose.yml File

Here’s the docker-compose.yml file that you should use:

version: '3.8'

services:
  hazelcast:
    image: hazelcast/hazelcast:latest
    user: root
    container_name: hazelcast-node
    restart: unless-stopped
    environment:
      - HZ_CLUSTERNAME=my-cluster
      - JAVA_OPTS=-Dhazelcast.local.publicAddress=<YOUR_VM_IP>:5701
    ports:
      - "5701:5701"

  mancenter:
    image: hazelcast/management-center:latest
    container_name: hazelcast-mancenter
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      - MC_DEFAULT_CLUSTER=my-cluster
      - MC_DEFAULT_CLUSTER_MEMBERS=hazelcast-node:5701
    depends_on:
      - hazelcast

networks:
  default:
    driver: bridge

Replace <YOUR_VM_IP> with the actual IP address of your VM. This IP address should be the one assigned to your network interface.
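If you are unsure which address to use, `hostname -I` on Linux prints the candidates. As a portable alternative, the small Python sketch below asks the kernel which local address it would route through; this is a best-effort guess (it assumes the machine has a default route, and the UDP "connect" never actually sends a packet):

```python
import socket

def primary_ipv4():
    """Best-effort guess at this machine's primary IPv4 address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # A UDP connect sends no packets; it only makes the kernel pick
        # the local interface it would route through for this destination.
        s.connect(("203.0.113.1", 9))  # TEST-NET-3 address, never contacted
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # fallback when no route is available
    finally:
        s.close()

print(primary_ipv4())
```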

Step 2: Deploy the Hazelcast Cluster and Management Center

To deploy the Hazelcast cluster along with Management Center, follow these steps:

  1. Create the docker-compose.yml file:
  • Save the above configuration into a file named docker-compose.yml in your working directory.
  2. Run the Docker Compose command:
  • Open your terminal, navigate to the directory where the docker-compose.yml file is located, and run the following command:

docker-compose up -d

  This command will start both the Hazelcast node and the Management Center in detached mode. The -d flag ensures that the containers run in the background.
  3. Verify that the containers are running:
  • To check if both services are up and running, use the following command:

docker ps

  You should see both the hazelcast-node and hazelcast-mancenter containers listed.
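Beyond docker ps, you can also sanity-check that the Hazelcast member is actually accepting connections on port 5701. A minimal sketch (the host and port below are placeholders; substitute your VM's IP):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the Hazelcast member port (replace with your VM's IP).
print(port_open("127.0.0.1", 5701))
```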

Accessing Hazelcast Management Center

Once the containers are running, you can access Hazelcast Management Center to monitor and manage your cluster.

  1. Open your web browser and navigate to http://<YOUR_VM_IP>:8080. Replace <YOUR_VM_IP> with your actual VM IP address.
  2. Log in to Management Center:
  • You will be greeted by the Management Center login screen. Depending on the Management Center version, you may be prompted to create an admin user on first launch, or you can enable dev mode to skip authentication for local testing.
  3. Monitor your cluster:
  • After logging in, you will see the dashboard, where you can monitor the health and performance of your Hazelcast cluster. You can view metrics and logs, and perform administrative tasks such as managing data structures.

By leveraging the power of Hazelcast and the simplicity of Docker Compose, you can quickly spin up a robust in-memory data grid that meets the demands of modern, data-intensive applications.

Monitoring and observability are crucial components of any modern application stack. Grafana and Prometheus are two of the most popular tools used in tandem to achieve this. Prometheus is a powerful monitoring and alerting system, while Grafana provides a flexible and beautiful way to visualize the data collected by Prometheus.

In this blog post, we will walk through the steps to deploy Grafana and Prometheus using Docker Compose, enabling you to quickly set up a monitoring stack for your applications.

Prerequisites

Before we start, make sure you have the following installed:

  1. Docker: Docker should be installed on your machine. You can follow the installation guide for your operating system in the official Docker documentation.
  2. Docker Compose: Docker Compose is a tool for defining and running multi-container Docker applications. Install it by following the official Docker Compose installation guide.

Step 1: Set Up a Project Directory

First, let's create a directory to store our Docker Compose configuration and related files. Open your terminal and run:

mkdir grafana-prometheus
cd grafana-prometheus

Step 2: Create the Docker Compose File

In the project directory, create a docker-compose.yml file. This file will define the services for Prometheus and Grafana:

touch docker-compose.yml

Open the file in your favorite text editor and add the following content:

version: '3.7'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus:/etc/prometheus/
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-storage:/var/lib/grafana
    depends_on:
      - prometheus

volumes:
  grafana-storage:

This docker-compose.yml file defines two services:

  1. Prometheus:
  • Uses the prom/prometheus Docker image.
  • Exposes port 9090, which is the default port for Prometheus.
  • Mounts a volume for the Prometheus configuration file.
  2. Grafana:
  • Uses the grafana/grafana Docker image.
  • Exposes port 3000, which is the default port for Grafana.
  • Creates a named volume grafana-storage to persist Grafana data.

Step 3: Configure Prometheus

Now, let's configure Prometheus. Create a directory named prometheus inside your project directory:

mkdir prometheus

Inside the prometheus directory, create a prometheus.yml file:

touch prometheus/prometheus.yml

Add the following content to prometheus.yml:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

This configuration tells Prometheus to scrape metrics from itself every 15 seconds.
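To monitor more than Prometheus itself, add further entries under scrape_configs. For example, a hypothetical node_exporter could be scraped like this (the job name and target are assumptions; host.docker.internal resolves on Docker Desktop, while on Linux you would use the host's IP instead):

```yaml
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node'
    static_configs:
      - targets: ['host.docker.internal:9100']  # placeholder node_exporter target
```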

Step 4: Start the Services

With everything configured, you can now start the Prometheus and Grafana services using Docker Compose. Run the following command in your terminal:

docker-compose up -d

This command will download the necessary Docker images and start the containers in the background.

Step 5: Access the Prometheus and Grafana Dashboards

Once the services are up and running, you can access the dashboards in your web browser:

  • Prometheus: Go to http://localhost:9090
  • Grafana: Go to http://localhost:3000

Step 6: Add Prometheus as a Data Source in Grafana

To visualize Prometheus metrics in Grafana, you need to add Prometheus as a data source:

  1. Log in to Grafana using the default credentials (admin/admin). You will be prompted to change the password after the first login.
  2. Click on the gear icon (⚙️) on the left sidebar to access the Configuration menu.
  3. Click on "Data Sources" and then "Add data source."
  4. Select "Prometheus" from the list of available data sources.
  5. In the "URL" field, enter http://prometheus:9090 (this works because Docker Compose sets up networking between the containers).
  6. Click "Save & Test" to verify the connection.

Step 7: Create a Dashboard in Grafana

Now that Prometheus is set up as a data source in Grafana, you can create your first dashboard:

  1. Click on the "+" icon on the left sidebar and select "Dashboard."
  2. Click on "Add New Panel."
  3. In the "Metrics" tab, select "Prometheus" as the data source.
  4. Enter a Prometheus query in the "Query" field (e.g., up to see the status of monitored targets).
  5. Customize the visualization and click "Apply" to save the panel.

You can add multiple panels to a single dashboard, each representing different metrics from Prometheus.
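A few starter queries worth trying in the panel editor (all three use metrics Prometheus exports about itself):

```promql
# Is each scrape target up? (1 = up, 0 = down)
up

# Per-second rate of HTTP requests handled by Prometheus over the last 5 minutes
rate(prometheus_http_requests_total[5m])

# Resident memory of the Prometheus process, in bytes
process_resident_memory_bytes
```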

Step 8: Persisting Data (Optional)

As configured above, Prometheus does not persist its time-series data across container restarts. To make data persistent, ensure that the volumes are correctly configured in your docker-compose.yml file. In the example provided, Grafana’s data is already persisted using the grafana-storage volume.

For Prometheus, you can add a volume to store the time-series data:

  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus:/etc/prometheus/
      - prometheus-data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - "9090:9090"

volumes:
  prometheus-data:

Conclusion

Deploying Grafana and Prometheus using Docker Compose is a straightforward way to set up a powerful monitoring and visualization stack. With just a few configuration files, you can have a fully functional environment that helps you monitor and analyze the performance of your applications.

This setup is perfect for development and testing environments. For production use, you might want to explore more advanced configurations, such as scaling, security, and high availability.
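As one small example of production hardening, Grafana's default admin credentials can be overridden through environment variables in the Compose file (the values below are placeholders; in practice, inject the password from a secret rather than committing it):

```yaml
  grafana:
    image: grafana/grafana:latest
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=change-me  # placeholder: use a secret in practice
```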

If you found this guide helpful, or if you have any questions or suggestions, feel free to leave a comment below. Happy monitoring!

In the fast-paced world of software development, efficiency, scalability, and consistency are crucial. Docker, an open-source platform, has become a game-changer in this domain by addressing these needs with a containerization approach. If you're new to Docker or looking to understand why it's such a powerful tool, this blog post is for you.

What is Docker?

Docker is a platform that allows developers to automate the deployment of applications inside lightweight, portable containers. These containers include everything the application needs to run: code, runtime, system tools, libraries, and settings. By encapsulating the application in a container, Docker ensures that it can run consistently across different computing environments.

Think of Docker containers as a standard unit of software that packages up code and all its dependencies, so the application runs quickly and reliably from one computing environment to another. Whether you're running your application on a developer’s laptop, a testing server, or a production environment, Docker containers ensure that the environment remains consistent.

How Docker Works

Docker operates on the principle of containerization, which differs from traditional virtualization. Here's how it works:

  1. Docker Engine: At the heart of Docker is the Docker Engine, which is the runtime that builds and runs Docker containers. The engine operates using a client-server architecture, where the Docker client talks to the Docker daemon (server) to build, run, and manage containers.
  2. Containers vs. Virtual Machines (VMs): Unlike virtual machines, which include an entire operating system along with the application, Docker containers share the host system's OS kernel but operate in isolated environments. This makes containers much lighter and faster to start up compared to VMs.
  3. Docker Images: A Docker container is created from a Docker image. An image is a lightweight, standalone, and executable package that includes everything needed to run a piece of software. Images are typically built from a Dockerfile, a simple script that contains a set of instructions on how to build a particular image.
  4. Layered File System: Docker images use a layered file system, meaning that each image is built as a series of layers, with each layer representing a step in the Dockerfile. These layers are reusable, making images efficient in terms of storage.
  5. Networking: Docker containers can communicate with each other and with other services via Docker’s networking features, enabling complex application architectures.
  6. Orchestration: For managing large numbers of containers, Docker can work with orchestration tools like Kubernetes. These tools help automate the deployment, scaling, and management of containerized applications across clusters of machines.
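To make the image-and-layers idea concrete, here is a minimal, hypothetical Dockerfile for a Python application (the file names are assumptions). Each COPY and RUN instruction produces a cached layer, which is why dependencies are copied and installed before the application code:

```dockerfile
# Base layer: a slim Python runtime
FROM python:3.12-slim

WORKDIR /app

# Dependency layer: copied first so it stays cached while the code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application layer
COPY . .

CMD ["python", "app.py"]
```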

Why You Should Use Docker

Now that you understand what Docker is and how it works, let’s dive into why you should consider using it:

  1. Portability: Docker containers can run on any machine that supports Docker, making it easy to move applications from one environment to another without worrying about compatibility issues. This is particularly useful in CI/CD pipelines where applications need to be tested and deployed across different environments.
  2. Consistency and Isolation: Each Docker container runs in its own isolated environment, ensuring that it won’t interfere with other applications or services on the same host. This isolation also means that the behavior of your application in development, testing, and production will remain consistent, eliminating the “it works on my machine” problem.
  3. Efficient Resource Utilization: Since Docker containers share the host OS kernel, they use fewer resources than traditional VMs. This allows you to run more containers on the same hardware, making better use of your infrastructure.
  4. Rapid Deployment: Docker enables faster software delivery by allowing you to quickly build, test, and deploy applications. Containers can be started and stopped in seconds, enabling rapid iteration and scaling.
  5. Version Control for Your Environment: With Docker, you can version control not just your code, but also your infrastructure. Each change in the environment (like updating a library or changing a configuration) can be tracked and managed just like code.
  6. Microservices Architecture: Docker is ideal for microservices, where applications are broken down into smaller, independently deployable services. Each microservice can run in its own container, making it easier to develop, test, and scale each part of your application.
  7. Community and Ecosystem: Docker has a large and active community, with a vast ecosystem of tools, extensions, and pre-built images available on Docker Hub. This makes it easier to get started and find solutions to common challenges.

Conclusion

Docker revolutionizes the way we build, ship, and run applications. By containerizing applications, Docker provides a consistent, portable, and efficient environment that streamlines the development process and enhances productivity. Whether you're a developer, a system administrator, or an IT professional, learning Docker can significantly improve your workflow and the performance of your applications.

If you haven’t started using Docker yet, now is the time to explore its potential and see how it can transform the way you work. Happy containerizing!