Retrospectives That Work: A Guide to Reflect, Learn, and Improve

Introduction

Retrospectives are essential for continuous improvement in project management; they go beyond mere meetings. By reflecting on both successes and setbacks, teams can enhance collaboration, tackle challenges, and refine processes for future milestones. This blog post explores the key aspects of facilitating effective retrospectives, providing actionable techniques and real-world examples to improve your project outcomes.


1. The Value of Retrospectives

Retrospectives provide a structured opportunity for teams to reflect on a project's progress. They can take place not only at the end of a project, but also after key milestones to celebrate successes and identify areas for improvement.

Why Retrospectives Matter

  • Encourage Team Building: By sharing diverse perspectives, teams develop mutual respect and understanding.
  • Improve Collaboration: Honest discussions help uncover process inefficiencies and foster better coordination.
  • Promote Positive Change: Actionable feedback leads to refined procedures and heightened performance.

Example

During a software development project, a mid-point retrospective revealed that developers were struggling with unclear requirements. The team decided to implement a dedicated "Requirements Clarification" meeting before each sprint. This adjustment significantly reduced rework in the following sprints.


2. Encouraging Participation

A successful retrospective needs active participation from all team members. However, creating an environment where everyone feels comfortable to contribute can be challenging.

Techniques to Boost Engagement

  1. Create a Safe Environment:
    Start by implementing a "What’s said here stays here" policy. This ensures that participants feel the retrospective is a judgment-free zone.
    Example: A project manager at a marketing agency began a retrospective by stating, “This is a safe space. No stakeholders or clients will hear this conversation, so let’s be honest about what worked and what didn’t.”

  2. Model the Behavior:
    Lead by example. Share your own successes and challenges to set the tone.
    Example: If you made a scheduling mistake that delayed deliverables, admit it right away. This will encourage others to share their experiences too.

  3. Ask Structured Questions:
    Use prompts like "What should we start, stop, and continue?" to guide the discussion.
    Example: A logistics team used this approach to identify that regular inventory audits (“start”) and detailed dispatch schedules (“continue”) were working, but their ad-hoc meeting system needed to be replaced (“stop”).

  4. Review the Project Timeline:
    Walk through key project phases to jog memories and elicit meaningful insights.
    Example: A construction team discussed setbacks during the foundation stage that had ripple effects throughout the project.


3. Encouraging Accountability

Accountability is essential for retrospectives to drive real change. However, it’s important to distinguish accountability from blame.

Key Approaches

  1. Discuss Specific Challenges:
    Prepare a list of challenges beforehand to initiate focused discussions.
    Example: In a restaurant redesign project, the team addressed feedback from kitchen staff feeling excluded. This led to a commitment to involve all departments in future planning meetings.

  2. Turn Complaints Into SMART Action Items:
    Transform negative feedback into actionable steps that are Specific, Measurable, Achievable, Relevant, and Time-bound.
    Example: A team dissatisfied with late vendor deliveries proposed a SMART action item to schedule weekly vendor check-ins to preempt delays.

  3. Identify Team Contributions to Challenges:
    Encourage the team to reflect on their role in setbacks.
    Example: During a product launch, the marketing team realized they had overlooked key customer demographics due to incomplete research. This insight led to better planning in the next project.

  4. Keep Criticism Constructive:
    Focus on process improvements rather than individual mistakes.
    Example: Instead of blaming a designer for missed deadlines, a team discussed how better timeline estimates and resource allocation could prevent similar issues.


4. Addressing Negativity

Negativity can derail a retrospective if it is not addressed carefully. Creating a psychologically safe environment is essential for fostering constructive discussions.

Strategies to Mitigate Negativity

  1. Acknowledge Emotions:
    Recognize frustrations but steer the conversation toward solutions.
    Example: “I understand that the delayed software update caused stress. Let’s focus on how we can avoid this in the future.”

  2. Reframe Challenges:
    Turn negative experiences into opportunities for growth.
    Example: Rather than focusing on past marketing failures, a team shifted the conversation to explore new outreach strategies.

  3. Use Neutral Language:
    Avoid phrases that assign blame. For example, say “We missed an opportunity” rather than “You missed this.”
    Example: A project manager facilitated a discussion about missed milestones by saying, “Let’s discuss how our processes can adapt to unexpected hurdles.”

  4. Focus on the Big Picture:
    Emphasize collective learning and improvement.
    Example: During a healthcare system rollout, a retrospective highlighted resource allocation as a team-wide issue. The discussion led to implementing cross-functional resource planning.


Conclusion

Retrospectives are valuable tools for promoting continuous improvement, enhancing collaboration, and fostering accountability. By establishing a safe environment, encouraging participation, emphasizing accountability, and addressing negativity, project managers can turn retrospectives into crucial moments for team and project development.


Sponsor: Elevate your business with Arise Informatics Solutions. Empowering you with tailored strategies, cutting-edge technologies, and trusted partnerships to drive innovation and growth. Partner with Arise to shape a smarter tomorrow! Contact Arise today.

Mastering Time Estimation: A Guide to the Three-Point Estimating Technique

Introduction

Accurate time estimation is fundamental to effective project management. Whether you're launching a product or organizing a training program, understanding how long each task will take is crucial for creating realistic schedules and meeting deadlines. One effective method for this is the Three-Point Estimating Technique, which takes into account the best-case, most likely, and worst-case scenarios for each task.

In this blog, we will explore the three-point estimating process, its benefits, and provide practical examples to help you implement it effectively in your projects.


What is Three-Point Estimating?

Three-point estimating is a forecasting method that combines data from three scenarios: optimistic, most likely, and pessimistic. This approach helps project managers create a more realistic time estimate by assessing potential risks and uncertainties. It allows them to avoid both underestimating and overestimating project timelines.

  • Optimistic Estimate: This scenario assumes the best possible outcome, where everything goes according to plan.
  • Most Likely Estimate: This reflects a realistic expectation based on past experiences and typical circumstances.
  • Pessimistic Estimate: This considers the worst-case scenario, where multiple issues may arise.

The Process of Three-Point Estimating

Step 1: Break Down the Task
Start by defining the necessary steps to complete the task. For instance, designing a product page may involve creating wireframes, developing the layout, and integrating functionality.

Step 2: Gather Input from Experts
Engage with subject matter experts to understand the variables involved. Ask about past experiences, assumptions, and potential risks. Use these insights to categorize estimates into optimistic, most likely, and pessimistic scenarios.

Step 3: Define Conditions for Each Estimate
Document the assumptions underlying each estimate. For example, if you are training staff on a new tool:

  • Optimistic: Materials and equipment are delivered on time; training runs smoothly.
  • Most Likely: Minor delays or adjustments occur; some staff require additional sessions.
  • Pessimistic: Significant disruptions like missing equipment or rescheduling due to absences.

Step 4: Calculate the Final Estimate
Using the gathered data, calculate an average or weighted estimate to guide your project planning. A commonly used formula is:

Final Estimate = (Optimistic + 4 × Most Likely + Pessimistic) / 6


Practical Example: Sauce and Spoon Tablet Training Project

Task: Train staff to use tablets.

  • Optimistic:

    • Vendor is well-prepared.
    • Training materials and equipment arrive on time.
    • Training is completed as scheduled.
    • Estimated time: 4 hours.
  • Most Likely:

    • Minor delays in setup or delivery of materials.
    • Some staff require follow-up sessions.
    • Equipment malfunctions occur but are resolved quickly.
    • Estimated time: 6 hours.
  • Pessimistic:

    • Vendor cancels, requiring a replacement.
    • Equipment is delayed, requiring rescheduling.
    • Many staff are absent, prolonging the process.
    • Estimated time: 6 days.
  • Final Estimate: Using the formula:

Final Estimate = (4 + 4 × 6 + 144) / 6 ≈ 28.67 hours (the pessimistic estimate of 6 days is expressed here as 144 hours)
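
To make the arithmetic easy to reuse, here is a minimal sketch of the same weighted (PERT-style) calculation in C#. The method name and the hour values simply mirror the hypothetical training example above; they are illustrative and not part of any library.

    using System;

    class ThreePointEstimate
    {
        // Weighted (PERT-style) three-point estimate:
        // (optimistic + 4 * mostLikely + pessimistic) / 6
        static double Estimate(double optimistic, double mostLikely, double pessimistic)
        {
            return (optimistic + 4 * mostLikely + pessimistic) / 6;
        }

        static void Main()
        {
            // Values from the tablet-training example, expressed in hours
            // (the pessimistic 6 days are treated here as 144 hours).
            double finalEstimate = Estimate(4, 6, 144);
            Console.WriteLine($"Final estimate: {finalEstimate:F2} hours"); // ≈ 28.67
        }
    }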


Benefits of Three-Point Estimating

  1. Risk Mitigation: Helps identify and plan for potential delays and disruptions.
  2. Realistic Planning: Balances optimism with practical constraints for more accurate scheduling.
  3. Transparency: Provides stakeholders with a clear understanding of project timelines.
  4. Flexibility: Accounts for uncertainties without excessively padding the schedule.

Tips for Effective Three-Point Estimating

  • Encourage Collaboration: Involve team members with relevant expertise.
  • Review Historical Data: Use past projects to inform your estimates.
  • Document Assumptions: Clearly outline conditions for each estimate.
  • Revisit Estimates: Update predictions as new information becomes available.

Conclusion

The three-point estimating technique is an effective tool for project managers who want to balance optimism with realism. By breaking tasks down into different scenarios and consulting with experts, you can create a project timeline that is both well-informed and adaptable. 

 

The next time you plan a project, consider using three-point estimating to gain a clearer understanding of what to expect. Let this technique help you meet deadlines and achieve success with confidence.



Backlog Refinement: Mastering Agile Effort Estimation for Scrum Success

Introduction

In Agile development, having a well-prioritized Product Backlog is crucial for smooth sprint execution. Backlog refinement and effort estimation allow teams to assess the effort needed to complete tasks, set realistic goals, and consistently deliver value. This blog will explore various Agile estimation techniques, accompanied by real-world examples, to help you apply these strategies effectively in your projects.

Objectives

By the end of this blog, you will:

  1. Understand the importance of backlog refinement in Agile.
  2. Learn about different effort estimation techniques such as T-shirt sizes, story points, Planning Poker™, and more.
  3. Explore real-world examples that demonstrate how these methods work in practice.
  4. Discover how effort estimation promotes team inclusivity, effort discovery, and better sprint planning.

Techniques for Agile Effort Estimation with Real-World Examples

1. T-Shirt Sizing

What It Is: This method categorizes tasks as Small (S), Medium (M), Large (L), or Extra Large (XL) based on effort and complexity.

Real-World Example:
In a project to develop a corporate website:

  • Small Task: Writing a blog post draft.
  • Medium Task: Designing a single-page template (e.g., a Contact Us page).
  • Large Task: Building the website’s navigation system.
  • Extra Large Task: Developing a custom CMS for managing the website.
    T-shirt sizing helps the team quickly group tasks and focus on detailed planning only for high-priority items.

2. Story Points

What It Is: Story points assign a numerical value to tasks based on effort, complexity, and risk, often using the Fibonacci sequence (1, 2, 3, 5, 8, 13…).

Real-World Example:
A mobile app development team:

  • 3 Points: Adding a “Forgot Password” feature – straightforward with minimal edge cases.
  • 5 Points: Implementing user notifications – medium complexity as it requires integration with a push notification service.
  • 13 Points: Enabling multi-language support – high complexity due to localization challenges and testing requirements.
    Story points allow the team to compare tasks and allocate effort accordingly without estimating exact hours.

3. Planning Poker™

What It Is: A team-based estimation activity where each member uses Fibonacci cards to estimate a task’s effort.

Real-World Example:
An e-commerce team planning a new feature for product reviews:

  1. The Product Owner describes the task: “Allow users to post and edit product reviews.”
  2. Team members privately assign effort (e.g., 3, 5, or 8 points) based on perceived complexity.
  3. Cards are revealed simultaneously.
  4. A developer who estimated 8 points explains that integration with a moderation API is challenging.
  5. After discussion, the team adjusts their estimates and agrees on 5 points.
    This process ensures balanced input from all team members and addresses hidden complexities.

4. Dot Voting

What It Is: Team members assign dots to backlog items based on effort or priority, providing a quick visual consensus.

Real-World Example:
In a backlog grooming session for a fintech app:

  • The team is presented with five features:
    1. Implementing a credit score calculator.
    2. Adding fingerprint authentication.
    3. Building a transaction history export feature.
    4. Enhancing the app’s dashboard UI.
    5. Developing a referral rewards program.
  • Each team member gets 5 dots to allocate.
  • Most dots go to the credit score calculator and authentication feature due to their high impact.
    Dot voting helps the team decide where to invest effort and resources first.

5. The Bucket System

What It Is: Tasks are placed into buckets that represent effort levels, ranging from low to high.

Real-World Example:
In a data migration project for an HRMS system:

  • Low Bucket (1-3 hours): Exporting existing employee records.
  • Medium Bucket (3-5 hours): Writing scripts to clean up legacy data.
  • High Bucket (5-8 hours): Designing and validating new data models.
  • Very High Bucket (8+ hours): Developing a full data migration pipeline.
    Using the bucket system, the team quickly estimates the relative effort for each task without getting bogged down in details.

6. Affinity Mapping

What It Is: This method groups backlog items by similarity, effort, or impact, making it easier to identify patterns.

Real-World Example:
In a social media platform development project:

  • Backlog items include:
    • Adding GIF support to posts.
    • Developing a “Save Post” feature.
    • Implementing live streaming capabilities.
    • Creating a trending topics algorithm.
  • The team groups items as follows:
    • Low Effort/High Impact: Save Post feature.
    • High Effort/High Impact: Live streaming and trending topics.
    • Low Effort/Low Impact: Adding GIF support.
      Affinity mapping helps prioritize impactful tasks while identifying potential quick wins.

Comparison of Agile Estimation Techniques

T-Shirt Sizing
  • When to use: When you need a quick, high-level estimation for backlog items.
  • Best situations: Early-stage planning, or when detailed information is not yet available.
  • Advantages: Simple and fast; great for initial prioritization; easy to explain to non-technical stakeholders.
  • Disadvantages: Less precise; may oversimplify complex tasks.

Story Points
  • When to use: When you need to compare effort across tasks and balance workload.
  • Best situations: Sprint planning, and when estimating tasks with varying complexities.
  • Advantages: Captures complexity, effort, and risk; encourages team collaboration; removes the bias of time-based estimates.
  • Disadvantages: Needs calibration over time; new teams may struggle with consistency.

Planning Poker™
  • When to use: When the team needs to discuss and estimate task effort collaboratively.
  • Best situations: Medium to large tasks requiring team input, or when addressing potential complexities or risks.
  • Advantages: Promotes team consensus; identifies hidden risks early; fun and engaging for team members.
  • Disadvantages: Time-consuming for large backlogs; requires active participation from all team members.

Dot Voting
  • When to use: When prioritization is required based on effort, priority, or impact.
  • Best situations: Grooming sessions with many backlog items, or when consensus is needed quickly.
  • Advantages: Quick and visual; helps focus on high-priority items; inclusive of all team perspectives.
  • Disadvantages: Not suitable for complex tasks; relies on subjective judgment rather than effort estimates.

Bucket System
  • When to use: When tasks can be grouped into effort-based ranges for quicker categorization.
  • Best situations: Managing large backlogs, especially in mid-project phases with clear effort groupings.
  • Advantages: Reduces estimation time for large backlogs; flexible grouping of tasks; encourages team input.
  • Disadvantages: May oversimplify granular tasks; requires initial team alignment on effort ranges.

Affinity Mapping
  • When to use: When backlog items can be categorized by similarity, effort, or impact to facilitate easier prioritization.
  • Best situations: Refining large backlogs and identifying patterns among tasks.
  • Advantages: Excellent for finding patterns; supports prioritization of high-impact tasks; visual and intuitive.
  • Disadvantages: Time-intensive; can be subjective if criteria aren't clear.

How to Choose the Right Technique

  1. For Early-Stage Planning:
    Use T-Shirt Sizing or Dot Voting to prioritize tasks and focus on high-impact items quickly.

  2. For Sprint Planning:
    Use Story Points or Planning Poker™ to estimate tasks with varying complexities and risks accurately.

  3. For Large Backlogs:
    Use Bucket System or Affinity Mapping to group tasks and identify patterns quickly.

  4. When Collaboration is Key:
    Use Planning Poker™ to involve all team members and reach a consensus.

  5. When Time is Limited:
    Use Dot Voting for quick prioritization of effort or impact.

By understanding the strengths and ideal circumstances for each method, Agile teams can select the most suitable estimation technique to improve both efficiency and accuracy.

Conclusion

Agile estimation techniques like T-shirt sizing, story points, and Planning Poker™ are invaluable tools for managing a Product Backlog. Each method offers unique benefits and can be tailored to your team’s needs. By integrating these techniques and applying them consistently, teams can improve predictability, foster collaboration, and deliver value effectively.

Refine your backlog using these methods, and let your Agile team thrive!



From Stories to Epics: A Guide to Building a Customer-Centric Product Backlog

Introduction

In Agile development, creating a Product Backlog isn’t just about listing tasks; it’s about crafting a roadmap that aligns with the user’s needs and the Product Owner’s vision. This roadmap is made up of user stories and epics—two essential tools that ensure every feature delivers value and keeps the user at the heart of the process.

This blog explores the importance of user stories and epics, how to write them effectively, and their role in creating a seamless user experience. Whether you’re a seasoned Scrum practitioner or new to Agile, understanding these concepts is key to building a high-performing Backlog.

Objective

By the end of this blog, you’ll understand:

  • What user stories and epics are, and how they differ.
  • The essential elements of a user story, including personas and the I.N.V.E.S.T. framework.
  • How epics organize related user stories for better Backlog management.
  • The role of acceptance criteria in defining done for user stories.

What Are User Stories?

User stories are brief, user-centred descriptions of a feature or requirement. They emphasize the user’s perspective, ensuring the team keeps the user’s goals and experiences at the forefront. A typical user story follows this format:
As a <user role>, I want this <action> so that I can get this <value>.

For example:
As an avid reader, I want to read reviews before checking out a book to know I’ll enjoy my selection.

Elements of a User Story

When writing user stories, consider the following components:

  1. User Persona: Define your user and their relationship to the product.
  2. Definition of Done: Outline what must be completed for the story to be considered finished.
  3. Tasks: Identify key activities required to implement the story.
  4. Feedback: Incorporate past feedback to refine features.

The I.N.V.E.S.T. Framework

Effective user stories adhere to the I.N.V.E.S.T. criteria:

  • Independent: Can be completed without relying on other stories.
  • Negotiable: Flexible enough to discuss and refine.
  • Valuable: Provides clear value to the user or business.
  • Estimable: Easily broken into tasks and estimated.
  • Small: Fits within a single Sprint.
  • Testable: Meets predefined acceptance criteria.

What Are Epics?

An epic is a collection of related user stories representing a large body of work. Think of user stories as individual chapters, while an epic is the entire book. For instance:

  • Epic: Website Creation
    • User Story 1: Customers can read book reviews online.
    • User Story 2: Customers can add books to their cart for borrowing.

Epics structure the Backlog, allowing teams to manage high-level ideas without diving into excessive detail upfront.

Writing Epics and Stories

Let’s say you’re creating a website for a library. Your epic might be “Website Creation.” Under this epic, individual user stories could include:

  1. As a user, I want to read reviews before borrowing books to choose what I like.
  2. As a user, I want to see recommendations based on my reading history to discover new books.

For the physical library space, another epic like “Organization of Physical Space” might include:

  1. As a visitor, I want clear signage to find the non-fiction section easily.

Acceptance Criteria for User Stories

Every user story must meet its acceptance criteria to be considered complete. For example, for a library website:

  • Users can browse reviews of at least 10 books.
  • Users can filter books by genre or rating.
  • Reviews include a verified purchase badge for authenticity.

Conclusion

User stories and epics are essential tools for creating a customer-centric Product Backlog. Focusing on user needs ensures that every feature delivers value and aligns with the product vision. The structured approach provided by the I.N.V.E.S.T. framework and the organization offered by epics enables teams to prioritize, collaborate, and execute effectively.

Whether writing a single user story or planning an epic, remember that every detail you define today helps your team build better products tomorrow. With these principles in mind, you’re ready to create Backlogs that guide development and delight your users.



Git Remote Management: Add, Rename, Change, and Remove Like a Pro

Introduction

Git is an essential version control system for developers. One of its most powerful features is its ability to work with remote repositories, allowing teams to collaborate seamlessly across geographies. Remote repositories, typically hosted on platforms like GitHub, provide a centralized location to push, pull, and share code.

In this article, we will dive into remote repository management in Git. Through practical examples, you’ll learn how to add, rename, change, and remove remote repositories. By the end, you’ll have the knowledge and confidence to manage remotes like a pro.

Objective

The goal of this blog post is to guide beginner developers and software engineers through the process of managing remote repositories in Git. Specifically, you’ll learn to:

  • Add a new remote repository to your local Git project.
  • Rename existing remotes for better organization.
  • Change the URL of a remote to update connection details.
  • Remove remotes that are no longer in use.

Whether you’re working on a personal project or contributing to a team on GitHub, understanding these Git commands will significantly improve your workflow.


1. Adding a Remote Repository

A remote repository is a version of your project hosted on an external server, such as GitHub. You need to link your local repository to this remote in order to synchronize changes.

Command: git remote add

To add a new remote to your Git repository, use the following syntax:

git remote add <remote_name> <remote_url>
Example:

Let’s say you want to add a new remote named origin for your GitHub repository:
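
For illustration, assuming a placeholder GitHub username and repository name, the command would look like this:

git remote add origin https://github.com/yourusername/your-repo.git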

To verify that the remote has been added successfully, use:

git remote -v

Output:
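
Assuming the placeholder URL above, it should look similar to:

origin  https://github.com/yourusername/your-repo.git (fetch)
origin  https://github.com/yourusername/your-repo.git (push)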

Troubleshooting: “Remote origin already exists”

If you encounter the error:

fatal: remote origin already exists.

It means that a remote with the name origin has already been added. To resolve this:

  • Rename the existing remote (explained in the next section), or
  • Use a different remote name.

2. Renaming a Remote Repository

You might want to rename a remote for better clarity or organization, especially when you work with multiple remotes.

Command: git remote rename

To rename an existing remote, use:

git remote rename <old_name> <new_name>
  • <old_name>: The current name of the remote (e.g., origin).
  • <new_name>: The new name for the remote (e.g., upstream).
Example:

Let’s rename a remote called origin to upstream:

git remote rename origin upstream

Verify the change using:

git remote -v

Output:
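
Assuming the placeholder URL from earlier, it should now list the remote under its new name:

upstream  https://github.com/yourusername/your-repo.git (fetch)
upstream  https://github.com/yourusername/your-repo.git (push)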

Troubleshooting: “Remote [old_name] does not exist”

If the old remote name is incorrect or does not exist, you’ll get this error:

fatal: Could not rename config section 'remote.[old_name]' to 'remote.[new_name]'

Ensure the correct remote name by listing existing remotes:

git remote -v

3. Changing a Remote Repository’s URL

There are times when you need to change the URL of a remote, such as switching from HTTPS to SSH for authentication or moving the repository to a new location.

Command: git remote set-url

To update a remote URL, use:

git remote set-url <remote_name> <new_url>
  • <remote_name>: The name of the remote (e.g., origin).
  • <new_url>: The new URL for the remote repository.
Example:

Let’s update the origin remote to switch from HTTPS to SSH:

git remote set-url origin git@github.com:yourusername/your-repo.git

Verify the change:

git remote -v

Output:

origin  git@github.com:yourusername/your-repo.git (fetch)
origin  git@github.com:yourusername/your-repo.git (push)
Troubleshooting: “No such remote ‘[name]’”

If the specified remote does not exist, you’ll encounter:

fatal: No such remote '[name]'

Double-check the name of the remote with:

git remote -v

4. Removing a Remote Repository

You may need to remove a remote when it’s no longer relevant or the repository has moved elsewhere.

Command: git remote rm

To remove a remote, use:

git remote rm <remote_name>
  • <remote_name>: The name of the remote you want to remove.
Example:

Let’s remove a remote named upstream:

git remote rm upstream

Verify that it has been removed:

git remote -v

Output: if upstream was the only remote configured, git remote -v prints nothing; otherwise, only the remaining remotes (such as origin) are listed.

Troubleshooting: “Could not remove config section ‘remote.[name]’”

This error means the remote you tried to remove does not exist:

error: Could not remove config section 'remote.[name]'

Double-check the remote’s existence by listing all remotes:

git remote -v

Conclusion

Mastering remote repository management in Git is a critical skill for any developer. Learning how to add, rename, change, and remove remotes ensures that your workflow stays organized, flexible, and efficient. Whether you’re working solo or collaborating with a team, these commands will help you easily handle repository remotes.

With this knowledge, you can push, pull, and clone repositories like a pro!

Real-Time Speech Translation with Azure: A Quick Guide

Introduction

The Azure Speech Translation service enables real-time, multi-language speech-to-speech and speech-to-text translation of audio streams. In this article, you will learn how to run an application to translate speech from one language to text in another language using Azure’s powerful tools.

Objective

By the end of this article, you will be able to create and deploy an application that translates speech from one language to text in another language.

Step 1: Creating a New Azure Cognitive Services Resource Using Azure Portal

Task 1: Create Azure Cognitive Speech Service Resource

  1. Open a tab in your browser and go to the Speech Services page. If prompted, sign in with your Azure credentials.

  2. On the Create page, provide the following information and click on Review + create:

    • Subscription: Select your subscription (this will be selected by default).
    • Resource group: Create a new group named azcogntv-rg1. Click on OK.
    • Region: East US
    • Name: CognitiveSpeechServicesResource
    • Pricing tier: Free F0

    Create Speech Service

  3. Once the validation passes, click on Create.

    Validation Passed

  4. Wait for the deployment to complete, then click on Go to resource.

    Deployment Complete

  5. Click on Keys and Endpoint from the left navigation menu. Copy and save Key 1 and Endpoint values in a notepad for later use.

    Keys and Endpoint

Task 2: Create Azure Cognitive Language Service Resource

  1. Open a new browser tab and go to the Language Services page. Sign in with your Azure credentials.

  2. Without selecting any option on the page, click on Continue to create your resource.

    Continue to Create Resource

  3. Update with the following details and then click on Review + Create:

    • Subscription: Your Azure subscription
    • Resource Group: Select azcogntv-rg1
    • Region: East US
    • Name: CognitivelanguageResourceXX (Replace XX with any random number)
    • Pricing tier: Free (F0)
    • Select checkbox: By checking this box, I certify that I have reviewed and acknowledged the Responsible AI Notice terms.

    Create Language Service

  4. Review the resource details and then click on Create.

    Review and Create

  5. Wait for the deployment to complete, and once successful, click on Go to resource group.

    Deployment Successful

  6. Click on CognitiveLanguageResource.

    Cognitive Language Resource

  7. Click on Keys and Endpoints > Show keys. Copy Key 1 and endpoint values and save them in a notepad for later use.

    Keys and Endpoints

Step 2: Recognizing and Translating Speech to Text

Task 1: Set Environment Variables

Your application must be authenticated to access Cognitive Services resources. Use environment variables to store your credentials securely.

  1. Open Command Prompt and run mkdir Speech-to-Text to create a directory. Then run cd Speech-to-Text to navigate into it.

    mkdir Speech-to-Text
    cd Speech-to-Text
    
  2. To set the SPEECH_KEY and SPEECH_REGION environment variables, replace your-key with one of the keys for your resource that you saved earlier:

    setx SPEECH_KEY your-key
    setx SPEECH_REGION eastus
    

    Set Environment Variables

  3. After adding the environment variables, restart any running programs that need to read the environment variable, including the console window. Close the Command Prompt and open it again.
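
Note: setx is a Windows command and persists the variables for future sessions. If you are following these steps in a bash shell instead (for example, on Linux or macOS), a rough, non-persistent equivalent, assuming the same key and region values, would be:

    export SPEECH_KEY=your-key
    export SPEECH_REGION=eastus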

Task 2: Translate Speech from a Microphone

  1. Open Command Prompt, navigate to your directory (cd Speech-to-Text), and create a console application with the .NET CLI.

    dotnet new console
    

    Create Console App

  2. Install the Speech SDK in your new project with the .NET CLI.

    dotnet add package Microsoft.CognitiveServices.Speech
    

    Install Speech SDK

  3. Open the Program.cs file in Notepad from the Speech-to-Text project folder. Replace the contents of Program.cs with the following code:

    using System;
    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.CognitiveServices.Speech;
    using Microsoft.CognitiveServices.Speech.Audio;
    using Microsoft.CognitiveServices.Speech.Translation;
    
    class Program
    {
        // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
        static string speechKey = Environment.GetEnvironmentVariable("SPEECH_KEY");
        static string speechRegion = Environment.GetEnvironmentVariable("SPEECH_REGION");
    
        static void OutputSpeechRecognitionResult(TranslationRecognitionResult translationRecognitionResult)
        {
            switch (translationRecognitionResult.Reason)
            {
                case ResultReason.TranslatedSpeech:
                    Console.WriteLine($"RECOGNIZED: Text={translationRecognitionResult.Text}");
                    foreach (var element in translationRecognitionResult.Translations)
                    {
                        Console.WriteLine($"TRANSLATED into '{element.Key}': {element.Value}");
                    }
                    break;
                case ResultReason.NoMatch:
                    Console.WriteLine($"NOMATCH: Speech could not be recognized.");
                    break;
                case ResultReason.Canceled:
                    var cancellation = CancellationDetails.FromResult(translationRecognitionResult);
                    Console.WriteLine($"CANCELED: Reason={cancellation.Reason}");
    
                    if (cancellation.Reason == CancellationReason.Error)
                    {
                        Console.WriteLine($"CANCELED: ErrorCode={cancellation.ErrorCode}");
                        Console.WriteLine($"CANCELED: ErrorDetails={cancellation.ErrorDetails}");
                        Console.WriteLine($"CANCELED: Did you set the speech resource key and region values?");
                    }
                    break;
            }
        }
    
        async static Task Main(string[] args)
        {
            var speechTranslationConfig = SpeechTranslationConfig.FromSubscription(speechKey, speechRegion);
            speechTranslationConfig.SpeechRecognitionLanguage = "en-US";
            speechTranslationConfig.AddTargetLanguage("it");
    
            using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
            using var translationRecognizer = new TranslationRecognizer(speechTranslationConfig, audioConfig);
    
            Console.WriteLine("Speak into your microphone.");
            var translationRecognitionResult = await translationRecognizer.RecognizeOnceAsync();
            OutputSpeechRecognitionResult(translationRecognitionResult);
        }
    }
    
  4. Run your new console application to start speech recognition from a microphone:

    dotnet run
    
  5. Speak into your microphone when prompted. What you speak should be output as translated text in the target language:

    Speak this: The Speech service provides speech-to-text and text-to-speech capabilities with an Azure Speech resource and then press Enter.
    

    Speak Into Microphone

Conclusion

In this article, you translated speech from a microphone to a different language by updating the code in the Program.cs file. This powerful feature of Azure Cognitive Services allows for seamless and real-time translation, making it a valuable tool for various applications and industries.

Source Code

Quick Start: Create and Deploy C# Functions in Azure Using CLI

Introduction

This blog post will guide you through creating and deploying a C# function to Azure using command-line tools. You will create an HTTP-triggered function that runs on .NET 8 in an isolated worker process. By the end of this post, you will have a functional Azure Function that responds to HTTP requests.

Objective

In this article, you will:

  1. Install the Azure Functions Core Tools.
  2. Create a C# function that responds to HTTP requests.
  3. Test the function locally.
  4. Deploy the function to Azure.
  5. Access the function in Azure.

Prerequisites

Ensure you have the following installed:

  • Azure CLI (version 2.4 or later)
  • .NET SDK (version 6.0 and 8.0)
  • Azure Functions Core Tools (version 4.x)

Step 0: Install Azure Functions Core Tools

  1. Uninstall previous versions (if any):
    • Open the Settings from the Start menu.
    • Select Apps.
    • Click on Installed apps.
    • Find Azure Function Core Tools, click the three dots next to it, and select Uninstall.
  2. Install the latest version:
    • Navigate to the Azure Functions Core Tools downloads page: Install the Azure Functions Core Tools.
    • Download the appropriate version of Azure Functions Core Tools for your operating system (the 64-bit version is recommended; Visual Studio Code debugging requires it).
    • Follow the prompts: Click Next, accept the agreement, and click Install.

    • Click Finish once the installation is complete.

Step 1: Prerequisite Check

  1. Open Command Prompt and execute the following commands to verify your setup:
    • func --version – checks that the Azure Functions Core Tools are version 4.x.
    • dotnet --list-sdks – checks that the required .NET SDKs are installed; the list should include 6.0 and 8.0.
    • az --version – checks that the Azure CLI version is 2.4 or later.

  2. Run az login to sign in to Azure and verify an active subscription:
    • A browser window will open; select your Azure account to sign in.
    • The command prompt will then display your Azure login details.

Step 2: Create a Local Function Project

  1. Initialize the function project: Run the func init command, as follows, to create a functions project in a folder named LocalFunctionProj with the specified runtime:
    func init LocalFunctionProj --worker-runtime dotnet-isolated --target-framework net8.0
    cd LocalFunctionProj
    

    This folder contains various files for the project, including configuration files named local.settings.json and host.json. Because local.settings.json can contain secrets downloaded from Azure, the .gitignore file excludes it from source control by default.

  2. Add a new HTTP-triggered function:
    func new --name HttpExample --template "HTTP trigger" --authlevel "anonymous"
    

  3. Examine the generated code:
     using Microsoft.Azure.Functions.Worker;
     using Microsoft.Extensions.Logging;
     using Microsoft.AspNetCore.Http;
     using Microsoft.AspNetCore.Mvc;
        
     namespace LocalFunctionProj
     {
         public class HttpExample
         {
             private readonly ILogger<HttpExample> _logger;
        
             public HttpExample(ILogger<HttpExample> logger)
             {
                 _logger = logger;
             }
        
             [Function("HttpExample")]
             public IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req)
             {
                 _logger.LogInformation("C# HTTP trigger function processed a request.");
                 return new OkObjectResult("Welcome to Azure Functions!");
             }
         }
     }
    

Step 3: Run the Function Locally

  1. Start the local Azure Functions runtime host: Run your function by starting the local Azure Functions runtime host from the LocalFunctionProj folder:
    func start
    

    The output says that the worker process started and initialized. The function’s URL is also displayed.

  2. Test the function:
    • Copy the URL from the output and paste it into a browser.
    • You should see a message: “Welcome to Azure Functions!”

  3. Stop the function host:
    • Press Ctrl+C and confirm with y.
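
Note: as an alternative to the browser test in step 2, you can call the endpoint from a second terminal while the host from step 1 is still running. This is a sketch that assumes the Functions host is listening on its default local port and route for the HttpExample function:

    curl http://localhost:7071/api/HttpExample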

Step 4: Create Supporting Azure Resources

Before you can deploy your function code to Azure, you need to create three resources:

  • A resource group, which is a logical container for related resources.

  • A Storage account, which is used to maintain state and other information about your functions.

  • A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment, and sharing of resources.

You can use the following commands to create these items. Both Azure CLI and PowerShell are supported.

  1. Sign in to Azure:

    If you haven’t done so, sign in to Azure: The az login command signs you into your Azure account.

     az login
    
  2. Create a resource group:

    Create a resource group named RGForFunctionApp in your chosen region:

    az group create --name RGForFunctionApp --location eastus
    

    The az group create command creates a resource group and displays its details in the command shell, with a provisioning state of Succeeded.

  3. Create a storage account:

    Create a general-purpose storage account in your resource group and region.

    az storage account create --name storaccforazfunc07 --location eastus --resource-group RGForFunctionApp --sku Standard_LRS --allow-blob-public-access false
    

    The az storage account create command creates a storage account named storaccforazfunc07 in the East US region. The account’s details appear in the command prompt, with a provisioning state of Succeeded.

     

  4. Create the function app:

    Create the function app in Azure by executing the following command:

    az functionapp create --resource-group RGForFunctionApp --consumption-plan-location eastus --runtime dotnet-isolated --functions-version 4 --name appforfunc07 --storage-account storaccforazfunc07
    

    The az functionapp create command creates the function app in Azure.

    • storaccforazfunc07 is the storage account that we created in the previous step.

    • appforfunc07 is the name of the app that we create here. It needs to be globally unique.

Step 5: Deploy the Function Project to Azure

After successfully creating your function app in Azure, you’re ready to deploy your local functions project using the func azure functionapp publish command.

  1. Deploy the function:
    func azure functionapp publish appforfunc07
    

    After deployment, a URL will be provided. This is the Invoke URL for your function.
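
    The Invoke URL typically follows the pattern https://<function-app-name>.azurewebsites.net/api/<FunctionName>; with the hypothetical names used in this walkthrough, it would look similar to https://appforfunc07.azurewebsites.net/api/HttpExample.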

Step 6: Invoke the Function on Azure

  1. Invoke the function using a browser:
    • Copy the Invoke URL and paste it into a browser.
    • You should see the same “Welcome to Azure Functions!” message.

  2. View real-time logs:

    Start a log stream for your function app by running:

    func azure functionapp logstream appforfunc07
    
    • Open another terminal or browser window and call the function URL again; the terminal shows a verbose log of the function’s execution in Azure.
    • Press Ctrl+C to end the logstream session.

Step 7: Clean Up Resources

  1. Delete the resource group:

    Execute the following command to delete the resource group and all its contained resources. Confirm the operation when prompted by typing y and pressing Enter.

    az group delete --name RGForFunctionApp
    
Conclusion

Congratulations! Using command-line tools, you’ve successfully created, tested, and deployed a C# function to Azure. This step-by-step guide has walked you through installing necessary tools, setting up a local development environment, creating and running a function locally, deploying it to Azure, and finally cleaning up the resources. Azure Functions provides a robust, serverless compute environment to build and quickly deploy scalable, event-driven applications. Happy coding!

 

Unleashing the Power of Azure AI: A Comprehensive Guide to Text-to-Speech Applications

Introduction:

In the dynamic landscape of application development, efficiency and cost-effectiveness are paramount. For developers working on projects that involve video narration, traditional methods of hiring vocal talent and managing studio resources can be both cumbersome and expensive. Enter Microsoft’s Azure AI services, offering a suite of APIs that empower developers to integrate cutting-edge text-to-speech capabilities into their applications. In this comprehensive guide, we’ll delve into the intricacies of Azure AI, providing step-by-step instructions and code snippets to help you harness its full potential for creating text-to-speech applications.

Objective:

  • Establish an Azure AI services account
  • Develop a command-line application for text-to-speech conversion using plain text
  • Provide detailed insights and code snippets for each stage of the process

Creating a Text-to-Speech Application using a Text File

Step 1: Creating an Azure AI Services Account

  1. Begin by navigating to the Azure portal and signing in with your credentials.
  2. Once logged in, locate the Azure AI services section and proceed to create a new account.

  3. In the Create Azure AI window, under the Basics tab, enter the following details and click on the Review+create button.

  4. In the Review+submit tab, once the Validation is Passed, click on the Create button.

  5. Wait for the deployment to complete. The deployment will take around 2-3 minutes.
  6. After the deployment is completed, click on the Go to resource button.

  7. In your AzureAI-text-speechXX window, navigate to the Resource Management section and click on Keys and Endpoints.

  8. Configure the account settings according to your requirements. On the Keys and Endpoints page, copy the KEY1, Region, and Endpoint values and paste them into a notepad, then save the notepad for later use.

Step 2: Create your text-to-speech application

  1. In the Azure portal, click on the [>_] (Cloud Shell) button at the top of the page to the right of the search box. A Cloud Shell pane will open at the bottom of the portal. The first time you open the Cloud Shell, you may be prompted to choose the type of shell you want to use (Bash or PowerShell). Select Bash. If you don’t see this option, then you can go ahead and skip this step.


  2. In the You have no storage mounted dialog box, click on Create storage.


  3. Ensure the type of shell indicated on the top left of the Cloud Shell pane is switched to Bash. If it’s PowerShell, switch to Bash by using the drop-down menu.


  4. In the Cloud Shell on the right, create a directory for your application, then switch to your new folder by entering the following commands:

    mkdir text-to-speech
    cd text-to-speech
    


  5. Enter the following command to create a new .NET Core application. This command should take a few seconds to complete.

    dotnet new console


  6. When your .NET Core application has been created, add the Speech SDK package to your application. This command should take a few seconds to complete.

    dotnet add package Microsoft.CognitiveServices.Speech


Step 3: Add the code for your text-to-speech application

  1. In the Cloud Shell on the right, open the Program.cs file using the following command.

        code Program.cs
    
  2. Replace the existing code with the following using statements, which enable the Azure AI Speech APIs for your application:

    using System.Text;
    using Microsoft.CognitiveServices.Speech;
    using Microsoft.CognitiveServices.Speech.Audio;
    


  3. Below the using statements, add the following code, which uses the Azure AI Speech APIs to convert the contents of the text file you’ll create into a WAV file with a synthesized voice. Replace the azureKey and azureLocation values with the ones you copied in Step 1.

    string azureKey = "ENTER YOUR KEY FROM THE FIRST EXERCISE";
    string azureLocation = "ENTER YOUR LOCATION FROM THE FIRST EXERCISE";
    string textFile = "Shakespeare.txt";
    string waveFile = "Shakespeare.wav";
        
    try
    {
        FileInfo fileInfo = new FileInfo(textFile);
        if (fileInfo.Exists)
        {
            string textContent = File.ReadAllText(fileInfo.FullName);
            var speechConfig = SpeechConfig.FromSubscription(azureKey, azureLocation);
            using var speechSynthesizer = new SpeechSynthesizer(speechConfig, null);
            var speechResult = await speechSynthesizer.SpeakTextAsync(textContent);
            using var audioDataStream = AudioDataStream.FromResult(speechResult);
            await audioDataStream.SaveToWaveFileAsync(waveFile);       
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        
    }
    


  4. This code uses your key and location to initialize a connection to Azure AI services. It then reads the contents of the text file you’ll create, uses the SpeakTextAsync() method of the speech synthesizer to convert the text to audio, and saves the result to a WAV file via an audio data stream.
  5. To save your changes, press Ctrl+S, and then press Ctrl+Q to exit the editor.

Step 4: Create a text file for your application to read

  1. In the Cloud Shell on the right, create a new text file that your application will read:

    code Shakespeare.txt

  2. When the code editor appears, enter the following text.

    The following quotes are from act 2, scene 7, of William Shakespeare's play "As You Like It."
        
    Thou seest we are not all alone unhappy:
    This wide and universal theatre
    Presents more woeful pageants than the scene
    Wherein we play in.
        
    All the world's a stage,
    And all the men and women merely players:
    They have their exits and their entrances;
    And one man in his time plays many parts,
    His acts being seven ages.
    


  3. To save your changes, press Ctrl+S, and then press Ctrl+Q to exit the editor.

Step 5: Run your application

  1. To run your application, use the following command in the Cloud Shell on the right:

    dotnet run

  2. If you don’t see any errors, your application has run successfully. To verify, run the following command to list the files in the directory.

    ls -l

  3. You should see the Shakespeare.wav file in the list of files.


Step 6: Listen to WAV file

To listen to the WAV file that your application created, you’ll first need to download it. To do so, use the following steps.

  1. In the Cloud Shell on the right, use the following command to copy the WAV file to your temporary cloud drive:

    cp Shakespeare.wav ~/clouddrive


  2. In the Azure portal search box, type Storage account, then click on Storage account under Services.


  3. On the Storage accounts page, click on your cloud storage account.


  4. In the storage account’s left-side navigation menu, go to the Data storage section, then click on File shares.


  5. Then select your cloudshellfilesXXX file share.


  6. When your cloudshellfilesXXX file share page is displayed, select Browse, select the Shakespeare.wav file, and then select the Download icon.


  7. Download the Shakespeare.wav file to your computer, where you can listen to it with your operating system’s audio player.



Conclusion:
Following the comprehensive instructions and the provided code snippets, you can seamlessly leverage Azure AI services to integrate text-to-speech capabilities into your applications. Azure AI empowers developers to enhance user experiences and streamline workflow processes. Embrace the power of Azure AI and unlock new possibilities for your projects.

Dive Deep: Unveiling DevExpress Splash Screen in Your Winforms App

Introduction:

In today's fast-paced digital world, user experience plays a pivotal role in the success of any application. One aspect that significantly contributes to a positive user experience is the loading screen or splash screen. A well-designed splash screen enhances the aesthetic appeal of your application and provides users with visual feedback during the loading process, reducing perceived wait times.

In this tutorial, we'll explore implementing a splash screen in a Winforms application using DevExpress, a powerful suite of UI controls and components. By the end of this tutorial, you'll have a sleek and professional-looking splash screen integrated into your Winforms application, enhancing its overall user experience.

Step 1: Setting Up Your Winforms Project

Before implementing the DevExpress splash screen, let's set up a basic Winforms project in Visual Studio.

  1. Open Visual Studio and create a new Winforms project.
  2. Name your project and choose a location to save it.
  3. Once the project is created, you'll see the default form in the designer view.

Step 2: Installing DevExpress

You must install the DevExpress NuGet package to use the DevExpress controls and components in your Winforms project.

  1. Right-click on your project in Solution Explorer.
  2. Select "Manage NuGet Packages" from the context menu.
  3. In the NuGet Package Manager, search for "DevExpress" and install the appropriate package for your project.

Step 3: Adding a Splash Screen Form

Now, let's create a new form for our splash screen.

  1. Right-click on your project in Solution Explorer.
  2. Select "Add DevExpress Item" from the context menu.
  3. Select "Splash Screen" from the DevExpress Template Gallery.
  4. Name the form "SplashScreenForm" and click "Add Item".
  5. Design your splash screen form using DevExpress controls to customize its appearance according to your preferences. You can add images, animations, and progress indicators to make it visually appealing.

Step 4: Configuring Application Startup

Next, we must configure our application to display the splash screen during startup.

  1. Open the Program.cs file in your project.

  2. Locate the Application.Run method, typically found within the Main method.

  3. Before calling Application.Run, create and display an instance of your splash screen form.

     static void Main()
     {
         Application.EnableVisualStyles();
         Application.SetCompatibleTextRenderingDefault(false);
     
         //Application.Run(new MainFormWithSplashScreenManager());
         var form = new MainForm();
         DevExpress.XtraSplashScreen.SplashScreenManager.ShowForm(form, typeof(SkinnedSplashScreen));
         //...
         //Authentication and other activities here
         Bootstrap.Initialize();                        
     
         DevExpress.XtraSplashScreen.SplashScreenManager.CloseForm();                      
         Application.Run(form);
     }
    
     internal class Bootstrap
     {
         internal static void Initialize()
         {
             // Add initialization logic here
             //Authentication and other activities here
             LoadResources();            
             
         }
         private static void LoadResources()
         {
             // Perform resource loading tasks
             // Example: Load configuration settings, connect to a database, etc.
    
             Thread.Sleep(1000);//For testing
         }
     }
     

Step 5: Adding Splash Screen Logic

Now that our splash screen is displayed during application startup, let's add some logic to control its behaviour.

  1. Open the SplashScreenForm.cs file.

  2. Add any initialization logic or tasks that must be performed while the splash screen is displayed. For example, you can load resources, perform database connections, or initialize application settings.

     public partial class SkinnedSplashScreen : SplashScreen
     {
         public SkinnedSplashScreen()
         {
             InitializeComponent();
             this.labelCopyright.Text = "Copyright © 1998-" + DateTime.Now.Year.ToString();
         }

         #region Overrides

         // Receives commands sent from the startup code via SplashScreenManager.
         public override void ProcessCommand(Enum cmd, object arg)
         {
             base.ProcessCommand(cmd, arg);
         }

         #endregion

         // Custom commands (e.g., status or progress updates) the splash screen understands.
         public enum SplashScreenCommand
         {
         }

         private void SkinnedSplashScreen_Load(object sender, EventArgs e)
         {
         }
     }
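
If you want the splash screen to report progress or status while Bootstrap.Initialize runs, the ProcessCommand override above is where the form reacts to commands sent from the startup code. The sketch below illustrates the idea; the SetStatusLabel command value and the labelStatus control are assumptions for this example, not part of the generated template, and it relies on SplashScreenManager.Default.SendCommand to deliver the command from the startup code.

     // Inside SkinnedSplashScreen (sketch): a custom command plus its handling.
     public enum SplashScreenCommand
     {
         SetStatusLabel   // assumed command used only for this illustration
     }

     public override void ProcessCommand(Enum cmd, object arg)
     {
         base.ProcessCommand(cmd, arg);
         if (cmd is SplashScreenCommand command && command == SplashScreenCommand.SetStatusLabel)
         {
             // labelStatus is an assumed LabelControl placed on the splash screen form.
             labelStatus.Text = arg?.ToString();
         }
     }

     // In Program.Main, between ShowForm and CloseForm:
     // DevExpress.XtraSplashScreen.SplashScreenManager.Default.SendCommand(
     //     SkinnedSplashScreen.SplashScreenCommand.SetStatusLabel, "Loading resources...");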
     

Step 6: Testing Your Application

With the splash screen implemented, it's time to test your Winforms application.

  1. Build your project to ensure there are no compilation errors.
  2. Run the application and observe the splash screen displayed during startup.
  3. Verify that the application functions correctly after the splash screen closes.

See the following topic for information on how to execute code when your application starts: How to: Perform Actions On Application Startup.

Conclusion:

In this tutorial, we've learned how to implement a splash screen in a Winforms application using DevExpress. By following these steps, you can enhance the user experience of your application by providing visual feedback during the loading process. You can customize the splash screen further to match the branding and style of your application and experiment with different animations and effects to create a memorable first impression for your users.

References
Splash Screen
Splash Screen Manager

Source Code

Leveraging Consul for Service Discovery in Microservices with .NET Core

Introduction:

In a microservices architecture, service discovery is pivotal in enabling seamless communication between services. Imagine having a multitude of microservices running across different ports and instances, and facing the challenge of locating and accessing them dynamically. This is where Consul comes into play.

Introduction to Consul:
Consul, a distributed service mesh solution, offers robust service discovery, health checking, and key-value storage features. In this tutorial, we’ll explore leveraging Consul for service discovery in a .NET Core environment. We’ll set up Consul, create a .NET Core API for service registration, and develop a console application to discover the API using Consul.

Step 1: Installing Consul:
Before integrating Consul into our .NET Core applications, we need to install Consul. Follow these steps to install Consul:

  1. Navigate to the Consul downloads page: Consul Downloads.
  2. Download the appropriate version of Consul for your operating system.
  3. Extract the downloaded archive to a location of your choice. 
  4. Add the Consul executable to your system’s PATH environment variable to run it from anywhere in the terminal or command prompt. 
  5. Open a terminal or command prompt and verify the Consul installation by running the command consul --version.
  6. Start a local Consul server in development mode by running the command consul agent -dev.
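
With the dev agent running, Consul's HTTP API and built-in web UI are available at http://localhost:8500 by default, which is handy for verifying service registrations in the steps that follow.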

Step 2: Setting Up the Catalog API:

Now, let’s create a .NET Core API project named ServiceDiscoveryTutorials.CatalogApi. This API will act as a service that needs to be discovered by other applications. Use the following command to create the project:

dotnet new webapi -n ServiceDiscoveryTutorials.CatalogApi

Next, configure the API to register with Consul upon startup. Add the Consul client package to the project:

dotnet add package Consul

The project generated by dotnet new webapi on .NET 6 or later uses the minimal hosting model, so there is no Startup.cs. In Program.cs, register the Consul client and the service-discovery helper with the dependency injection container:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

// Swagger services used by the development pipeline configured below.
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

// Register the Consul client, reading the agent address from configuration.
builder.Services.AddSingleton<IConsulClient>(p => new ConsulClient(consulConfig =>
{
    var consulHost = builder.Configuration["Consul:Host"];
    var consulPort = Convert.ToInt32(builder.Configuration["Consul:Port"]);
    consulConfig.Address = new Uri($"http://{consulHost}:{consulPort}");
}));

builder.Services.AddSingleton<IServiceDiscovery, ConsulServiceDiscovery>();
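
The registration above reads the agent address from configuration. A matching appsettings.json fragment, with values here assumed to point at the local dev agent, would look like this:

{
  "Consul": {
    "Host": "localhost",
    "Port": 8500
  }
}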

Create a class named ConsulServiceDiscovery that implements the IServiceDiscovery interface to handle service registration:

public interface IServiceDiscovery
{
    Task RegisterServiceAsync(string serviceName, string serviceId, string serviceAddress, int servicePort);
    Task RegisterServiceAsync(AgentServiceRegistration serviceRegistration);
    
    Task DeRegisterServiceAsync(string serviceId);
}

public class ConsulServiceDiscovery : IServiceDiscovery
{
    private readonly IConsulClient _consulClient;

    public ConsulServiceDiscovery(IConsulClient consulClient)
    {
        _consulClient = consulClient;
    }

    public async Task RegisterServiceAsync(string serviceName, string serviceId, string serviceAddress, int servicePort)
    {
        var registration = new AgentServiceRegistration
        {
            ID = serviceId,
            Name = serviceName,
            Address = serviceAddress,
            Port = servicePort
        };
        // Remove any stale registration with the same ID before registering this instance.
        await _consulClient.Agent.ServiceDeregister(serviceId);
        await _consulClient.Agent.ServiceRegister(registration);
    }

    public async Task DeRegisterServiceAsync(string serviceId)
    {
        await _consulClient.Agent.ServiceDeregister(serviceId);
    }

    public async Task RegisterServiceAsync(AgentServiceRegistration registration)
    {
        await _consulClient.Agent.ServiceDeregister(registration.ID);
        await _consulClient.Agent.ServiceRegister(registration);
    }
}

Still in Program.cs, build the application, configure the request pipeline, and hook the Consul registration into the application lifetime events:

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

//app.UseHttpsRedirection();

app.UseAuthorization();

app.MapControllers();

var discovery = app.Services.GetRequiredService<IServiceDiscovery>();
var lifetime = app.Services.GetRequiredService<IHostApplicationLifetime>();
var serviceName = "CatalogApi";
var serviceId = Guid.NewGuid().ToString();
var serviceAddress = "localhost";
var servicePort = 7269;

// Register with Consul once the application has started listening.
lifetime.ApplicationStarted.Register(async () =>
{
    var registration = new AgentServiceRegistration
    {
        ID = serviceId,
        Name = serviceName,
        Address = serviceAddress,
        Port = servicePort,
        // Health check that Consul polls; the API must expose the /Health endpoint (see below).
        Check = new AgentServiceCheck
        {
            HTTP = $"https://{serviceAddress}:{servicePort}/Health",
            Interval = TimeSpan.FromSeconds(10),
            Timeout = TimeSpan.FromSeconds(5)
        }
    };
    await discovery.RegisterServiceAsync(registration);
});

// Deregister when the application is shutting down.
lifetime.ApplicationStopping.Register(async () =>
{
    await discovery.DeRegisterServiceAsync(serviceId);
});

app.Run();

With these configurations, the Catalog API will register itself with Consul upon startup and deregister upon shutdown.
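
One detail worth calling out: the health check above points Consul at a /Health endpoint over HTTPS on port 7269, and the webapi template does not create that endpoint for you. A minimal sketch, assuming the minimal hosting model used above, is to map it in Program.cs before app.Run():

// Lightweight endpoint for Consul's HTTP health check.
app.MapGet("/Health", () => Results.Ok("Healthy"));

ASP.NET Core's health checks middleware (AddHealthChecks and MapHealthChecks) is an equally valid alternative if you need richer checks.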

Step 3: Creating the Client Application:

Next, create a console application named ServiceDiscoveryTutorials.ClientApp. Use the following command to create the project:

dotnet new console -n ServiceDiscoveryTutorials.ClientApp

Add the Consul client package to the project:

dotnet add package Consul

In the Program.cs file, configure the Consul client to discover services:

class Program
{
    static async Task Main(string[] args)
    {
        using (var client = new ConsulClient(consulConfig =>
        {
            consulConfig.Address = new Uri("http://localhost:8500");
        }))
        {
            var services = await client.Catalog.Service("CatalogApi");
            foreach (var service in services.Response)
            {
                Console.WriteLine($"Service ID: {service.ServiceID}, Address: {service.ServiceAddress}, Port: {service.ServicePort}");
            }
        }
        //var consulClient = new ConsulClient();
        //// Specify the service name to discover
        //string serviceName = "CatalogApi";
        //// Query Consul for healthy instances of the service
        //var services = (await consulClient.Health.Service(serviceName, tag: null, passingOnly: true)).Response;
        //// Iterate through the discovered services
        //foreach (var service in services)
        //{
        //    var serviceAddress = service.Service.Address;
        //    var servicePort = service.Service.Port;
        //    Console.WriteLine($"Found service at {serviceAddress}:{servicePort}");
        //    // You can now use the serviceAddress and servicePort to communicate with the discovered service.
        //}

    }
}

This code snippet retrieves all instances of the CatalogApi service registered with Consul.
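
Once an instance has been discovered, its address and port can be used to call the service directly. The sketch below would sit inside the using block above, after the foreach loop; the WeatherForecast route is simply the controller generated by the default webapi template and is assumed here for illustration:

// Call the first discovered instance over HTTPS (route assumed from the default template).
var first = services.Response.FirstOrDefault();
if (first != null)
{
    using var httpClient = new HttpClient();
    var baseUri = $"https://{first.ServiceAddress}:{first.ServicePort}";
    var payload = await httpClient.GetStringAsync($"{baseUri}/WeatherForecast");
    Console.WriteLine(payload);
}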

Step 4: Testing the API and Client Application:

Below is the project structure in Visual Studio.

Next, let's run both applications using the command dotnet run. When the API starts, the Consul portal will display the registered service.

Below are the final results of the application.

Conclusion:
In this tutorial, we’ve learned how to set up Consul for service discovery and register a .NET Core API with Consul. Additionally, we’ve developed a console application to discover services using Consul’s API. By leveraging Consul, you can enhance the scalability and reliability of your microservices architecture.

Source Code