From Stories to Epics: A Guide to Building a Customer-Centric Product Backlog

Introduction

In Agile development, creating a Product Backlog isn’t just about listing tasks; it’s about crafting a roadmap that aligns with the user’s needs and the Product Owner’s vision. This roadmap is made up of user stories and epics—two essential tools that ensure every feature delivers value and keeps the user at the heart of the process.

This blog explores the importance of user stories and epics, how to write them effectively, and their role in creating a seamless user experience. Whether you’re a seasoned Scrum practitioner or new to Agile, understanding these concepts is key to building a high-performing Backlog.

Objective

By the end of this blog, you’ll understand:

  • What user stories and epics are, and how they differ.
  • The essential elements of a user story, including personas and the I.N.V.E.S.T. framework.
  • How epics organize related user stories for better Backlog management.
  • The role of acceptance criteria in defining “done” for user stories.

What Are User Stories?

User stories are brief, user-centred descriptions of a feature or requirement. They emphasize the user’s perspective, ensuring the team keeps the user’s goals and experiences at the forefront. A typical user story follows this format:
As a <user role>, I want <action> so that <value>.

For example:
As an avid reader, I want to read reviews before checking out a book to know I’ll enjoy my selection.

Elements of a User Story

When writing user stories, consider the following components:

  1. User Persona: Define your user and their relationship to the product.
  2. Definition of Done: Outline what must be completed for the story to be considered finished.
  3. Tasks: Identify key activities required to implement the story.
  4. Feedback: Incorporate past feedback to refine features.

The I.N.V.E.S.T. Framework

Effective user stories adhere to the I.N.V.E.S.T. criteria:

  • Independent: Can be completed without relying on other stories.
  • Negotiable: Flexible enough to discuss and refine.
  • Valuable: Provides clear value to the user or business.
  • Estimable: Easily broken into tasks and estimated.
  • Small: Fits within a single Sprint.
  • Testable: Meets predefined acceptance criteria.

What Are Epics?

An epic is a collection of related user stories representing a large body of work. Think of user stories as individual chapters, while an epic is the entire book. For instance:

  • Epic: Website Creation
    • User Story 1: Customers can read book reviews online.
    • User Story 2: Customers can add books to their cart for borrowing.

Epics structure the Backlog, allowing teams to manage high-level ideas without diving into excessive detail upfront.

Writing Epics and Stories

Let’s say you’re creating a website for a library. Your epic might be “Website Creation.” Under this epic, individual user stories could include:

  1. As a user, I want to read reviews before borrowing books to choose what I like.
  2. As a user, I want to see recommendations based on my reading history to discover new books.

For the physical library space, another epic like “Organization of Physical Space” might include:

  1. As a visitor, I want clear signage to find the non-fiction section easily.

Acceptance Criteria for User Stories

Every user story must meet its acceptance criteria to be considered complete. For example, for a library website:

  • Users can browse reviews of at least 10 books.
  • Users can filter books by genre or rating.
  • Reviews include a verified purchase badge for authenticity.

Conclusion

User stories and epics are essential tools for creating a customer-centric Product Backlog. Focusing on user needs ensures that every feature delivers value and aligns with the product vision. The structured approach provided by the I.N.V.E.S.T. framework and the organization offered by epics enables teams to prioritize, collaborate, and execute effectively.

Whether writing a single user story or planning an epic, remember that every detail you define today helps your team build better products tomorrow. With these principles in mind, you’re ready to create Backlogs that guide development and delight your users.



Git Remote Management: Add, Rename, Change, and Remove Like a Pro

Introduction

Git is an essential version control system for developers. One of its most powerful features is its ability to work with remote repositories, allowing teams to collaborate seamlessly across geographies. Remote repositories, typically hosted on platforms like GitHub, provide a centralized location to push, pull, and share code.

In this article, we will dive into remote repository management in Git. Through practical examples, you’ll learn how to add, rename, change, and remove remote repositories. By the end, you’ll have the knowledge and confidence to manage remotes like a pro.

Objective

The goal of this blog post is to guide beginner developers and software engineers through the process of managing remote repositories in Git. Specifically, you’ll learn to:

  • Add a new remote repository to your local Git project.
  • Rename existing remotes for better organization.
  • Change the URL of a remote to update connection details.
  • Remove remotes that are no longer in use.

Whether you’re working on a personal project or contributing to a team on GitHub, understanding these Git commands will significantly improve your workflow.


1. Adding a Remote Repository

A remote repository is a version of your project hosted on an external server, such as GitHub. You need to link your local repository to this remote in order to synchronize changes.

Command: git remote add

To add a new remote to your Git repository, use the following syntax:

git remote add <remote_name> <remote_url>
Example:

Let’s say you want to add a new remote named origin for your GitHub repository:

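git remote add origin https://github.com/yourusername/your-repo.git

(The URL above is a placeholder; substitute your own repository’s HTTPS or SSH URL.)
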
To verify that the remote has been added successfully, use:

git remote -v

Output:
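origin  https://github.com/yourusername/your-repo.git (fetch)
origin  https://github.com/yourusername/your-repo.git (push)

(Assuming the placeholder URL from the example above.)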

Troubleshooting: “Remote origin already exists”

If you encounter the error:

fatal: remote origin already exists.

It means that a remote with the name origin has already been added. To resolve this:

  • Rename the existing remote (explained in the next section), or
  • Use a different remote name.

2. Renaming a Remote Repository

You might want to rename a remote for better clarity or organization, especially when you work with multiple remotes.

Command: git remote rename

To rename an existing remote, use:

git remote rename <old_name> <new_name>
  • <old_name>: The current name of the remote (e.g., origin).
  • <new_name>: The new name for the remote (e.g., upstream).
Example:

Let’s rename a remote called origin to upstream:

git remote rename origin upstream

Verify the change using:

git remote -v

Output:
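upstream  https://github.com/yourusername/your-repo.git (fetch)
upstream  https://github.com/yourusername/your-repo.git (push)

(Renaming changes only the remote’s name, not its URL; the placeholder URL is assumed as before.)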

Troubleshooting: “Remote [old_name] does not exist”

If the old remote name is incorrect or does not exist, you’ll get this error:

fatal: Could not rename config section 'remote.[old_name]' to 'remote.[new_name]'

Ensure the correct remote name by listing existing remotes:

git remote -v

3. Changing a Remote Repository’s URL

There are times when you need to change the URL of a remote, such as switching from HTTPS to SSH for authentication or moving the repository to a new location.

Command: git remote set-url

To update a remote URL, use:

git remote set-url <remote_name> <new_url>
  • <remote_name>: The name of the remote (e.g., origin).
  • <new_url>: The new URL for the remote repository.
Example:

Let’s update the origin remote to switch from HTTPS to SSH:

git remote set-url origin git@github.com:yourusername/your-repo.git

Verify the change:

git remote -v

Output:

origin  git@github.com:yourusername/your-repo.git (fetch)
origin  git@github.com:yourusername/your-repo.git (push)
Troubleshooting: “No such remote ‘[name]’”

If the specified remote does not exist, you’ll encounter:

fatal: No such remote '[name]'

Double-check the name of the remote with:

git remote -v

4. Removing a Remote Repository

You may need to remove a remote when it’s no longer relevant or the repository has moved elsewhere.

Command: git remote rm

To remove a remote, use the following (recent Git versions also accept the equivalent git remote remove):

git remote rm <remote_name>
  • <remote_name>: The name of the remote you want to remove.
Example:

Let’s remove a remote named upstream:

git remote rm upstream

Verify that it has been removed:

git remote -v

Output:
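(The upstream entry should no longer appear; if no other remotes are configured, git remote -v prints nothing.)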

Troubleshooting: “Could not remove config section ‘remote.[name]’”

This error means the remote you tried to remove does not exist:

error: Could not remove config section 'remote.[name]'

Double-check the remote’s existence by listing all remotes:

git remote -v

Conclusion

Mastering remote repository management in Git is a critical skill for any developer. Learning how to add, rename, change, and remove remotes ensures that your workflow stays organized, flexible, and efficient. Whether you’re working solo or collaborating with a team, these commands will help you easily handle repository remotes.

With this knowledge, you can push, pull, and clone repositories like a pro!

Real-Time Speech Translation with Azure: A Quick Guide

Introduction

The Azure Speech Translation service enables real-time, multi-language speech-to-speech and speech-to-text translation of audio streams. In this article, you will learn how to run an application to translate speech from one language to text in another language using Azure’s powerful tools.

Objective

By the end of this article, you will be able to create and deploy an application that translates speech from one language to text in another language.

Step 1: Creating a New Azure Cognitive Services Resource Using Azure Portal

Task 1: Create Azure Cognitive Speech Service Resource

  1. Open a tab in your browser and go to the Speech Services page. If prompted, sign in with your Azure credentials.

  2. On the Create page, provide the following information and click on Review + create:

    • Subscription: Select your subscription (this will be selected by default).
    • Resource group: Create a new group named azcogntv-rg1. Click on OK.
    • Region: East US
    • Name: CognitiveSpeechServicesResource
    • Pricing tier: Free F0

    Create Speech Service

  3. Once the validation passes, click on Create.

    Validation Passed

  4. Wait for the deployment to complete, then click on Go to resource.

    Deployment Complete

  5. Click on Keys and Endpoint from the left navigation menu. Copy and save Key 1 and Endpoint values in a notepad for later use.

    Keys and Endpoint

Task 2: Create Azure Cognitive Language Service Resource

  1. Open a new browser tab and go to the Language Services page. Sign in with your Azure credentials.

  2. Without selecting any option on the page, click on Continue to create your resource.

    Continue to Create Resource

  3. Update with the following details and then click on Review + Create:

    • Subscription: Your Azure subscription
    • Resource Group: Select azcogntv-rg1
    • Region: East US
    • Name: CognitivelanguageResourceXX (Replace XX with any random number)
    • Pricing tier: Free (F0)
    • Select checkbox: By checking this box, I certify that I have reviewed and acknowledged the Responsible AI Notice terms.

    Create Language Service

  4. Review the resource details and then click on Create.

    Review and Create

  5. Wait for the deployment to complete, and once successful, click on Go to resource group.

    Deployment Successful

  6. Click on CognitiveLanguageResource.

    Cognitive Language Resource

  7. Click on Keys and Endpoints > Show keys. Copy Key 1 and endpoint values and save them in a notepad for later use.

    Keys and Endpoints

Step 2: Recognizing and Translating Speech to Text

Task 1: Set Environment Variables

Your application must be authenticated to access Cognitive Services resources. Use environment variables to store your credentials securely.

  1. Open Command Prompt and run mkdir Speech-to-Text to create a directory. Then run cd Speech-to-Text to navigate into it.

    mkdir Speech-to-Text
    cd Speech-to-Text
    
  2. To set the SPEECH_KEY environment variable, replace your-key with one of the keys for your resource saved earlier.

    setx SPEECH_KEY your-key
    setx SPEECH_REGION eastus
    

    Set Environment Variables

  3. After adding the environment variables, restart any running programs that need to read the environment variable, including the console window. Close the Command Prompt and open it again.

Task 2: Translate Speech from a Microphone

  1. Open Command Prompt, navigate to your directory (cd Speech-to-Text), and create a console application with the .NET CLI.

    dotnet new console
    

    Create Console App

  2. Install the Speech SDK in your new project with the .NET CLI.

    dotnet add package Microsoft.CognitiveServices.Speech
    

    Install Speech SDK

  3. Open the Program.cs file in Notepad from the Speech-to-Text project folder. Replace the contents of Program.cs with the following code:

    using System;
    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.CognitiveServices.Speech;
    using Microsoft.CognitiveServices.Speech.Audio;
    using Microsoft.CognitiveServices.Speech.Translation;
    
    class Program
    {
        // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
        static string speechKey = Environment.GetEnvironmentVariable("SPEECH_KEY");
        static string speechRegion = Environment.GetEnvironmentVariable("SPEECH_REGION");
    
        static void OutputSpeechRecognitionResult(TranslationRecognitionResult translationRecognitionResult)
        {
            switch (translationRecognitionResult.Reason)
            {
                case ResultReason.TranslatedSpeech:
                    Console.WriteLine($"RECOGNIZED: Text={translationRecognitionResult.Text}");
                    foreach (var element in translationRecognitionResult.Translations)
                    {
                        Console.WriteLine($"TRANSLATED into '{element.Key}': {element.Value}");
                    }
                    break;
                case ResultReason.NoMatch:
                    Console.WriteLine($"NOMATCH: Speech could not be recognized.");
                    break;
                case ResultReason.Canceled:
                    var cancellation = CancellationDetails.FromResult(translationRecognitionResult);
                    Console.WriteLine($"CANCELED: Reason={cancellation.Reason}");
    
                    if (cancellation.Reason == CancellationReason.Error)
                    {
                        Console.WriteLine($"CANCELED: ErrorCode={cancellation.ErrorCode}");
                        Console.WriteLine($"CANCELED: ErrorDetails={cancellation.ErrorDetails}");
                        Console.WriteLine($"CANCELED: Did you set the speech resource key and region values?");
                    }
                    break;
            }
        }
    
        async static Task Main(string[] args)
        {
            var speechTranslationConfig = SpeechTranslationConfig.FromSubscription(speechKey, speechRegion);
            speechTranslationConfig.SpeechRecognitionLanguage = "en-US";
            speechTranslationConfig.AddTargetLanguage("it");
    
            using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
            using var translationRecognizer = new TranslationRecognizer(speechTranslationConfig, audioConfig);
    
            Console.WriteLine("Speak into your microphone.");
            var translationRecognitionResult = await translationRecognizer.RecognizeOnceAsync();
            OutputSpeechRecognitionResult(translationRecognitionResult);
        }
    }
    
  4. Run your new console application to start speech recognition from a microphone:

    dotnet run
    
  5. Speak into your microphone when prompted. What you speak should be output as translated text in the target language:

    Speak this: The Speech service provides speech-to-text and text-to-speech capabilities with an Azure Speech resource and then press Enter.
    
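
    If recognition and translation succeed, the console output should look roughly like this (the exact Italian wording may vary):

    RECOGNIZED: Text=The Speech service provides speech-to-text and text-to-speech capabilities with an Azure Speech resource.
    TRANSLATED into 'it': Il servizio Voce offre funzionalità di riconoscimento vocale e sintesi vocale con una risorsa Voce di Azure.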

    Speak Into Microphone

Conclusion

In this article, you translated speech from a microphone to a different language by updating the code in the Program.cs file. This powerful feature of Azure Cognitive Services allows for seamless and real-time translation, making it a valuable tool for various applications and industries.

Source Code

Quick Start: Create and Deploy C# Functions in Azure Using CLI

Introduction

This blog post will guide you through creating and deploying a C# function to Azure using command-line tools. You’ll build an HTTP-triggered function that runs on .NET 8 in an isolated worker process. By the end of this post, you will have a functional Azure Function that responds to HTTP requests.

Objective

In this article, you will:

  1. Install the Azure Functions Core Tools.
  2. Create a C# function that responds to HTTP requests.
  3. Test the function locally.
  4. Deploy the function to Azure.
  5. Access the function in Azure.

Prerequisites

Ensure you have the following installed:

  • Azure CLI (version 2.4 or later)
  • .NET SDK (version 6.0 and 8.0)
  • Azure Functions Core Tools (version 4.x)

Step 0: Install Azure Functions Core Tools

  1. Uninstall previous versions (if any):
    • Open the Settings from the Start menu.
    • Select Apps.
    • Click on Installed apps.
    • Find Azure Functions Core Tools, click the three dots next to it, and select Uninstall.
  2. Install the latest version:
    • Navigate to the Azure Functions Core Tools downloads page: Install the Azure Functions Core Tools.
    • Download the appropriate version of Azure Functions Core Tools for your operating system (the 64-bit version is recommended; Visual Studio Code debugging requires it).
    • Follow the prompts: Click Next, accept the agreement, and click Install.

    • Click Finish once the installation is complete.

Step 1: Prerequisite Check

  1. Open Command Prompt and execute the following commands to verify your setup:
    • func --version to check that Azure Functions Core Tools is version 4.x.
    • dotnet --list-sdks to check that the required .NET SDK versions (6.0 and 8.0) are installed.
    • az --version to check that the Azure CLI version is 2.4 or later.

  2. Run az login to sign in to Azure and verify an active subscription:
    • A browser window will open; select your Azure account to sign in.
    • The command prompt will display your Azure login details.

Step 2: Create a Local Function Project

  1. Initialize the function project: Run the func init command, as follows, to create a functions project in a folder named LocalFunctionProj with the specified runtime:
    func init LocalFunctionProj --worker-runtime dotnet-isolated --target-framework net8.0
    cd LocalFunctionProj
    

    This folder contains various files for the project, including configuration files named local.settings.json and host.json. Because local.settings.json can contain secrets downloaded from Azure, the .gitignore file excludes it from source control by default.

  2. Add a new HTTP-triggered function:
    func new --name HttpExample --template "HTTP trigger" --authlevel "anonymous"
    

  3. Examine the generated code:
     using Microsoft.Azure.Functions.Worker;
     using Microsoft.Extensions.Logging;
     using Microsoft.AspNetCore.Http;
     using Microsoft.AspNetCore.Mvc;
        
     namespace LocalFunctionProj
     {
         public class HttpExample
         {
             private readonly ILogger<HttpExample> _logger;
        
             public HttpExample(ILogger<HttpExample> logger)
             {
                 _logger = logger;
             }
        
             [Function("HttpExample")]
             public IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req)
             {
                 _logger.LogInformation("C# HTTP trigger function processed a request.");
                 return new OkObjectResult("Welcome to Azure Functions!");
             }
         }
     }
    

Step 3: Run the Function Locally

  1. Run your function by starting the local Azure Functions runtime host from the LocalFunctionProj folder:
    func start
    

    The output says that the worker process started and initialized. The function’s URL is also displayed.
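
    It should include a section roughly like the following (the port may differ):

    Functions:

            HttpExample: [GET,POST] http://localhost:7071/api/HttpExample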

  2. Test the function:
    • Copy the URL from the output and paste it into a browser.
    • You should see a message: “Welcome to Azure Functions!”

  3. Stop the function host:
    • Press Ctrl+C and confirm with y.

Step 4: Create Supporting Azure Resources

Before you can deploy your function code to Azure, you need to create three resources:

  • A resource group, which is a logical container for related resources.

  • A Storage account, which is used to maintain state and other information about your functions.

  • A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment, and sharing of resources.

You can use the following commands to create these items. Both Azure CLI and PowerShell are supported.

  1. Sign in to Azure:

    If you haven’t already done so, sign in to Azure with the az login command:

     az login
    
  2. Create a resource group:

    Create a resource group named RGForFunctionApp in your chosen region:

    az group create --name RGForFunctionApp --location eastus
    

    The az group create command creates a resource group. On success, the output shows the new resource group’s details with a provisioning state of Succeeded.

  3. Create a storage account:

    Create a general-purpose storage account in your resource group and region.

    az storage account create --name storaccforazfunc07 --location eastus --resource-group RGForFunctionApp --sku Standard_LRS --allow-blob-public-access false
    

    The az storage account create command creates a storage account named storaccforazfunc07 in the East US region. On success, the output shows the account details with a provisioning state of Succeeded. Storage account names must be globally unique, so you may need to choose a different name.


  4. Create the function app:

    Create the function app in Azure by executing the following command:

    az functionapp create --resource-group RGForFunctionApp --consumption-plan-location eastus --runtime dotnet-isolated --functions-version 4 --name appforfunc07 --storage-account storaccforazfunc07
    

    The az functionapp create command creates the function app in Azure.

    • storaccforazfunc07 is the storage account that we created in the previous step.

    • appforfunc07 is the name of the app that we create here. It needs to be globally unique.

Step 5: Deploy the Function Project to Azure

After successfully creating your function app in Azure, you’re ready to deploy your local functions project using the func azure functionapp publish command.

  1. Deploy the function:
    func azure functionapp publish appforfunc07
    

    After deployment, a URL will be provided. This is the Invoke URL for your function.

Step 6: Invoke the Function on Azure

  1. Invoke the function using a browser:
    • Copy the Invoke URL and paste it into a browser.
    • You should see the same “Welcome to Azure Functions!” message.

  2. View real-time logs:

    Start a log stream for your function app; the terminal shows a verbose log of the function’s execution in Azure:

    func azure functionapp logstream appforfunc07
    
    • Open another terminal or browser window and call the function URL again to see the real-time logs.
    • Press Ctrl+C to end the logstream session.

Step 7: Clean Up Resources

  1. Delete the resource group:

    Execute the following command to delete the resource group and all its contained resources. Type y when prompted to confirm, and press Enter.

    az group delete --name RGForFunctionApp
    
Conclusion

Congratulations! Using command-line tools, you’ve successfully created, tested, and deployed a C# function to Azure. This step-by-step guide has walked you through installing necessary tools, setting up a local development environment, creating and running a function locally, deploying it to Azure, and finally cleaning up the resources. Azure Functions provides a robust, serverless compute environment to build and quickly deploy scalable, event-driven applications. Happy coding!

 

Unleashing the Power of Azure AI: A Comprehensive Guide to Text-to-Speech Applications

Introduction:

In the dynamic landscape of application development, efficiency and cost-effectiveness are paramount. For developers working on projects that involve video narration, traditional methods of hiring vocal talent and managing studio resources can be both cumbersome and expensive. Enter Microsoft’s Azure AI services, offering a suite of APIs that empower developers to integrate cutting-edge text-to-speech capabilities into their applications. In this comprehensive guide, we’ll delve into the intricacies of Azure AI, providing step-by-step instructions and code snippets to help you harness its full potential for creating text-to-speech applications.

Objective:

  • Establish an Azure AI services account
  • Develop a command-line application for text-to-speech conversion using plain text
  • Provide detailed insights and code snippets for each stage of the process

Creating a Text-to-Speech Application using a Text File

Step 1: Creating an Azure AI Services Account

  1. Begin by navigating to the Azure portal and signing in with your credentials.
  2. Once logged in, locate the Azure AI services section and proceed to create a new account.

  3. In the Create Azure AI window, under the Basics tab, enter the following details and click on the Review+create button.

  4. In the Review+submit tab, once the Validation is Passed, click on the Create button.

  5. Wait for the deployment to complete. The deployment will take around 2-3 minutes.
  6. After the deployment is completed, click on the Go to resource button.

  7. In your AzureAI-text-speechXX window, navigate to the Resource Management section and click on Keys and Endpoints.

  8. Configure the account settings according to your requirements. On the Keys and Endpoints page, copy the KEY1, Region, and Endpoint values into a notepad and save it for later use.

Step 2: Create your text-to-speech application

  1. In the Azure portal, click on the [>_] (Cloud Shell) button at the top of the page to the right of the search box. A Cloud Shell pane will open at the bottom of the portal. The first time you open the Cloud Shell, you may be prompted to choose the type of shell you want to use (Bash or PowerShell). Select Bash. If you don’t see this option, then you can go ahead and skip this step.


  2. In the You have no storage mounted dialog box, click on Create storage.


  3. Ensure the type of shell indicated on the top left of the Cloud Shell pane is switched to Bash. If it’s PowerShell, switch to Bash by using the drop-down menu.


  4. In the Cloud Shell on the right, create a directory for your application, then switch to your new folder by entering the following commands:

    mkdir text-to-speech
    cd text-to-speech
    


  5. Enter the following command to create a new .NET Core application. This command should take a few seconds to complete.

    dotnet new console


  6. When your .NET Core application has been created, add the Speech SDK package to your application. This command should take a few seconds to complete.

    dotnet add package Microsoft.CognitiveServices.Speech


Step 3: Add the code for your text-to-speech application

  1. In the Cloud Shell on the right, open the Program.cs file using the following command.

        code Program.cs
    
  2. Replace the existing code with the following using statements, which enable the Azure AI Speech APIs for your application:

    using System.Text;
    using Microsoft.CognitiveServices.Speech;
    using Microsoft.CognitiveServices.Speech.Audio;
    


  3. Below the using statements, add the following code, which uses Azure AI Speech APIs to convert the contents of the text file you’ll create into a WAV file with the synthesized voice. Replace the azureKey and azureLocation values with the Key 1 and Region values you saved in Step 1.

    string azureKey = "ENTER YOUR KEY FROM THE FIRST EXERCISE";
    string azureLocation = "ENTER YOUR LOCATION FROM THE FIRST EXERCISE";
    string textFile = "Shakespeare.txt";
    string waveFile = "Shakespeare.wav";
        
    try
    {
        FileInfo fileInfo = new FileInfo(textFile);
        if (fileInfo.Exists)
        {
            string textContent = File.ReadAllText(fileInfo.FullName);
            var speechConfig = SpeechConfig.FromSubscription(azureKey, azureLocation);
            using var speechSynthesizer = new SpeechSynthesizer(speechConfig, null);
            var speechResult = await speechSynthesizer.SpeakTextAsync(textContent);
            using var audioDataStream = AudioDataStream.FromResult(speechResult);
            await audioDataStream.SaveToWaveFileAsync(waveFile);       
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        
    }
    


  4. This code uses your key and location to initialize a connection to Azure AI services, reads the contents of the text file you’ll create, uses the speech synthesizer’s SpeakTextAsync() method to convert the text to audio, and then saves the result to a WAV file through an audio data stream.
  5. To save your changes, press Ctrl+S, and then press Ctrl+Q to exit the editor.

Step 4: Create a text file for your application to read

  1. In the Cloud Shell on the right, create a new text file that your application will read:

    code Shakespeare.txt

  2. When the code editor appears, enter the following text.

    The following quotes are from act 2, scene 7, of William Shakespeare's play "As You Like It."
        
    Thou seest we are not all alone unhappy:
    This wide and universal theatre
    Presents more woeful pageants than the scene
    Wherein we play in.
        
    All the world's a stage,
    And all the men and women merely players:
    They have their exits and their entrances;
    And one man in his time plays many parts,
    His acts being seven ages.
    


  3. To save your changes, press Ctrl+S, and then press Ctrl+Q to exit the editor.

Step 5: Run your application

  1. To run your application, use the following command in the Cloud Shell on the right:

    dotnet run

  2. If you don’t see any errors, your application has run successfully. To verify, run the following command to list the files in the directory.

    ls -l

  3. You should see a listing roughly like the following, with the Shakespeare.wav file present:

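    An illustrative listing (sizes, dates, and build artifacts such as bin and obj will differ):

    -rw-r--r-- 1 user user    489 Jul  1 12:00 Shakespeare.txt
    -rw-r--r-- 1 user user 998444 Jul  1 12:00 Shakespeare.wav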

Step 6: Listen to the WAV file

To listen to the WAV file that your application created, you’ll first need to download it. Use the following steps.

  1. In the Cloud Shell on the right, use the following command to copy the WAV file to your temporary cloud drive:

    cp Shakespeare.wav ~/clouddrive


  2. In the Azure portal search box, type Storage account, then click on Storage account under Services.


  3. On the Storage accounts page, click the storage account created for Cloud Shell.


  4. In the storage account’s left navigation menu, under the Data storage section, click File shares.


  5. Then select your cloudshellfilesXXX file share.


  6. When your cloudshellfilesXXX file share page is displayed, select Browse, select the Shakespeare.wav file, and then select the Download icon.


  7. Download the Shakespeare.wav file to your computer, where you can listen to it with your operating system’s audio player.


Conclusion:
Following the comprehensive instructions and the provided code snippets, you can seamlessly leverage Azure AI services to integrate text-to-speech capabilities into your applications. Azure AI empowers developers to enhance user experiences and streamline workflow processes. Embrace the power of Azure AI and unlock new possibilities for your projects.

Dive Deep: Unveiling DevExpress Splash Screen in Your Winforms App

Introduction:

In today's fast-paced digital world, user experience plays a pivotal role in the success of any application. One aspect that significantly contributes to a positive user experience is the loading screen or splash screen. A well-designed splash screen enhances the aesthetic appeal of your application and provides users with visual feedback during the loading process, reducing perceived wait times.

In this tutorial, we'll explore implementing a splash screen in a Winforms application using DevExpress, a powerful suite of UI controls and components. By the end of this tutorial, you'll have a sleek and professional-looking splash screen integrated into your Winforms application, enhancing its overall user experience.

Step 1: Setting Up Your Winforms Project

Before implementing the DevExpress splash screen, let's set up a basic Winforms project in Visual Studio.

  1. Open Visual Studio and create a new Winforms project.
  2. Name your project and choose a location to save it.
  3. Once the project is created, you'll see the default form in the designer view.

Step 2: Installing DevExpress

You must install the DevExpress NuGet package to use the DevExpress controls and components in your Winforms project.

  1. Right-click on your project in Solution Explorer.
  2. Select "Manage NuGet Packages" from the context menu.
  3. In the NuGet Package Manager, search for "DevExpress" and install the appropriate package for your project.

Step 3: Adding a Splash Screen Form

Now, let's create a new form for our splash screen.

  1. Right-click on your project in Solution Explorer.
  2. Select "Add DevExpress Item" from the context menu.
  3. Select "Splash Screen" from the DevExpres Template Gallery.
  4. Name the form "SkinnedSplashScreen" (to match the code below) and click "Add Item".
  5. Design your splash screen form using DevExpress controls to customize its appearance according to your preferences. You can add images, animations, and progress indicators to make it visually appealing.

Step 4: Configuring Application Startup

Next, we must configure our application to display the splash screen during startup.

  1. Open the Program.cs file in your project.

  2. Locate the call to Application.Run, typically found within the Main method.

  3. Before calling Application.Run, create and display an instance of your splash screen form.

     static void Main()
     {
         Application.EnableVisualStyles();
         Application.SetCompatibleTextRenderingDefault(false);
     
         //Application.Run(new MainFormWithSplashScreenManager());
         var form = new MainForm();
         DevExpress.XtraSplashScreen.SplashScreenManager.ShowForm(form, typeof(SkinnedSplashScreen));
         //...
         //Authentication and other activities here
         Bootstrap.Initialize();                        
     
         DevExpress.XtraSplashScreen.SplashScreenManager.CloseForm();                      
         Application.Run(form);
     }
    
     internal class Bootstrap
     {
         internal static void Initialize()
         {
             // Add initialization logic here
             //Authentication and other activities here
             LoadResources();            
             
         }
         private static void LoadResources()
         {
             // Perform resource loading tasks
             // Example: Load configuration settings, connect to a database, etc.
    
             Thread.Sleep(1000);//For testing
         }
     }
     

Step 5: Adding Splash Screen Logic

Now that our splash screen is displayed during application startup, let's add some logic to control its behaviour.

  1. Open the SkinnedSplashScreen.cs file.

  2. Add any initialization logic or tasks that must be performed while the splash screen is displayed. For example, you can load resources, perform database connections, or initialize application settings.

     public partial class SkinnedSplashScreen : SplashScreen
     {
         public SkinnedSplashScreen()
         {
             InitializeComponent();
             this.labelCopyright.Text = "Copyright © 1998-" + DateTime.Now.Year.ToString();
         }
    
         #region Overrides
    
         public override void ProcessCommand(Enum cmd, object arg)
         {
             base.ProcessCommand(cmd, arg);
         }
    
         #endregion
    
         public enum SplashScreenCommand
         {
         }
    
         private void SkinnedSplashScreen_Load(object sender, EventArgs e)
         {
             
         }
     }
     

Step 6: Testing Your Application

With the splash screen implemented, it's time to test your Winforms application.

  1. Build your project to ensure there are no compilation errors.
  2. Run the application and observe the splash screen displayed during startup.
  3. Verify that the application functions correctly after the splash screen closes.

See the following topic for information on how to execute code when your application starts: How to: Perform Actions On Application Startup.

Conclusion:

In this tutorial, we've learned how to implement a splash screen in a Winforms application using DevExpress. By following these steps, you can enhance the user experience of your application by providing visual feedback during the loading process. You can customize the splash screen further to match the branding and style of your application and experiment with different animations and effects to create a memorable first impression for your users.

References
Splash Screen
Splash Screen Manager

Source Code

Leveraging Consul for Service Discovery in Microservices with .NET Core

Introduction:

In a microservices architecture, service discovery is pivotal in enabling seamless communication between services. Imagine having a multitude of microservices running across different ports and instances and the challenge of locating and accessing them dynamically. This is where Consul comes into play.

Introduction to Consul:
Consul, a distributed service mesh solution, offers robust service discovery, health checking, and key-value storage features. In this tutorial, we’ll explore leveraging Consul for service discovery in a .NET Core environment. We’ll set up Consul, create a .NET Core API for service registration, and develop a console application to discover the API using Consul.

Step 1: Installing Consul:
Before integrating Consul into our .NET Core applications, we need to install Consul. Follow these steps to install Consul:

  1. Navigate to the Consul downloads page: Consul Downloads.
  2. Download the appropriate version of Consul for your operating system.
  3. Extract the downloaded archive to a location of your choice. 
  4. Add the Consul executable to your system’s PATH environment variable to run it from anywhere in the terminal or command prompt. 
  5. Open a terminal or command prompt and verify the Consul installation by running the command consul --version.
  6. Run the Consul server in development mode with the command consul agent -dev.

Step 2: Setting Up the Catalog API:

Now, let’s create a .NET Core API project named ServiceDiscoveryTutorials.CatalogApi. This API will act as a service that needs to be discovered by other applications. Use the following command to create the project:

dotnet new webapi -n ServiceDiscoveryTutorials.CatalogApi

Next, configure the API to register with Consul upon startup. Add the Consul client package to the project:

dotnet add package Consul

In the Startup.cs file, configure Consul service registration in the ConfigureServices method:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    // Read the Consul agent's address from configuration (see the appsettings.json sketch below).
    services.AddSingleton<IConsulClient>(p => new ConsulClient(consulConfig =>
    {
        var consulHost = Configuration["Consul:Host"];
        var consulPort = Convert.ToInt32(Configuration["Consul:Port"]);
        consulConfig.Address = new Uri($"http://{consulHost}:{consulPort}");
    }));

    services.AddSingleton<IServiceDiscovery, ConsulServiceDiscovery>();
}

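The registration above reads the Consul agent’s host and port from configuration. A minimal appsettings.json entry for a local development agent (values assumed; 8500 is Consul’s default HTTP API port) might look like this:

{
  "Consul": {
    "Host": "localhost",
    "Port": 8500
  }
}
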
Create a class named ConsulServiceDiscovery that implements the IServiceDiscovery interface to handle service registration:

public interface IServiceDiscovery
{
    Task RegisterServiceAsync(string serviceName, string serviceId, string serviceAddress, int servicePort);
    Task RegisterServiceAsync(AgentServiceRegistration serviceRegistration);
    
    Task DeRegisterServiceAsync(string serviceId);
}

public class ConsulServiceDiscovery : IServiceDiscovery
{
    private readonly IConsulClient _consulClient;

    public ConsulServiceDiscovery(IConsulClient consulClient)
    {
        _consulClient = consulClient;
    }

    public async Task RegisterServiceAsync(string serviceName, string serviceId, string serviceAddress, int servicePort)
    {
        var registration = new AgentServiceRegistration
        {
            ID = serviceId,
            Name = serviceName,
            Address = serviceAddress,
            Port = servicePort
        };
        await _consulClient.Agent.ServiceDeregister(serviceId);
        await _consulClient.Agent.ServiceRegister(registration);
    }

    public async Task DeRegisterServiceAsync(string serviceId)
    {
        await _consulClient.Agent.ServiceDeregister(serviceId);
    }

    public async Task RegisterServiceAsync(AgentServiceRegistration registration)
    {
        await _consulClient.Agent.ServiceDeregister(registration.ID);
        await _consulClient.Agent.ServiceRegister(registration);
    }
}

In the Configure method of Startup.cs, add the service registration logic:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env, IHostApplicationLifetime lifetime, IServiceDiscovery discovery)
{
    // Configure the HTTP request pipeline.
    if (env.IsDevelopment())
    {
        app.UseSwagger();
        app.UseSwaggerUI();
    }

    //app.UseHttpsRedirection();

    app.UseRouting();
    app.UseAuthorization();

    app.UseEndpoints(endpoints => endpoints.MapControllers());

    var serviceName = "CatalogApi";
    var serviceId = Guid.NewGuid().ToString();
    var serviceAddress = "localhost";
    var servicePort = 7269;

    // Register with Consul once the app has started listening.
    lifetime.ApplicationStarted.Register(async () =>
    {
        var registration = new AgentServiceRegistration
        {
            ID = serviceId,
            Name = serviceName,
            Address = serviceAddress,
            Port = servicePort,
            Check = new AgentServiceCheck
            {
                // Consul polls this endpoint to determine service health.
                HTTP = $"https://{serviceAddress}:{servicePort}/Health",
                Interval = TimeSpan.FromSeconds(10),
                Timeout = TimeSpan.FromSeconds(5)
            }
        };
        await discovery.RegisterServiceAsync(registration);
    });

    // Deregister from Consul on shutdown.
    lifetime.ApplicationStopping.Register(async () =>
    {
        await discovery.DeRegisterServiceAsync(serviceId);
    });
}

With these configurations, the Catalog API will register itself with Consul upon startup and deregister upon shutdown.

Step 3: Creating the Client Application:

Next, create a console application named ServiceDiscoveryTutorials.ClientApp. Use the following command to create the project:

dotnet new console -n ServiceDiscoveryTutorials.ClientApp

Add the Consul client package to the project:

dotnet add package Consul

In the Program.cs file, configure the Consul client to discover services:

class Program
{
    static async Task Main(string[] args)
    {
        using (var client = new ConsulClient(consulConfig =>
        {
            consulConfig.Address = new Uri("http://localhost:8500");
        }))
        {
            var services = await client.Catalog.Service("CatalogApi");
            foreach (var service in services.Response)
            {
                Console.WriteLine($"Service ID: {service.ServiceID}, Address: {service.ServiceAddress}, Port: {service.ServicePort}");
            }
        }
        //var consulClient = new ConsulClient();
        //// Specify the service name to discover
        //string serviceName = "CatalogApi";
        //// Query Consul for healthy instances of the service
        //var services = (await consulClient.Health.Service(serviceName, tag: null, passingOnly: true)).Response;
        //// Iterate through the discovered services
        //foreach (var service in services)
        //{
        //    var serviceAddress = service.Service.Address;
        //    var servicePort = service.Service.Port;
        //    Console.WriteLine($"Found service at {serviceAddress}:{servicePort}");
        //    // You can now use the serviceAddress and servicePort to communicate with the discovered service.
        //}

    }
}

This code snippet retrieves all instances of the CatalogApi service registered with Consul.

Step 4: Testing the API and Client Application:

Below is the project structure in Visual Studio.

Next, run both applications with the dotnet run command. When the API starts, the Consul portal displays the registered service.

Below are the final results of the application.

Conclusion:
In this tutorial, we’ve learned how to set up Consul for service discovery and register a .NET Core API with Consul. Additionally, we’ve developed a console application to discover services using Consul’s API. By leveraging Consul, you can enhance the scalability and reliability of your microservices architecture.

Source Code

Building Resilient Microservices: Implementing Resiliency Patterns with Polly Framework

Resiliency is critical to building distributed systems, especially in microservices architectures where failures are inevitable. In this comprehensive guide, we'll explore how to implement resiliency patterns using the Polly framework in .NET Core. We'll cover the retry, circuit breaker, and fallback patterns, each with detailed examples to help you understand their implementation and benefits.

Introduction to Polly Framework

Polly is a robust resilience and transient-fault-handling library for .NET designed to help developers quickly implement resiliency patterns. It provides a fluent interface for defining policies for retry, circuit breaker, and fallback strategies.

Retry Pattern

The retry pattern allows you to automatically retry an operation that has failed due to transient faults, such as network errors or temporary unavailability of resources. Let's dive into a step-by-step implementation of the retry pattern using Polly.

  1. Install Polly NuGet Package: First, install the Polly NuGet package in your .NET Core application.

    Install-Package Polly
     
  2. Create a Retry Policy: Use Polly's fluent syntax to define a retry policy. Specify the number of retry attempts and the duration between retries.

    var retryPolicy = Policy
        .Handle<Exception>()
        .WaitAndRetry(5, retryAttempt => TimeSpan.FromSeconds(5));
     
  3. Execute the Operation with Retry: Use the retry policy to execute the operation you want to retry.

    retryPolicy.Execute(() =>
    {
        // Perform the operation that may fail
        YourOperation();
    });
     
  4. Handle Exceptions: Polly will handle exceptions thrown by the operation and retry it according to the retry policy.

Circuit Breaker Pattern

The circuit breaker pattern is used to prevent repeated execution of an operation that is likely to fail, thereby reducing the load on the system. Let's see how to implement the circuit breaker pattern with Polly.

  1. Create a Circuit Breaker Policy: Define a circuit breaker policy specifying the number of consecutive failures before the circuit is opened and the duration of the open state.

    var circuitBreakerPolicy = Policy
        .Handle<Exception>()
        .CircuitBreaker(3, TimeSpan.FromSeconds(30));
     
  2. Execute the Operation with Circuit Breaker: Use the circuit breaker policy to execute the operation.

    circuitBreakerPolicy.Execute(() =>
    {
        // Perform the operation that may fail
        YourOperation();
    });
     
  3. Handle Circuit Breaker State: Polly will manage the circuit breaker state internally, transitioning between closed, open, and half-open states based on the defined thresholds.

Fallback Pattern

The fallback pattern provides an alternative behaviour or value when an operation fails. It helps gracefully handle failures by providing a fallback mechanism. Let's implement the fallback pattern using Polly.

  1. Define a Fallback Policy: Create a fallback policy specifying the fallback action to be executed when the primary operation fails.

    var fallbackPolicy = Policy
        .Handle<Exception>()
        .Fallback(() =>
        {
            // Perform fallback operation
            FallbackOperation();
        });
     
  2. Execute the Operation with Fallback: Use the fallback policy to execute the primary operation, with fallback behaviour defined.

    fallbackPolicy.Execute(() =>
    {
        // Perform the primary operation
        YourOperation();
    });
     
  3. Handle Fallback: Polly will execute the fallback action when the primary operation fails, ensuring graceful functionality degradation.

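These policies can also be composed. Below is a minimal sketch using Polly’s PolicyWrap, reusing the retryPolicy, circuitBreakerPolicy, and fallbackPolicy defined in the examples above:

    // Compose the policies: retries run innermost, the circuit breaker
    // watches the retried calls, and the fallback handles anything that
    // still fails after that.
    var resilientPolicy = fallbackPolicy.Wrap(circuitBreakerPolicy.Wrap(retryPolicy));

    resilientPolicy.Execute(() =>
    {
        // Perform the operation that may fail
        YourOperation();
    });
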
Conclusion

Implementing resiliency patterns like retry, circuit breaker, and fallback using the Polly framework can significantly enhance the reliability and robustness of your microservices architecture. By intelligently handling transient faults and failures, you can ensure that your application remains responsive and available under challenging conditions. Experiment with these patterns in your microservices projects to build more resilient, fault-tolerant systems.

Source Code

Securing Your Microservices: Azure B2C Authentication in ASP.NET Core API with Ocelot API Gateway

Introduction:
Microservices architecture offers flexibility and scalability but also presents challenges in managing authentication and authorization across multiple services. In this blog post, we will explore how to secure your microservices using Azure B2C authentication in ASP.NET Core API with Ocelot API Gateway. We’ll start by configuring Azure B2C for authentication and then integrate it with our ASP.NET Core API through Ocelot.

Prerequisites:

  1. Azure Subscription: You’ll need an Azure subscription to create and configure Azure B2C resources.
  2. Azure AD B2C Tenant: If you haven’t already created your own Azure AD B2C tenant, create one now. You can also use an existing tenant.
  3. Visual Studio or Visual Studio Code: We’ll use Visual Studio or Visual Studio Code to create and run the ASP.NET Core API project.
  4. .NET Core SDK: Ensure that the .NET Core SDK is installed on your development machine.
  5. Azure CLI (Optional): Azure CLI provides a command-line interface for interacting with Azure resources. It’s optional but can help manage Azure resources.

Step 1: App registrations

  1. Sign in to the Azure portal (https://portal.azure.com) using your Azure account credentials.
  2. Navigate to the Azure Active Directory service and select App registrations.
  3. Click on “+ New registration” to create a new application registration.
  4. Provide a name for your application, select the appropriate account type, and specify the redirect URI for authentication callbacks.
  5. After creating the application registration, note down the Application (client) ID and Directory (tenant) ID. 

Step 2: Create a client secret

  1. Once the application is registered, note the Application (client) ID and Directory (tenant) ID.
  2. If you are not on the application management screen, go to the Azure AD B2C - App registrations page and select the application you created.
  3. In the left menu, under Manage, select Certificates & secrets.
  4. Under “Certificates & secrets”, generate a new client secret by clicking on New client secret.
  5. Enter a description of the client’s secret in the Description box. For example, Ocelotsecret.
  6. Under Expires, select a duration for which the secret is valid, and then click Add.
  7. Copy the secret’s Value for use in your client application code and save it securely.

Step 3: Configure scopes

  1. If you are not on the application management screen, go to the Azure AD B2C - App registrations page.
  2. Select the OcelotTutorials application to open its Overview page.
  3. Under Manage, select Expose an API.
  4. Next to the Application ID URI, select the Add link.
  5. You can keep the default Application ID URI value (a GUID) or replace it with a friendlier name, and then select Save. The full URI is shown and should be in the format https://your-tenant-name.onmicrosoft.com/api. When your web application requests an access token for the API, it should add this URI as the prefix for each scope you define for the API.
  6. Under Scopes defined by this API, select Add a scope.

  7. Enter the following values to create a scope that defines read access to the API, then select Add scope:

    Scope name: ocelottutorial.read
    Admin consent display name: Read access to API Gateway API
    Admin consent description: Allows read access to the API Gateway API

Step 4: Grant permissions

  1. Select App registrations and then the web application that should have access to the API, such as OcelotTutorials.
  2. Under Manage, select API permissions.
  3. Under Configured permissions, select Add a permission.
  4. Select the My APIs tab.
  5. Select the API to which the web application should be granted access. For example, webapi1.
  6. Under Permission, expand API Name, and then select the scope that you defined earlier, for example, ocelottutorial.read.
  7. Select Add permissions.
  8. Select Grant admin consent for (your tenant name).
  9. If you’re prompted to select an account, select your currently signed-in administrator account, or sign in with an account in your Azure AD B2C tenant that’s been assigned at least the Cloud application administrator role.
  10. Select Yes. 
  11. Select Refresh, and then verify that “Granted for …” appears under Status for the scope.

Step 5: Enable ID token implicit grant

If you registered this app and configured it with the https://jwt.ms/ app for testing a user flow or custom policy, you need to enable the implicit grant flow in the app registration:

  1. In the left menu, under Manage, select Authentication.
  2. Under Implicit grant and hybrid flows, select both the Access tokens (used for implicit flows) and ID tokens (used for implicit and hybrid flows) checkboxes.
  3. Select Save. 

Step 6: Set Up Azure B2C Authentication in ASP.NET Core API

  1. Create 3 new ASP.NET Core Web API projects in Visual Studio or Visual Studio Code.
    Accounting.API
    Inventory.API
    ApiGateway

  2. Assign ports to the APIs: ApiGateway 9000, Accounting.API 9001, Inventory.API 9002. For example, in Accounting.API’s appsettings.json:
     {
       "Urls": "http://localhost:9001",
       "Logging": {
         "LogLevel": {
           "Default": "Information",
           "Microsoft.AspNetCore": "Warning"
         }
       },
       "AllowedHosts": "*"
     }
    
  3. Install the necessary NuGet packages for Azure B2C authentication and Ocelot in the ApiGateway project:

     dotnet add package Microsoft.Identity.Web
     dotnet add package Ocelot
    
  4. Configure Azure B2C authentication in Program.cs:

     builder.Services.AddMicrosoftIdentityWebApiAuthentication(builder.Configuration);
    
  5. Add the Azure B2C settings to your appsettings.json file:

     {
       "Urls": "http://localhost:9000",
       "Logging": {
         "LogLevel": {
           "Default": "Information",
           "Microsoft.AspNetCore": "Warning"
         }
       },
       "AllowedHosts": "*",
       "AzureAd": {
         "Instance": "https://login.microsoftonline.com/",
         "Domain": "http://localhost:9000/",
         "TenantId": "",
         "ClientId": ""
       }
     }
    
  6. Ensure that the authentication middleware is added to the request processing pipeline in Program.cs:

     app.UseAuthentication(); // Authenticate first, before UseOcelot
     app.UseAuthorization();  // Authorize after authentication, still before UseOcelot
    
  7. Add the ocelot.json file to the ApiGateway project with the configuration below. Each route maps an upstream path on the gateway (for example, /accounting) to a downstream service (for example, http://localhost:9001/api/values):
     {
       "Routes": [
         {
           "DownstreamPathTemplate": "/api/values",
           "DownstreamScheme": "http",
           "DownstreamHostAndPorts": [
             {
               "Host": "localhost",
               "Port": 9001
             }
           ],
           "UpstreamPathTemplate": "/accounting",
           "UpstreamHttpMethod": [ "GET" ],
           "AuthenticationOptions": {
             "AuthenticationProviderKey": "Bearer",
             "AllowedScopes": []
           }
         },
        
         {
           "DownstreamPathTemplate": "/api/values",
           "DownstreamScheme": "http",
           "DownstreamHostAndPorts": [
             {
               "Host": "localhost",
               "Port": 9002
             }
           ],
           "UpstreamPathTemplate": "/inventory",
           "UpstreamHttpMethod": [ "GET" ],
           "AuthenticationOptions": {
             "AuthenticationProviderKey": "Bearer",
             "AllowedScopes": []
           }
         }    
       ],
       "GlobalConfiguration": {
         "BaseUrl": "http://localhost:9000"
       }
     } 
    
  8. Add the Ocelot configuration to the services:
    // Ocelot configuration
    builder.Configuration.AddJsonFile("ocelot.json", optional: false, reloadOnChange: true);
    builder.Services.AddOcelot(builder.Configuration);
    
  9. Add Ocelot at the end of the middleware pipeline:
    app.UseAuthentication(); // Authenticate first
    app.UseAuthorization();  // Authorize after authentication, before UseOcelot
    app.MapControllers();
    app.UseOcelot().Wait();
    app.Run();
    
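Putting steps 4, 8, and 9 together, a minimal Program.cs for the ApiGateway could look like the sketch below. It only rearranges the code from the steps above, so treat it as one possible layout rather than the required one:

     using Microsoft.Identity.Web;
     using Ocelot.DependencyInjection;
     using Ocelot.Middleware;

     var builder = WebApplication.CreateBuilder(args);

     // Load the Ocelot route table alongside appsettings.json
     builder.Configuration.AddJsonFile("ocelot.json", optional: false, reloadOnChange: true);

     // JWT bearer authentication driven by the "AzureAd" section of appsettings.json
     builder.Services.AddMicrosoftIdentityWebApiAuthentication(builder.Configuration);
     builder.Services.AddOcelot(builder.Configuration);
     builder.Services.AddControllers();

     var app = builder.Build();

     app.UseAuthentication(); // Authenticate first
     app.UseAuthorization();  // Then authorize, before Ocelot forwards the request
     app.MapControllers();

     await app.UseOcelot();   // Same effect as app.UseOcelot().Wait(), without blocking

     app.Run();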

Step 7: Testing authentication

To test this, refer to the tutorial “OAuth 2.0 authorization code flow in Azure Active Directory B2C”.

  1. Replace the placeholder fields and open the URL below in a browser to obtain the authorization code used to fetch the token. https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?client_id={client id}&response_type=code&response_mode=query&scope={scope uri}&state=007

  2. Open Postman and exchange the returned code for a token by sending a POST request to the endpoint below, with grant_type=authorization_code and the client_id, client_secret, scope, code, and redirect_uri fields in the request body.
    https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token

  3. Now we are ready to call our API Gateway with the token. 
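
If you prefer code to Postman, here is a minimal C# sketch of the same flow, assuming a .NET 6+ console app with implicit usings. All values in braces are placeholders from the steps above, and redirect_uri must match a redirect URI registered for the app:

     using System.Net.Http.Headers;
     using System.Text.Json;

     using var http = new HttpClient();

     // 1. Exchange the authorization code for an access token
     var tokenResponse = await http.PostAsync(
         "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token",
         new FormUrlEncodedContent(new Dictionary<string, string>
         {
             ["grant_type"] = "authorization_code",
             ["client_id"] = "{client id}",
             ["client_secret"] = "{client secret}",
             ["scope"] = "{scope uri}",
             ["code"] = "{code returned in the browser}",
             ["redirect_uri"] = "{redirect uri}"
         }));

     var payload = await tokenResponse.Content.ReadAsStringAsync();
     var accessToken = JsonDocument.Parse(payload)
         .RootElement.GetProperty("access_token").GetString();

     // 2. Call the API Gateway with the bearer token
     http.DefaultRequestHeaders.Authorization =
         new AuthenticationHeaderValue("Bearer", accessToken);
     var result = await http.GetAsync("http://localhost:9000/accounting");
     Console.WriteLine($"{(int)result.StatusCode}: {await result.Content.ReadAsStringAsync()}");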

Conclusion:
In this blog post, we’ve covered securing your microservices architecture using Azure B2C authentication. We walked through the process of configuring Azure B2C for authentication, including creating a tenant, setting up user flows (policies), registering an application and its scopes, and integrating Azure B2C authentication with an Ocelot API Gateway in an ASP.NET Core project. In the next part of this series, we’ll build on this setup to explore centralized authorization management across microservices.

References:
Tutorial: Register a web application in Azure Active Directory B2C
Add a web API application to your Azure Active Directory B2C tenant

Source Code

Leveraging RabbitMQ with C# and .NET: A Comprehensive Guide

In today's interconnected world, efficient data transfer between applications is crucial for smooth operations. Whether it's processing large volumes of requests or orchestrating tasks across distributed systems, having a reliable message broker is essential. RabbitMQ, an open-source message broker, provides a robust solution for building scalable and decoupled applications.

Introduction

This comprehensive guide will explore leveraging RabbitMQ with C# and .NET to build resilient and flexible messaging systems. From installation to advanced message routing techniques, we'll cover everything you need to know to get started with RabbitMQ.

Overview of RabbitMQ

RabbitMQ is a powerful message broker that facilitates communication between different components of an application. It implements the Advanced Message Queuing Protocol (AMQP), providing a standardized way for applications to exchange messages. With RabbitMQ, you can decouple your application components, making them more resilient to failures and easier to scale.

Why RabbitMQ?

  • Cross-platform Compatibility: RabbitMQ runs on multiple platforms, including Windows and Linux, making it suitable for various environments.
  • Language Agnostic: It supports multiple programming languages, allowing you to build applications in your language of choice.
  • Persistence Options: RabbitMQ offers in-memory and disk-based message storage options, giving you flexibility in managing message durability.
  • Scalability: With support for clustering and high availability, RabbitMQ can handle large volumes of messages and scale horizontally as your application grows.

Installation and Setup

Step 1: Installing Erlang Runtime and RabbitMQ Server

Before using RabbitMQ, we need to install the Erlang runtime and RabbitMQ server. Follow these steps to install RabbitMQ on your system:

  1. Download the latest Erlang runtime from erlang.org and install it on your machine.
  2. Download the latest RabbitMQ server release from rabbitmq.com and unzip the folder to a location on your hard drive.
  3. Set the ERLANG_HOME environment variable to the Erlang installation directory. For example:
    setx ERLANG_HOME "C:\Program Files\erl10.6"
     
  4. Install RabbitMQ as a Windows service by running the following commands in a console:
    rabbitmq-service /install
    rabbitmq-service /enable
    rabbitmq-service /start
     

Step 2: Configuring RabbitMQ

After installing RabbitMQ, you can use the rabbitmqctl command-line tool to manage the server. Start by ensuring that the server is running:

rabbitmqctl status
 

You can secure your RabbitMQ instance by creating a dedicated user and removing the default guest user (the permissions below grant the new user full access to the default virtual host):

rabbitmqctl add_user myuser mypassword
rabbitmqctl set_permissions myuser ".*" ".*" ".*"
rabbitmqctl delete_user guest
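
You can verify the change by listing the remaining users:

rabbitmqctl list_users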

Accessing RabbitMQ

RabbitMQ Enable Web Management Plugin

To enable the RabbitMQ web management plugin on Windows, start the RabbitMQ Command Prompt with administrator privileges and run the following command:

rabbitmq-plugins enable rabbitmq_management

After executing the command, the web management plugin will be enabled and the list of enabled plugins will be displayed.

After enabling the RabbitMQ web management plugin, enter the following URL in your browser and press Enter to open it.

http://localhost:15672

After opening the localhost URL in the browser, it will ask you for credentials to access the web management plugin.

To access the RabbitMQ web management dashboard, use the default username and password “guest” (Username: “guest” | Password: “guest”). If you removed the guest user earlier, sign in with the user you created instead.

You will see an overview screen after logging in with the default credentials.

Working with RabbitMQ in .NET

Let's integrate RabbitMQ with our .NET applications using the RabbitMQ .NET client library and C#.

Setting up RabbitMQ Connection

To establish a connection to RabbitMQ from a .NET application, first install the RabbitMQ.Client NuGet package (dotnet add package RabbitMQ.Client), then configure a connection factory:

using RabbitMQ.Client;

// Connect with the user created during setup
var connectionFactory = new ConnectionFactory
{
    HostName = "localhost",
    UserName = "myuser",
    Password = "mypassword"
};

using (var connection = connectionFactory.CreateConnection())
{
    // Create and configure channel
}
 

Working with Exchanges and Queues

RabbitMQ uses exchanges and queues to route messages between producers and consumers. Let's create an exchange and a queue and bind them together:

using (var model = connection.CreateModel())
{
    // Declare a durable fanout exchange
    model.ExchangeDeclare("MyExchange", ExchangeType.Fanout, durable: true);

    // Declare a durable, shared queue (not exclusive, not auto-deleted)
    model.QueueDeclare("MyQueue", durable: true, exclusive: false, autoDelete: false, arguments: null);

    // Bind the queue to the exchange (a fanout exchange ignores the routing key)
    model.QueueBind("MyQueue", "MyExchange", routingKey: "", arguments: null);
}
 

Publishing and Consuming Messages

Now that we have our exchange and queue set up, let's publish a message to the exchange and consume it from the queue. Note that the model channel must still be open, so this code belongs inside the using block above:

// Requires: using RabbitMQ.Client.Events; and using System.Text;

// Publish message
string message = "Hello, RabbitMQ!";
var body = Encoding.UTF8.GetBytes(message);
model.BasicPublish("MyExchange", "", null, body);

// Consume message
var consumer = new EventingBasicConsumer(model);
consumer.Received += (sender, args) =>
{
    var messageBody = Encoding.UTF8.GetString(args.Body.ToArray());
    Console.WriteLine($"Received message: {messageBody}");
};
model.BasicConsume("MyQueue", autoAck: true, consumer);
 
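The consumer above uses automatic acknowledgements, so RabbitMQ considers a message delivered as soon as it is pushed to the client. If you need at-least-once processing, a variation with manual acknowledgements looks like this (a sketch reusing the model channel and queue from above):

var reliableConsumer = new EventingBasicConsumer(model);
reliableConsumer.Received += (sender, args) =>
{
    var messageBody = Encoding.UTF8.GetString(args.Body.ToArray());
    Console.WriteLine($"Received message: {messageBody}");

    // Acknowledge only after the message has been processed successfully
    model.BasicAck(args.DeliveryTag, multiple: false);
};

// autoAck: false means unacknowledged messages are redelivered if the consumer fails
model.BasicConsume("MyQueue", autoAck: false, reliableConsumer);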

Performance Considerations

RabbitMQ offers strong performance, even under heavy load. By optimizing message delivery and consumption, for example by tuning prefetch counts and acknowledgement strategies, you can achieve high throughput and low latency. You can experiment with different configurations and message persistence options to find the best setup for your use case.
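
For example, to survive a broker restart you can combine the durable queue declared earlier with persistent messages. The sketch below reuses the model channel and MyExchange from the snippets above:

var properties = model.CreateBasicProperties();
properties.Persistent = true; // store the message on disk, not only in memory

var durableBody = Encoding.UTF8.GetBytes("Hello, durable RabbitMQ!");
model.BasicPublish("MyExchange", "", properties, durableBody);

Persistence trades some throughput for durability, so benchmark both modes for your workload.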

Conclusion

In this guide, we've explored the fundamentals of RabbitMQ and demonstrated how to integrate it with C# and .NET applications. By leveraging RabbitMQ's powerful features, you can build robust and scalable messaging systems that meet the needs of your application. You can experiment with different exchange types, message routing strategies, and deployment configurations to unlock the full potential of RabbitMQ in your projects.