Artificial Intelligence in DevOps

Artificial Intelligence is currently one of the most talked-about subjects, garnering more attention than at any point since the field's beginnings in the 1940s. Before delving into the intersection of AI and DevOps, let's have a quick recap of a few key terms to establish the context.

Highlights:

  • Key Terms Related to AI & DevOps
  • Changing Paradigm
  • How is Artificial Intelligence changing DevOps?
  • Challenges for Artificial Intelligence in DevOps

Key Terms Related to AI & DevOps

Artificial intelligence (AI)

Artificial intelligence (AI) is a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand, and translate spoken and written language, analyze data, make recommendations, and more.

What is Generative AI?

Generative artificial intelligence or generative AI (also GenAI) is a type of artificial intelligence (AI) system capable of generating text, images, or other media in response to prompts.

What is DevOps?

DevOps is a methodology in the software development and IT industry. Used as a set of practices and tools, DevOps integrates and automates the work of software development (Dev) and IT operations (Ops) as a means of improving and shortening the system’s development life cycle. DevOps is complementary to agile software development; several DevOps aspects came from the agile way of working.

Artificial Intelligence for IT Operations (AIOps)

Artificial Intelligence for IT Operations (AIOps) is a term coined by Gartner in 2016 as an industry category for machine learning analytics technology that enhances IT operations analytics. AIOps stands for "Artificial Intelligence for IT Operations." Typical AIOps tasks include automation, performance monitoring, and event correlation, among others.

Changing Paradigm

The term "Artificial Intelligence" was first coined by the father of AI, John McCarthy, in 1956. However, the revolution of AI actually began a few years earlier, in the 1940s.

Modern AI can see (using computer vision technologies), hear (thanks to significant advancements in speech recognition technologies), comprehend (with the help of sophisticated algorithms), sense (more accurately due to the democratization of big data and trained machine learning models), and finally, act (faster and more precisely than ever before, fueled by rapid innovations in personal computing, the internet, and cloud-native technologies).

Calling it “the third run-time,” Microsoft CEO Satya Nadella said that Artificial Intelligence is the “ultimate breakthrough” technology. He stated, “The operating system was the first run-time. The second run-time you could say was the browser. The third run-time can actually be the agent because, in some sense, the agent knows you, your work context and knows the work; and that’s how we’re building Cortana, and we are giving it a really natural language understanding.”

In simple terms, AI is the ability of machines to use algorithms to learn from data, utilize the knowledge gained to make decisions similar to humans, and take actions based on achieved intelligence or defined instructions.

How is Artificial Intelligence changing DevOps?

Today, almost every aspect of our lives is directly or indirectly impacted by AI. From small businesses, startups, and mid-sized companies to large enterprises and governments, AI is being actively discussed. However, with the increased attention on AI there is also a degree of hype, leading to negative perceptions about its impact on the human workforce and jobs. In recent years, the classic debates about computers and machines taking over human jobs have shifted towards discussions about AI and robots taking over jobs.

DevOps, like every other aspect of business and personal life, is also undergoing a transformation as AI takes a more central role in conversations. It is important to understand how generative artificial intelligence can change some of the key tasks and activities performed in DevOps.

1. Infrastructure Provisioning and Management

  • Estimating infrastructure capacity, sizing, and autoscaling can be improved, as AI can perform better and faster analysis of historical usage data and take appropriate actions (a minimal forecasting sketch follows this list).
  • Integrating chatbots is a viable way to enhance management by providing smart prompts through desktop, browser, and mobile apps. For example, users can ask questions like "What is the current CPU utilization?" or "How many active nodes are in the load balancer?"
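
To make the first point above concrete, here is a minimal Python sketch of forecasting next-hour CPU demand from historical utilization samples using a simple linear trend. The data values, the 70% threshold, and the scaling rule are illustrative assumptions only; a real AIOps platform would use far richer models and live telemetry.

# Hypothetical sketch: forecast next-hour CPU demand from historical samples
# and decide whether to scale out. Data and the 70% threshold are illustrative
# assumptions, not real telemetry.
import numpy as np

# Hourly average CPU utilization (%) for the last 12 hours (made-up data).
history = np.array([41, 44, 47, 52, 55, 58, 61, 63, 66, 68, 71, 73], dtype=float)

# Fit a simple linear trend: utilization = slope * hour + intercept.
hours = np.arange(len(history))
slope, intercept = np.polyfit(hours, history, deg=1)

# Forecast the next hour and apply a naive autoscaling rule.
forecast = slope * len(history) + intercept
print(f"Forecast next-hour CPU: {forecast:.1f}%")
if forecast > 70:  # placeholder scale-out threshold
    print("Recommendation: add capacity (scale out) before demand peaks.")
else:
    print("Recommendation: current capacity looks sufficient.")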

2. Security and Compliance

  • Better anomaly detection and compliance reporting for endpoints can be achieved with AI. It enables quick detection of anomalous activities, reclassification of endpoints, and faster enforcement of compliance remediation compared to human intervention.
  • AI-enabled agents can continuously monitor large amounts of network traffic data, proactively detect potential malware attacks, and be programmed to stop the attack or significantly reduce the attack surface.

3. Moving from Monitoring to Observability

  • Predicting application performance degradation and avoiding potential downtime often requires significant human effort in monitoring and analyzing logs proactively and reactively. However, AI can be utilized to predict potential downtime more accurately. With self-healing automation, AI can proactively implement remediation steps to mitigate issues.
  • By using more intelligent alerts, AI can effectively predict performance degradation, reducing alert fatigue for monitoring teams. It can also provide feedback to developers to improve logs and system performance (see the sketch after this list).
  • As logging and alerts become more precise, AI-driven systems can assist with initial triages, allowing teams to focus on running more reliable and stable applications for the business.
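
As a minimal illustration of the smarter-alerting idea above, the following Python sketch flags anomalous response-time samples using a rolling z-score instead of a fixed threshold. The metric values, window size, and 3-sigma cut-off are assumptions for demonstration only.

# Hypothetical sketch: flag anomalous response times with a rolling z-score
# instead of a static threshold. Values and the 3-sigma cut-off are assumptions.
import statistics

response_times_ms = [120, 118, 125, 130, 122, 127, 119, 480, 124, 121, 126, 123]

window = 6          # number of recent samples used as the baseline
z_threshold = 3.0   # alert only on statistically unusual spikes

for i in range(window, len(response_times_ms)):
    baseline = response_times_ms[i - window:i]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # avoid division by zero
    z = (response_times_ms[i] - mean) / stdev
    if z > z_threshold:
        print(f"ALERT: sample {i} = {response_times_ms[i]} ms "
              f"(z-score {z:.1f} vs. baseline mean {mean:.0f} ms)")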

4. Automated Testing and Quality Assurance

  • Hyper-automation optimizes business processes by integrating various automation tools and technologies. It goes beyond traditional test automation, incorporating AI and ML capabilities for intelligent automation of complex testing scenarios.
  • This integration offers benefits such as intelligent test case generation, adaptive test execution, improved test coverage, and predictive analysis for early defect detection.
  • AI and ML algorithms provide valuable insights from testing data, enabling organizations to optimize test strategies and make informed decisions.

5. Knowledge Sharing, Documentation, and Collaboration

  • Generative AI systems, capable of generating text, images, or other media in response to prompts, can effectively capture knowledge and facilitate documentation. For example, consider the article 'Move resources to a new subscription or resource group – Azure Resource Manager | Microsoft Learn,' which was partially created with the help of artificial intelligence.

AI generated article

  • By leveraging chatbots and virtual assistants, knowledge transfer can become more engaging and result-oriented, particularly in situations where time is a critical constraint for smooth handover between engineers.
  • AI’s real-time data delivery and analysis capabilities can augment the automation of repetitive and mundane tasks, improving collaboration across cross-functional teams.
  • Furthermore, AI advancements will influence existing enterprise search tools and technologies, which predominantly rely on text input and indexing models.

AI flow chart in DevOps

Challenges for Artificial Intelligence in DevOps

Considering the buzz around AI, everyone wants to quickly implement it into their systems and business processes. However, as with any new technology, there will be some uncertainty regarding its optimal utilization during the initial attempts. This uncertainty often leads to an exploratory approach, requiring multiple attempts to get it working right.

Based on our DevOps experience with numerous clients over the years, we have identified the following as the generic challenges in implementing AI in DevOps effectively.

1. Quantity and Quality of Enterprise Data

Meticulous planning is required to ensure sufficient and high-quality data availability to feed into machine learning models before further utilization.

2. Hiring and Training Skilled Workforce

Challenges in recruiting and training a skilled workforce to handle AI projects can impact the integration of AI into existing DevOps tools and systems.

3. Managing AI Models

Sustaining and managing AI models in the long run, ensuring they run without biases, misinformation, and copyright violations, necessitates specialized skills and a dedicated workforce.

4. Ethical Considerations

Addressing ethical considerations and ensuring responsible AI implementation goes beyond technical aspects, requiring close collaboration among stakeholders to ensure a responsible and ethical approach.

Final Thoughts

Over the next 10 years, AI will maintain its dominant presence in all fields of work and serve as a significant driving force behind every business idea, directly or indirectly. DevOps has already embarked on a transformational journey and will continue to witness rapid changes in the way IT and Software Services companies of all sizes embrace and innovate with it, ultimately adding value to their core business and customers.

Hyper Automation – Take Test Automation to the Next Level with AI and ML

Hyper automation is a term used to describe the integration of multiple technologies, including Artificial Intelligence (AI), Machine Learning (ML), Robotic Process Automation (RPA), and other tools, to automate and streamline business processes. Hyper automation enables organizations to automate a wide range of tasks, from routine, repetitive tasks to complex, data-driven processes, resulting in increased efficiency, improved productivity, and reduced costs.

This blog will cover AI-powered software testing and automation, including the benefits and challenges of hyper-automation. We’ll also provide best practices for implementing a hyper-automation strategy in testing and automation.

Highlights:

  • How Is AI Reshaping Automation Testing?
  • Automation Testing using OCR Powered by AI
  • Automation Testing using ChatGPT
  • Self-healing Automation Tests powered by AI
  • Image-based visual testing powered by AI
  • Benefits of Hyper Automation
  • Challenges of Hyper Automation

How Is AI Reshaping Automation Testing?

In recent years, automation testing has advanced significantly from functional testing to using various tools and frameworks. However, to keep up with the changing software testing landscape, new and innovative testing methodologies have emerged, including AI- and ML-based testing tools.

Today, companies aim to use AI and ML algorithms to develop more effective test automation tools. The advantages of AI-based automation testing are numerous, including faster and continuous testing, increased automation, and faster ROI. By incorporating AI and ML into automated software testing processes, test scripts become more efficient, reliable, and effective.

However, traditional automation testing methods still present significant challenges to businesses. Fortunately, AI- and ML-based automation tools can help overcome these challenges and enhance the overall testing process.

Automation Testing using OCR Powered by AI

Optical Character Recognition (OCR) is revolutionizing how quality assurance teams conduct front-end software testing. GUI tests can become unreliable when developers update an application's underlying code because on-screen elements can change, making it difficult to locate objects using existing tools. This poses a challenge for teams trying to scale their testing efforts, as tests may fail if UI elements cannot be found.

However, automated testing tools that incorporate OCR can now recognize previously unidentified objects, allowing testers to develop stronger and more scalable UI tests quickly. This development has significant implications for automated UI testing, as OCR enhances the stability and reliability of automation scripts in unexpected ways, without the need for manual object rules and object repository maintenance.

Moreover, AI has further improved OCR’s capabilities, providing even greater potential for improving software quality. This article will explore the impact of OCR on automated UI testing, the potential of AI for OCR, and the tools available to leverage OCR for better software quality.

Some of the applications of OCR in test automation include:

1. Reading PDFs and Scanned PDFs

Using write-protected PDF documents is a popular method for creating and sharing documents, but it can be difficult for test automation tools to read or process them and compare them to expected results. If the PDF document is scanned, it’s usually impossible to compare it to anything. As a result, testers often manually verify that the file name matches the expected result, which is both time-consuming and prone to errors. However, OCR with AI can overcome these limitations by extracting text from scanned PDFs, even if they are imperfect or scanned at an angle. This text can be compared with text from a database, other documents, screenshots, or data stored in the test software itself, which can significantly improve the efficiency and accuracy of testing efforts by automating the comparison process.
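
To make the scanned-PDF case concrete, here is a minimal Python sketch using the open-source pdf2image and pytesseract libraries (our own tooling choice for illustration, not something prescribed by any specific product; both require the Poppler and Tesseract binaries to be installed). The file name and expected text are placeholders.

# Hypothetical sketch: extract text from a scanned PDF with OCR and compare it
# to an expected value. "invoice_scan.pdf" and the expected text are placeholders.
# Requires: pip install pdf2image pytesseract (plus Poppler and Tesseract binaries).
from pdf2image import convert_from_path
import pytesseract

def extract_text_from_scanned_pdf(pdf_path: str) -> str:
    pages = convert_from_path(pdf_path, dpi=300)   # render each page as an image
    return "\n".join(pytesseract.image_to_string(page) for page in pages)

if __name__ == "__main__":
    actual_text = extract_text_from_scanned_pdf("invoice_scan.pdf")
    expected_fragment = "Total amount due: 1,250.00"   # e.g., a value from the test data
    if expected_fragment in actual_text:
        print("PASS: expected text found in scanned PDF")
    else:
        print("FAIL: expected text not found")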

2. Applications Similar to Terminal Server

In Microsoft Terminal Server, the server sends a bitmap to the computer screen, making the client a "dumb" one that sends mouse clicks, screen coordinates, and keystrokes to the server, much like the old mainframe green screens. As a result, Terminal Server applications usually have only one large window and do not have unique IDs for controls like buttons. Nevertheless, these applications still contain buttons, menus, and other user interface elements. By locating the text on the button or element, users can determine its center and click it. This method also applies to menus and other user interface elements.

3. Unpredictable Identifiers

Testers may face challenges when an element ID is unpredictable, such as when a logistics company employs a smart-coded ID that corresponds to the order number of a new order. However, OCR and AI can be utilized to read the page as a string and locate the radio button adjacent to the first group of numbers, allowing it to be selected.

4. Missing Identifiers

Locating objects during testing on older platforms without unique identifiers can be difficult. An example of this is a Java Swing application that generates a dropdown list dynamically without assigning an identifier. However, OCR and AI can be utilized in these situations to identify the selected text, reposition the dropdown, and select a particular option.

5. Text That Won’t Change But Might Move

The existing test tools used in several applications depend on the parent container object or object ID to locate the text. However, if there is a change in the parent object or ID, it may become difficult to find the text. By using OCR and AI, testers can detect the object and verify if the text is still the same, regardless of changes in the parent object or ID.

Automation Testing using ChatGPT

ChatGPT, a language model based on GPT-3.5 architecture, has gained immense popularity due to its conversational style of interaction. It can handle queries, follow-up questions, and even acknowledge its mistakes, making it a useful tool for various applications. One potential area where ChatGPT can make an impact is software testing. It has the ability to generate code in response to natural language requests in multiple languages and frameworks, including Selenium code in different programming languages, which can be modified accordingly.

Some of the possible applications include:

  • Creating automated test cases for any scenario
  • Building a complex test automation pipeline using CI/CD
  • Testing an application that utilizes multiple microservices
  • Providing clear instructions and straightforward examples for using the generated code

ChatGPT, as a low-code tool, has the ability to generate code using the Cucumber testing framework, which follows a behavior-driven development (BDD) approach. Test scenarios can be expressed in plain English using keywords like “given,” “when,” and “then” with the Cucumber framework.

Here are some reasons why ChatGPT is an effective low-code testing tool for creating use cases:

  • It supports natural language, making it easier for users to write in plain language.
  • It supports multiple programming languages, libraries, and frameworks such as Cucumber.
  • It can structure code using the Page Object Model, so that updates are localized when an element locator or the application structure changes (see the sketch after this list).
  • It separates the code from the test cases, making the test script more maintainable.
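
The Page Object Model mentioned above is easy to illustrate. The following Python/Selenium sketch uses a hypothetical login page, URL, and locators; because the locators live in one class, a UI change requires editing only the page object, not every test.

# Hypothetical sketch of the Page Object Model with Selenium.
# The URL, element IDs, and credentials are illustrative placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Encapsulates locators and actions for a (hypothetical) login page."""
    URL = "https://example.com/login"
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "login-button")

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)

    def login(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

def test_valid_login():
    driver = webdriver.Chrome()
    try:
        page = LoginPage(driver)
        page.open()
        page.login("demo_user", "demo_password")
        assert "dashboard" in driver.current_url  # placeholder success check
    finally:
        driver.quit()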

Overall, integrating ChatGPT with AI-based test automation is seen as a promising development for the future of software testing.

Self-Healing Tests Powered by AI

Self-healing is a solution in test automation that addresses pain points such as maintenance of automation test scripts, where scripts break due to changes made to object properties such as Name, ID, XPath, CSS, etc. Identifying and updating object changes in the object repository manually can be time-consuming, increasing overall testing efforts.

Implementing a self-healing solution can help project teams adopt a Shift-Left approach in the Agile Methodology, leading to a more efficient testing process with increased productivity and faster delivery. There are various commercial and open-source tools available that use machine learning and other AI-driven algorithms to automatically detect and fix changes without any human intervention.

The self-healing technique involves an end-to-end workflow where an AI engine detects any change in an object property that may cause test failures. It then extracts the entire document object model (DOM) and the object properties, using dynamic location strategies based on a weighted scoring system developed using machine learning algorithms to extract the updated property. These new object properties are used to perform automated actions on the object during runtime, ensuring seamless test script execution without the user being aware of any changes.
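
The weighted-scoring idea can be sketched in a few lines of Python. The attribute weights, the fallback logic, and the Selenium calls below are simplified assumptions and do not represent how any particular commercial tool implements self-healing.

# Hypothetical sketch of a self-healing locator: if the primary ID fails, score
# candidate elements against remembered attributes and pick the best match.
# Weights and remembered attributes are illustrative assumptions.
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

WEIGHTS = {"name": 0.4, "text": 0.3, "class": 0.2, "tag": 0.1}

def score(candidate, remembered):
    """Weighted similarity between a live element and its remembered snapshot."""
    s = 0.0
    if candidate.get_attribute("name") == remembered["name"]:
        s += WEIGHTS["name"]
    if candidate.text.strip() == remembered["text"]:
        s += WEIGHTS["text"]
    if candidate.get_attribute("class") == remembered["class"]:
        s += WEIGHTS["class"]
    if candidate.tag_name == remembered["tag"]:
        s += WEIGHTS["tag"]
    return s

def find_with_healing(driver, element_id, remembered, min_score=0.5):
    try:
        return driver.find_element(By.ID, element_id)      # happy path
    except NoSuchElementException:
        # Primary locator broke: rank all elements of the remembered tag.
        candidates = driver.find_elements(By.TAG_NAME, remembered["tag"])
        best = max(candidates, key=lambda c: score(c, remembered), default=None)
        if best is not None and score(best, remembered) >= min_score:
            return best                                     # "healed" locator
        raise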

Image-Based Visual Testing Powered by AI

The rise of digital media has highlighted the significance of seamless user experiences for businesses across their digital channels. This article explores the potential of AI and computer vision to enhance the accuracy, scalability, and value of traditional visual regression testing methods.

We will explore an AI-powered visual test automation framework and its significance for web development and testing teams as they adopt this innovative approach to testing. Visual regression testing, also known as user interface (UI) testing, involves verifying the aesthetic accuracy of everything that end-users encounter and interact with after code changes are made to a website. This type of testing differs from functional testing, which ensures that the application’s features and functions work correctly.

Visual regression tests aim to detect visual “bugs” that functional testing tools may not uncover, such as misaligned buttons, overlapping images or text, partially visible elements, and issues with responsive layout and rendering.

Before diving into the details of visual AI, let us examine a few commonly used visual testing methods.

1. Pixel or Snapshot Comparison

One commonly used visual testing method involves taking a screenshot of a webpage and comparing it to a previous version of the page to detect any visual changes. However, this pixel-based approach has several challenges. It can produce false positives and identify small changes that are not noticeable to the human eye, such as alterations in font anti-aliasing or image scaling due to different rendering processes by the browser or graphics card. Moreover, it cannot handle dynamic content that changes frequently, such as a blinking cursor or a page with frequently updating content. Additionally, when there are differences in subsequent pixels, they may obscure actual issues further down the page.
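
To see why pixel comparison is so fragile, consider this minimal Python sketch using Pillow: any rendering difference at all, even one invisible to the human eye, produces a non-empty diff. The file names are placeholders.

# Hypothetical sketch of naive snapshot comparison with Pillow.
# baseline.png / current.png are placeholder screenshots of the same size.
from PIL import Image, ImageChops

baseline = Image.open("baseline.png").convert("RGB")
current = Image.open("current.png").convert("RGB")

diff = ImageChops.difference(baseline, current)
bbox = diff.getbbox()   # bounding box of differing pixels, or None if identical

if bbox is None:
    print("PASS: screenshots are pixel-identical")
else:
    # Even sub-visible anti-aliasing changes land here, which is why this
    # approach produces so many false positives.
    print(f"FAIL: pixels differ inside region {bbox}")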

2. Document Object Model (DOM) Comparison

The Document Object Model (DOM) is a programming interface that represents web documents as nodes and objects, allowing programs to modify the document’s structure, style, and content. Although the DOM approach may seem like a straightforward solution for visual test automation, there are some limitations to consider. Firstly, the DOM includes both rendered and non-rendered content, which means that a simple page restructure can result in false positives being identified as differences by the DOM comparison. Secondly, the DOM comparison cannot detect rendering changes. For instance, if a new image file is uploaded to a page using the same old file name, it will go unnoticed by the DOM comparison, even if the user sees a difference on the rendered page.

Therefore, relying solely on DOM comparisons is insufficient to ensure visual integrity.

How Does Visual AI Work?

Visual AI is an approach that overcomes the challenges of pixel and DOM methods. It identifies the visual elements that make up a screen or page and uses computer vision to recognize these elements and their associated properties, including dimensions, color, content, and placement, much like the human eye. Instead of examining individual pixels, Visual AI compares the properties of a checkpoint element with a baseline to detect any visible differences.
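
A highly simplified way to picture this property-based comparison is shown below in Python: each element is reduced to a small set of properties (position, size, color, text) and compared against the baseline with tolerances, so tiny rendering noise is ignored while real layout or content changes are reported. The element data and tolerances are invented for illustration; real Visual AI tools extract such properties with computer vision.

# Hypothetical sketch: compare element *properties* (not pixels) with tolerances.
# The baseline/checkpoint data and the tolerances are illustrative assumptions.
baseline = {
    "login-button": {"x": 400, "y": 520, "w": 120, "h": 40,
                     "color": "#0063B1", "text": "Sign in"},
}
checkpoint = {
    "login-button": {"x": 401, "y": 521, "w": 120, "h": 40,   # 1 px shift: ignorable
                     "color": "#0063B1", "text": "Sign in"},
}

POSITION_TOLERANCE_PX = 2   # ignore sub-visible rendering noise

def compare(base, check):
    issues = []
    for name, b in base.items():
        c = check.get(name)
        if c is None:
            issues.append(f"{name}: element missing")
            continue
        if abs(b["x"] - c["x"]) > POSITION_TOLERANCE_PX or abs(b["y"] - c["y"]) > POSITION_TOLERANCE_PX:
            issues.append(f"{name}: moved")
        if (b["w"], b["h"]) != (c["w"], c["h"]):
            issues.append(f"{name}: resized")
        if b["color"] != c["color"] or b["text"] != c["text"]:
            issues.append(f"{name}: content or style changed")
    return issues

print(compare(baseline, checkpoint) or "PASS: no visible differences")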

Image-based visual testing with AI

Automated visual testing now employs computer vision algorithms, similar to those used for facial recognition, to perform object recognition and visual comparison. These tools, known as visual AI testing tools, have a learning algorithm that interprets the intended display of visual elements and compares them to the rendered page's actual visual elements and their locations. Unlike pixel-based tools, visual AI testing tools capture page snapshots during functional tests and use algorithms to detect errors.

One advantage of visual AI testing tools is that they do not require static environments to function accurately. They have demonstrated high levels of precision with dynamic content because they base the comparison on relationships between elements, rather than individual pixels. As a result, AI-powered testing tools can handle a wider range of issues than snapshot testing tools.

Below is a comparison of the kinds of issues that AI-enabled visual testing tools can handle compared to snapshot testing tools:

Snapshot testing Vs Visual AI testing

Visual testing tools powered by AI have become a necessary tool for validating applications or websites that frequently change in content and format. For instance, media companies update their content up to twice per hour and use AI-powered automated testing to detect and fix errors that may affect paying customers. Furthermore, AI-powered visual testing tools play a crucial role in the testing process for any application or website undergoing brand revision or merger. Due to their high accuracy and low error rate, these tools enable companies to detect and resolve issues related to significant changes in DOM, CSS, and JavaScript that are critical to such updates.

Benefits of Hyper Automation

1. Increased Efficiency

Hyper automation allows businesses to automate repetitive, mundane tasks, freeing up employees to focus on more strategic and value-added work.

2. Improved Productivity

By automating time-consuming processes, hyper-automation helps to reduce errors and improve the speed and accuracy of business processes, resulting in increased productivity.

3. Reduced Costs

Hyper automation can help to reduce labor costs by automating tasks that were previously done manually. Additionally, it can help to reduce the cost of errors and rework, resulting in cost savings for the organization.

4. Better Customer Experience

By streamlining and automating business processes, hyper-automation can improve the speed and accuracy of customer interactions, resulting in a better customer experience.

Challenges of Hyper Automation

1. Complexity

Hyper automation involves the integration of multiple technologies, which can be complex and challenging to implement and manage.

2. Resistance to Change

Employees may be resistant to change, particularly if they perceive that automation threatens their jobs.

3. Data Quality

Hyper automation relies on accurate and high-quality data, so organizations need to ensure that their data is clean, up-to-date, and easily accessible.

Final Thoughts

To sum up, hyper-automation can bring substantial advantages to organizations seeking to enhance their productivity, efficiency, and customer satisfaction through test automation. Nonetheless, it is crucial to adopt a strategic approach to hyper-automation, starting with small-scale implementations and prioritizing high-impact processes while ensuring data accuracy and involving employees. With meticulous planning and execution, hyper-automation in test automation can generate significant rewards for organizations.

Retrospective Dashboard Queries in Splunk

Splunk is widely used by organizations to monitor and troubleshoot IT infrastructure and applications. It is employed in many industries, such as healthcare, finance, and retail, to gain insights into their operations, security, and compliance and make data-driven decisions.

In this post, let's see how to create retrospective dashboard queries in Splunk using a simple scenario and sample data.

Highlights:

  • What Is Splunk? 
  • PrimeSoft’s Expertise on Splunk
  • Retrospective Data Analysis Demo

What Is Splunk?

Splunk is software designed to collect, analyze, and visualize large amounts of machine-generated data, such as log files, network traffic data, and sensor data. It offers a wide range of capabilities for searching, analyzing, and visualizing data, as well as building and deploying custom applications.

The software can be deployed on-premises or in the cloud, and offers a wide range of APIs and integrations with other systems, enabling users to collect data from various sources easily. It indexes and correlates information in a container, making it searchable, and enables the generation of alerts, reports, and visualizations.

Additionally, Splunk has a large and growing ecosystem of add-ons and integrations with other tools, making it a popular choice for organizations that need a flexible and scalable data analysis solution.

PrimeSoft’s Expertise on Splunk

PrimeSoft has strong expertise in Splunk, as we have helped our customers monitor and troubleshoot alerts received from multiple systems in both Production and Non-Production environments for business-critical applications.

Our experts have helped customers analyze, set up, rationalize, and perfect the alerts for maximizing the coverage of applications and infrastructure monitoring with effective alerts put into the right place. They have also been instrumental in creating various monitoring and reporting dashboards in Splunk, helping key customer stakeholders by offering critical business insights in a dashboard. Based on the lessons learned, our expert is sharing how to create retrospective dashboard queries in Splunk.

Retrospective Data Analysis Demo

To draw insights and make informed decisions, one must retrospectively look at historical data to uncover trends, patterns, and relationships.

Sample Data

In this demo, let’s use sample data with Users’ Login count for every hour throughout the year 2022. We will start with uploading the sample data in a CSV file to the Splunk Cloud (trial version). You can use the Splunk free trials available by following the process here.

The data contains only two columns, DateTime and UserLogins, as shown below.

Sample Data for Splunk Cloud

Figure 1

Uploading the Sample Data

To upload data, navigate to the Searching and Reporting view in Splunk Cloud, click on Settings to see the Add Data option, and follow the process.

Add Data option in Splunk Cloud Settings

Figure 2

Building Queries and Visualizing Data

Let’s build queries to help us visualize data trends in the following scenarios.

  • Scenario 1 – Weekday & Weekly Trend Comparison (when the log timestamp and the _time field are the same)
  • Scenario 2 – Monthly Trend Comparison (when the log timestamp and the _time field are NOT the same)
  • Scenario 3 – Monthly Trend Comparison (when the log timestamp and _time match but need to be reported in a different time zone)

Scenario 1

Let us assume that the log timestamp values are perfect and match the default "_time" field in the Splunk index. We can use just two commands, timechart and timewrap, to achieve retrospective data comparison.

The Query for Weekday Data Comparison
source="SampleDataforSplunkBlog.csv" host="si-i-0494c4ce0352be1f5.prd-p-w97tj.splunkcloud.com" sourcetype="csv" | timechart values(UserLogins) span=1h | timewrap w | where strftime(_time, "%A") == "Monday"
  • The first line of the query fetches the data.
  • In the second line, the timechart command creates data points based on the UserLogins field for every hour (span=1h), and the timewrap command wraps the data points created earlier by week (w).
  • Finally, in line three, we filter by the weekday for which we wish to compare the metrics.
  • Additionally, we confined the search period to only 3 weeks; if we increase it, we will see the same weekday's data from more previous weeks.

Query for Weekday Data Comparison in Splunk

Figure 3.1

Query for Weekday Data Comparison in Splunk

Figure 3.2

Similarly, we can build a query for Weekly User Login data comparison.

The Query for Weekly Data Comparison
source="SampleDataforSplunkBlog.csv"
host="si-i-0494c4ce0352be1f5.prd-p-w97tj.splunkcloud.com" sourcetype="csv"
| timechart values(UserLogins) span=1h | timewrap w
  • The first line of the query fetches the data.
  • In the second line, the timechart command creates data points based on the UserLogins field for every hour (Span=1h) and the timewrap command wraps the data points created earlier by week (w).
  • We confined the search period to only 3 weeks and if we increase it, we will observe data from previous weeks for comparison.

Query for Weekly Data Comparison in Splunk

Figure 4.1

Query for Weekly Data Comparison in Splunk

Figure 4.2

By changing the span to 1 day and aggregating the user login values by replacing values() with the sum() function, we can generate aggregated data points per day to compare over 3 or more weeks, depending on the search period.

source="SampleDataforSplunkBlog.csv"
host="si-i-0494c4ce0352be1f5.prd-p-w97tj.splunkcloud.com" sourcetype="csv"
| timechart sum(UserLogins) span=1day | timewrap w

Query for Weekly Aggregated Data Comparison in Splunk

Figure 5.1

Query for Weekly Aggregated Data Comparison in Splunk

Figure 5.2

Read more about the timechart and timewrap commands in the Splunk documentation through the hyperlinks. These commands are highly customizable through inputs, which can help us build many versions of retrospective metrics.

Scenario 2

Let's assume that the log timestamp values do NOT match the default "_time" field in the Splunk index. In this case, we will have to use additional commands such as eval, chart, etc.

The Query for Monthly Data Comparison
source="SampleDataforSplunkBlog.csv"
host="si-i-0494c4ce0352be1f5.prd-p-w97tj.splunkcloud.com" sourcetype="csv"
| eval month = strftime(strptime(DateTime, "%d-%m-%Y %H:%M"), "%B"),
dayofmonth = strftime(strptime(DateTime, "%d-%m-%Y %H:%M"), "%d")
| chart sum(UserLogins) as userloginsfortheday over dayofmonth by month limit=0
  • The first line of the query fetches the data.
  • In the second line, we use the strftime and strptime Date-Time functions from Splunk to calculate the Day of the Month and Month fields.
  • Finally, in line three, we use the chart command to calculate the Sum of User Logins per day and chart it over the day of the month for each month to compare. 
  • Additionally, we confined the search period to only 3 months, and by increasing it, we will be able to observe daily data from prior months. 

Query for Monthly Data Comparison in Splunk

Figure 6.1

Query for Monthly Data Comparison in Splunk

Figure 6.2

Scenario 3

Let's assume that the log timestamp and the _time field in the Splunk index match, but the data needs to be reported in a preferred time zone. The timestamp in this example is in GMT, and the query reports it in the time zone set in the user's preferences.

The Query for Monthly Data Comparison
source="SampleDataforSplunkBlog.csv"
host="si-i-0494c4ce0352be1f5.prd-p-w97tj.splunkcloud.com" sourcetype="csv"
| eval dtime=strftime(_time, "%d-%m-%Y %H:%M")
| eval DTimeZone=dtime+" GMT"
| eval DTime=strftime(strptime(DTimeZone, "%d-%m-%Y %H:%M %Z"), "%d-%m-%Y %H:%M %Z")
| eval month = strftime(strptime(DTime, "%d-%m-%Y %H:%M %Z"), "%B"),
dayofmonth = strftime(strptime(DTime, "%d-%m-%Y %H:%M %Z"), "%d")
| chart sum(UserLogins) as userloginsfortheday over dayofmonth by month limit=0
  • The first line of the query fetches the data.
  • In line two, we use strftime to convert the UNIX epoch _time value into a readable date-time string.
  • In line three, we are adding the Time zone as GMT, assuming the logs are in GMT.
  • In line four, we are using strftime and strptime to convert the Date-Time from GMT to the current user’s time zone setting.
  • In line five, we are calculating the Month and Day of the Month.
  • Finally, in line six, we use the chart command to calculate the Sum of User Logins per day and chart it over the day of the month by each month in order to compare them. 
  • Additionally, we confined the search period to only 3 months, and by increasing it, we will be able to see daily data from previous months. 

Query for Monthly Data in Preferred Time Zone in Splunk

Figure 7.1

Query for Monthly Data in Preferred Time Zone in Splunk

Figure 7.2

Thank you for reading. We hope you learned something new that will help you build retrospective queries to analyze your data patterns.

 

What Is the Durable Task Framework?

The Durable Task Framework (DTFx) allows users to write long-running, persistent workflows in C# using the .NET Framework and simple async/await coding constructs.

These workflows are also called orchestrations and require at least one provider (a backend persistence store) for storing the orchestration messages and runtime state. It provides a low-cost alternative for building robust workflow systems. The open-sourced Microsoft library is accessible here.

In this post, let’s understand how the Durable Task Framework works.

Highlights:

  • What Problem Can the Durable Task Framework Solve?
  • Key Features Of The Framework
  • Why Is It Important In Cloud-Native Application Development?
  • How Does Azure Durable Task Work?
  • Basic Sample Solution To Demonstrate Durable Task Framework In Use
  • Key Takeaways

What Problem Can the Durable Task Framework Solve?

The framework is designed to allow users to build workflows using code. If processing can take a long time, or one of the business requirements is to prepare a state machine or use a workflow engine, the Durable Task Framework is an ideal option. It can also be a link between many microservices or act as a distributed component.

Key Features Of The Framework

  • The solution is lightweight and simple.
  • Workflows are defined in code, which makes them customizable.
  • The DTFx uses the Event Sourcing approach to store all actions in Azure Table Storage. Thanks to this feature, we can rerun the processes from the past or re-initiate the process from a state where the activity was interrupted.
  • The persistent state feature of DTFx allows long-running, complex processes to be managed reliably and helps reduce overall processing time.
  • Versioning of orchestrations and activities helps track changes.
  • DTFx supports an extensible set of backend persistence stores such as Service Bus, Azure Storage, SQL Server, Service Fabric, etc., and requires minimal knowledge of those services.

Why Is It Important In Cloud-Native Application Development?

Cloud Native Applications promote a loosely coupled, microservices-based application architecture. The Durable Task Framework enables the building of such applications, where most of the work in building a complex workflow system is done by the DTFx. Today, all leading cloud solution providers offer a Durable Task Framework, making it a cost-effective alternative for building high-performance workflow systems.

  • Durable Functions, an extension of Azure Functions, is a good example of how the Durable Task Framework is used to offer cloud-native solutions on Azure.
  • AWS SWF (Simple Workflow Service), a web service that facilitates coordination of work across distributed application components, is another good example of a durable-task approach used to coordinate tasks, manage execution dependencies, and handle scheduling and concurrency in accordance with the application's logical flow.
  • Google Cloud Tasks is a fully managed service that lets you manage the execution, dispatch, and delivery of a large number of distributed tasks.

How Does Azure Durable Task Work?

The Durable Task Framework consists of several components that work together to manage and execute orchestrations and activities. Let us take a look at the different components and how this is accomplished.

Task Hub: It is a logical container for Service Bus entities used by the Task Hub Worker to reliably pass messages between code orchestrations and the activities they orchestrate.

Task Activities: These are pieces of code that perform specific steps of the orchestration. A Task Activity can be ‘scheduled’ from within some Task Orchestration code. This scheduling results in a plain vanilla .NET Task that can be serviced (asynchronously) and that can be composed with other similar Tasks to build complex orchestrations.

Task Orchestration: This is where you can schedule Task Activities and build code orchestrations around the Tasks that represent the activities.

Task Hub Worker: This hosts Task Orchestrations and Activities. It also contains APIs for performing CRUD operations on the Task Hub itself.

Task Hub Client

The Task Hub Client provides:

  • APIs to create and manage task orchestration instances
  • APIs to query the state of Task Orchestration instances from an Azure Table

The Task Hub Worker and Task Hub Client are connected to the Service Bus and Azure Table Storage via connection strings. The Service Bus is used for storing the execution control flow state and passing messages between Task Orchestration instances and Task activities.

Since the Service Bus is not meant to be a database, the state is removed from the Service Bus once the code orchestration is complete. However, if an Azure Table storage account is linked, this state is available for queries as long as the user stores it.

The framework provides Task Orchestration and Task Activity base classes from which we can derive to define orchestrations and activities. We then use the Task Hub APIs to load these orchestrations and activities into the process and start the worker, which begins processing requests to create new orchestration instances.

The Task Hub Client APIs are used to create new orchestration instances, query existing instances, and terminate those instances as needed.

We start by creating a new orchestration instance that loads all activities into the Service Bus as a control flow, based on the orchestration definition. All activities are then executed one by one. This process ensures that all actions are invoked only once. If something goes wrong with the application and the problem is resolved, the framework resumes from the exact activity it was executing at the time of the crash.
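
The replay behavior described above can be pictured with a small, framework-agnostic sketch (shown in Python purely for illustration; this is not the DTFx API, which is a C#/.NET library). Completed activity results are appended to a durable history, and when the orchestration re-runs after a crash, activities already in the history return their stored results instead of executing again.

# Hypothetical sketch of event-sourced replay (the idea behind DTFx, not its API).
# Completed activity results are persisted in a history; on restart the
# orchestration replays, skipping work that has already been done exactly once.
history = {}   # stand-in for durable storage of completed events

def run_activity(name, func, *args):
    """Execute an activity once; on replay, return its stored result instead."""
    if name in history:
        return history[name]
    result = func(*args)
    history[name] = result          # checkpoint before moving on
    return result

def sample_orchestration(order_id):
    reserved = run_activity("reserve_stock", lambda o: True, order_id)
    charged = run_activity("charge_card", lambda o: True, order_id)
    if reserved and charged:
        return run_activity("confirm", lambda o: f"order {o} confirmed", order_id)
    return "order rejected"

print(sample_orchestration(42))
# If the process crashed after "reserve_stock", rerunning sample_orchestration(42)
# replays that stored result and resumes at "charge_card".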

Basic Sample Of Durable Task Framework Implementation

This solution aims to demonstrate DTFx using a simple scenario of a Customer Sign up. The Task Orchestration is the complete signup process and the Task Activities are different steps performed to achieve this.

We have 4 Task Activities: Address Check (PerformAddressCheckAsync), Credit Check (PerformCreditCheckAsync), Bank Account Check (PerformBankAccountCheckAsync), and Sign Up Customer. 

The three checks are executed in parallel, and once they are completed, the Sign Up Customer task is invoked. It returns the customer ID only if all checks were successful, or a 'REJECTED' message if one or more checks fail.

The Credit Check Task is a data-driven asynchronous process. It relies on the ‘NumberOfCreditAgencies’ value passed as input to the workflow.

Task Orchestration Instance

In Figure 1, we see that three activities are async methods that return a Task result of type Boolean. Each of these methods calls the corresponding sub-methods shown in Figure 2, which perform the validation of the address, bank account, and credit score. In addition, we have the fourth activity, Sign Up Customer, which generates and returns the Customer ID.

Async Task Activities

Fig. 1

Task Activities in the Durable Task Framework

Fig. 2

Figure 3 shows the implementation of the Task Orchestration. Each created orchestration instance executes all declared activities. Based on the result of the checks, the last activity returns either a 'User Signed up ID' or 'Rejected'.

Task Orchestration

Fig. 3

Key Takeaways

  • With the Durable Task Framework, Microsoft provides a simple option for building robust, distributed, and scalable services. DTFx has built-in state persistence and program execution checkpoints.
  • By implementing The Durable Task Framework, you can manage all system logic from one place and define the process flow.
  • The Durable Task Framework uses Event Sourcing to store all actions in Azure Table Storage. You can resume the processes from the past or continue the process from the point where the action was interrupted.

How To Use External Data In KQL?

In many businesses, Azure is becoming the infrastructure backbone. It has become imperative to be able to query Azure using KQL to gain insights into the Azure services your organization utilizes. In this post, let's understand how to explore logs in Azure data storage by bringing an external data file into KQL.

Highlights:

  • What Is Kusto Query Language (KQL)?
  • Sample Use Cases For Demo
  • Prerequisites
  • Simple Method For Basic Use Case
  • Alternative Method For Enhanced Use Case
  • Key Takeaways

 

What Is Kusto Query Language?

KQL, which stands for Kusto Query Language, is a powerful tool to explore your data and discover patterns, identify anomalies and outliers, create statistical models, and more. The language is used to query the Azure Monitor Logs, Azure Application Insights, Azure Resource Explorer, and others.

Sample Use Cases For Demo

1. Basic use case (solved using a simple method)

  • Imagine you have a set of servers and applications hosted in Azure.
  • You have configured logs & metrics collection using Azure monitoring services.
  • You must query the logs to find the applications that are hitting high processor utilization.

2. Enhanced use case (solved using an alternative method)

  • You must query the logs to find only selected applications/servers that are hitting high processor utilization.
  • For every server, the threshold is different.
  • You want to control which servers are queried.
  • You want to dynamically update thresholds for different computers.
  • You don't want to update the KQL query.

Note:

  • For this demo purpose, we are using a log analytics workspace provided by Microsoft in their documentation for KQL/Kusto language. Please access the demo logs here for free.
  • To demonstrate the use of external storage, we have created a storage account in our subscription and a blob container to host some files.

Prerequisites

  • Azure Subscription
  • Azure Storage Account (Blob Container) and/or AWS S3 Bucket
  • Azure Log Analytics Workspace (Azure Monitoring Service)

Simple Method For Basic Use Case

Let’s first explore the sample data available for the demo.

  1. Open your favorite browser and go to this link.
  2. If you are already logged into Azure, it will open directly; else, it will ask you to sign in.
  3. After signing in, a new query window is displayed.
  4. On the left side of the panel, you can explore the tables and queries available in the demo workspace.
  5. Copy & Paste the following code into new query 1.
InsightsMetrics
| where TimeGenerated > ago(30m)
| where Origin == "vm.azm.ms"
| where Namespace == "Processor"
| where Name == "UtilizationPercentage"
| summarize avg(Val) by bin(TimeGenerated, 5m), Computer //split up by computer
| sort by avg_Val desc nulls first

Microsoft Azure-Sample Query

6. Click Run and you will see the following output.

Query only selected computers from KQL

1. Update the query to add a static list of computers from which you want to query logs.

let Computers = datatable (Computer: string)
[
"AppFE0000002",
"AppFE0000003",
"AppFE00005JE",
"AppFE00005JF",
"AppFE00005JI",
"AppFE00005JJ",
"AppFE00005JK",
"AppFE00005JL"
];
InsightsMetrics
| where TimeGenerated > ago(30m)
| where Origin == "vm.azm.ms"
| where Namespace == "Processor"
| where Name == "UtilizationPercentage"
| summarize avg(Val) by bin(TimeGenerated, 5m), Computer //split up by computer
| join kind=leftouter (Computers) on Computer
| where isnotempty(Computer1)
| sort by avg_Val desc nulls first

 

2. Run the query and you will get the following results.

3. Note that the output is now filtered to only the target computers.

This is basic KQL.

Alternative Method For Enhanced Use Case

Store target computer details outside the query

(We have used a .csv file for the demo)

  • Create a simple .csv file and store it in an Azure Storage Blob Container.

Create SAS Token for delegated access to the container

1. We will supply delegated access to the storage container by creating a SAS token. Using this token, we can access the .csv file from the KQL query.

Read more on granting limited access to Azure Storage resources using shared access signatures (SAS).

2. Update the query to pull details from the .csv file and return computers that are marked "Yes" for monitoring.

let Computers = externaldata(Computer: string, value: int, Monitor: string)[@'https://kmpsblogdemostorage.blob.core.windows.net/kqldemo/SelectedComputers.csv' h@'?sp=r&st=2022-08-23T13:50:14Z&se=2022-08-24T13:50:14Z&spr=https&sv=2021-06-08&sr=c&sig=fJwl%2BTddL6lXV9WONj6bmvp61PDPbf94Ou%2Fp9pAtnYE%3D'] with (ignoreFirstRecord=true);
InsightsMetrics
| where TimeGenerated > ago(30m)
| where Origin == "vm.azm.ms"
| where Namespace == "Processor"
| where Name == "UtilizationPercentage"
| summarize avg(Val) by bin(TimeGenerated, 5m), Computer //split up by computer
| join kind=leftouter (Computers) on Computer
| where isnotempty(Computer1) and Monitor contains "Yes"
| sort by avg_Val desc nulls first

3. Run the query and you will get the following results.

Azure Blob Storage-Query

4. Now, toggle the Monitor condition to No for AppFE0000002 computer.

5. Update the query to pull details from the .csv file and return computers that are marked "No" for monitoring.

let Computers = externaldata(Computer: string, value: int, Monitor: string)[@'https://kmpsblogdemostorage.blob.core.windows.net/kqldemo/SelectedComputers.csv' h@'?sp=r&st=2022-08-23T13:50:14Z&se=2022-08-24T13:50:14Z&spr=https&sv=2021-06-08&sr=c&sig=fJwl%2BTddL6lXV9WONj6bmvp61PDPbf94Ou%2Fp9pAtnYE%3D'] with (ignoreFirstRecord=true);
InsightsMetrics
| where TimeGenerated > ago(30m)
| where Origin == "vm.azm.ms"
| where Namespace == "Processor"
| where Name == "UtilizationPercentage"
| summarize avg(Val) by bin(TimeGenerated, 5m), Computer //split up by computer
| join kind=leftouter (Computers) on Computer
| where isnotempty(Computer1) and Monitor contains "No"
| sort by avg_Val desc nulls first

6. Run the query and you will get the following results.

Note: Replace the location code in the above query with the URL of your Azure blob container.

Access input file stored outside Azure Storage

  • Update the query with the input file on the AWS S3 bucket. Run the query and you will get the same result.
let Computers = externaldata(Computer: string, value: int, Monitor: string)[@'https://psi-testing.s3.ap-south-1.amazonaws.com/SelectedComputers.csv'] with (ignoreFirstRecord=true);
InsightsMetrics
| where TimeGenerated > ago(30m)
| where Origin == "vm.azm.ms"
| where Namespace == "Processor"
| where Name == "UtilizationPercentage"
| summarize avg(Val) by bin(TimeGenerated, 5m), Computer //split up by computer
| join kind=leftouter (Computers) on Computer
| where isnotempty(Computer1) and Monitor contains "Yes"
| sort by avg_Val desc nulls first

Note: Replace the location in the above query with the URL of your AWS S3 bucket.

Read more to know how to access the AWS S3 bucket.

Key Takeaways

The methods explained above offer the following benefits:

  • Provides flexibility to change target resources without updating the actual query.
  • Provides convenience to update input variables without complicating the query.
  • The overall query is simplified by decoupling the target resources and threshold values, which are defined outside the KQL query.
  • Most importantly, you can host your input file in any publicly accessible location, and still achieve the same functionality.

 

How outsourcing product development can give businesses a competitive edge?

As competition continues to soar, with so many new products dominating the market, the pressure to stay ahead of the curve increases. To be a key differentiator, one needs to innovate and develop products that fit the audience's requirements in the best way possible. Several factors affect the success of a product. If you are a decision-maker in your enterprise, you will understand that the whole idea is to maximize growth and innovation by delivering the best yet cost-effective products. The uniqueness of your product gives you an extra edge over others. However, developing a top-quality product that can shoot your brand up in the market requires a lot of effort and attention. At the same time, prior commitments, deliverables, strict timelines, and budget constraints can delay the entire process. This is where outsourcing product development comes in.

Before we dive deeper into how outsourcing product development and services can simplify all your design and delivery needs, read on to understand the process and spare a minute to analyze on your own.

What are we going to cover in this blog?

  1. What is product outsourcing? And why do you need it?
  2. What are the risks of outsourcing product development?
  3. Why is product development outsourcing the right step for you?
  4. What are the benefits of outsourcing product development?
  5. Companies who have opted for outsourcing services
  6. Are you ready to make the right choice?

What is product outsourcing? And why do you need it?

Product development involves three significant stages, design, development, and delivery. In the old school method, businesses used their in-house staff to manage their products from incubation to launch. They worked in several areas like market research, conceptual design, product engineering, prototyping, and other developments. Besides this, the process also included additional expenses and time invested in installing and maintaining different software. This also required constant follow-ups with the team for timely updates and changes on the various stages of the product development process.

Outsourcing product development means collaborating with an external team to turn your product ideas into reality. The outsourcing product development companies help businesses build new products from a different perspective and provide them with services including custom software development, UX design, product manufacturing, and many more. All these services are optimized by the latest technology and strategies along with their exclusive features. In simpler terms, they take some load off your shoulder and unburden you from the tedious follow-ups, iterations, and expenses while streamlining the process.

What are the risks of outsourcing product development?

Even though outsourcing product development has become a common trend these days, many businesses still opt for traditional in-house development assuming there’s risk and carrying other misconceptions associated with it. They fear lack of coordination, flexibility, and control, breach of security, loss of intellectual property, and dilution of the company’s essence if an external agency is involved. We are not saying these apprehensions are entirely false. There are pros and cons of outsourcing product development. However, one must remember that business involves risk irrespective of in-house or external services. It is always about making the right choices that make one stand out from the rest of the crowd. Choosing a credible outsourcing company can certainly eliminate these risks and lead to successful ventures.

Why is product development outsourcing the right step for you?

The success stories of multi-billion-dollar products like Slack, GitHub, WhatsApp, and Groove, along with several startups, product companies, enterprises, and businesses, reinforce the fact that outsourcing product development works. It has been gaining a lot of popularity lately, especially after remote work culture was embraced following the onset of the global pandemic.

As per Deloitte’s 2020 Global Outsourcing Survey, ‘Outsourcing will remain an essential tool for client organizations to support their strategic goals.’

But how do you decide whether outsourcing or in-house product development is the right fit for you? These key benefits of outsourcing will help you choose the correct option.

Benefits of outsourcing product development | PrimeSoft Solutions Inc.

What are the benefits of outsourcing product development?

Here are some of the benefits that make it count:

  1. Reduces cost without compromising quality – One of the significant advantages of outsourced product development is that it cuts down some of the major expenses in businesses, specifically the 3Rs: resource costs, research costs, and rework costs. It takes away the cost and effort incurred in creating a productive team, which involves identifying the right talent with specific skill sets who understand the process and tools. Outsourcing optimizes the resource cost as it gives you access to a team of highly efficient resources to work on your project. Similarly, building software requires a lot of research with updated knowledge about the latest technologies and processes. Subscribing to outsourced product development services allows you to save on research cost, as it brings you a team of people with expertise in different domains who ensure the best end results. A supplementing team of resources with relevant expertise within the outsourced organization, along with mature processes, will cut down the rework cost.
  2. Saves time and effort for you to focus on core business – In-house development can take months or even years to set up the required equipment and manufacture the product. Every stage of development can be time-consuming, which ultimately delays the overall process. This elongated method can be curtailed with the help of outsourcing. It accelerates the product development process and powers it with quintessential features, enabling you to focus on the core business.
  3. Ensures growth and innovation with expert assistance – It gives you the scope to work with world-class experts whose vision and expertise can help you innovate and grow. These leading product developers will manage multiple diverse activities, from product design (UX), building technology architecture, product management, development, testing, and quality assurance, to monitoring growth and sales.
  4. Doesn't take away the control from you – It is often believed that roping in an external agency will take away the control from you. That is not true. You are still in charge of the project! It is a collaborative process where transparency is maintained between the client and the service provider.
  5. Awareness of the latest technology and trending strategies – Developing a product can take your business to the market, but to rank at the top, it must be equipped with unique attributes that enhance the overall user experience. Outsourced product development powers your product with the best technology and strategies, keeping in mind the best marketing practices.
  6. Can be outsourced at any point in time – Imagine you want to execute a brilliant product idea but lack the right resources and team. Or you have already started the product development process but feel stuck mid-way as you don't see any progress. Shed your worries, as you can easily outsource product development at any stage and run your development process seamlessly.

Why do companies choose to outsource?

Outsourcing is the trend these days. Outsourcing work to other agencies has become one of the most preferred options for companies. It helps them save money, effort, and time on a lot of additional projects allowing them to focus more on their core business.  

In simpler words, outsourcing is a process of performing a business endeavor outside of the organization.

The work rendered by outsourcing companies varies from industry to industry. The concept of ‘one solution for all problems or industry’ doesn’t exist here.

Every outsourcing company tailors its services as per the unique requirements, goals, and vision of clients. The outsourcing journey is not the same for every company.

Outsourcing IT services has become more accessible and cost-effective due to the powerful communication and collaboration tools used today. These service providers build the product taking into account the unique needs of the client, and deliver high-quality software applying creativity, innovation, and the latest technology in immeasurable ways. This allows vendors and clients from different backgrounds and expertise to connect and create something new and exclusive.

Many big names in the industry have successfully implemented the culture of outsourcing. They have understood the fundamental truth that the most effective way to grow faster and save money is to strategize functions so that all aspects get equal attention. They have decoded this simple trick: internally, they can focus on the core business and outsource the non-core functions. Many companies have even taken the step of outsourcing parts of their core business by associating with a credible service provider. Thus, outsourcing has become an essential component of any successful business strategy.

Companies who have opted for outsourcing services

Outsourcing has become a standard approach for businesses of all sizes, including some popular ones. Let’s see some of the big names that outsource.

  1. Google – Yes, you read that right! The first thought that comes to mind is why an organization as massive as Google would prefer outsourcing when it can clearly set up its own internal team. It is no surprise that they are one of the biggest companies in the world; they have become almost synonymous with the internet itself, as people use the company's name as a verb when talking about searching online. Google is a technology company known for its exceptional business practices and policies, and it has been outsourcing admin and IT work for years.
  2. WhatsApp – It has become one of the most popular modes of communication for people from all walks of life. Millions of people worldwide use WhatsApp daily for most of their communication. Looking at the huge demand, WhatsApp has also ventured into online payments to benefit customers further. WhatsApp wasn't a big company when it started and had limited manpower. They were quick to understand the benefits of outsourcing and decided to go with it, which helped them grow their business without impacting their budget.
  3. GitHub – It is one of the most widely known and used tools for developers and engineers. It quickly became their favorite repository to access data, look for documents, and share and host private code. Even though it was one of the best tools for sharing details on an entire project, it still wasn't apt for small code snippets. To meet this challenge, GitHub came up with another idea, which they named 'Gist.' Though they were ready with the concept and its functionalities, they lacked the time, capital, and resources to develop it. That's when they decided to outsource the work to a developer to build it.
  4. MySQL – The practices and processes of MySQL have made it stand out from its competitors. They clearly understood that they needed to adopt innovative strategies to compete and win, and decided to implement an outsourcing strategy from the first day of their inception. Even today, they continue the same practice and have an almost fully outsourced development team, with operational staff distributed around the world. MySQL is a widely used relational database management system (RDBMS) compatible with several operating systems, including Linux, Solaris, macOS, Windows, and FreeBSD.

Are you ready to make the right choice?

One of the myths about outsourcing product development services is that they are accessible to only larger organizations. That’s not true. They are a suitable option for start-ups or SMEs and can positively impact streamlining their businesses. But before you decide to go ahead with outsourcing product development services, you must analyze a few critical questions. What do I want to derive from this collaboration? In what ways are these services going to benefit me? Do they fit our vision? If you are still unsure, you can scroll up the page and give the blog a quick read. It will certainly bring clarity and help you in decision-making.

With us, you can build a top-quality product by leveraging the expertise, processes, and perspectives of our team, which comes with years of experience working across several domains. Selecting the right service provider to collaborate with can help you take your product to the next level. PrimeSoft is a global IT service provider with expertise in Product Development, Cloud and DevOps, and Quality Assurance. Our in-house industry experts can guide you to make more informed choices and serve your unique needs. Please feel free to reach out to us with your ideas in the comments below. You will hear from us at the earliest.