DLM Dashboard tracks your database schemas and alerts you when they change.
DLM Dashboard monitors up to 50 of your databases and sends you an email alert as soon as any of them starts to drift, or change from its expected state.
SQL Search is a free add-in for SQL Server Management Studio that lets you quickly search for SQL across your databases.
Drawing a distinction between test automation and continuous testing may seem like an exercise in semantics, but the gap between automating functional tests and executing a continuous testing process is substantial.
Any true change initiative requires the alignment of people, process and technology—with technology being an enabler and not the silver bullet. Yet there are some basic technology themes we must explore as we migrate to a true quality assurance process. In general, we must shift from a sole focus on test automation to automating the process of measuring risk. To begin this journey, we must consider the following:
With quality assurance (QA) traditionally executing manual or automated tests, the feedback from the testing effort centres on the event of a test passing or failing; this is not enough. Tests are causal: each is constructed to validate a very specific scope of functionality and is evaluated as an isolated data point. Although these standalone data points are critical, we must also use them as inputs to an expanded equation for statistically identifying application hot spots.
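As an illustration of using individual pass/fail observations as inputs to a larger equation, the sketch below aggregates hypothetical per-component test results across CI runs into failure rates and flags the components whose rates exceed a risk threshold. The component names, data shape and threshold are assumptions for the example, not part of any specific tool.

```python
from collections import defaultdict

# Hypothetical test observations: (component, passed) tuples collected
# across many CI runs. In a real pipeline these would come from the
# CI engine's test-result reports.
observations = [
    ("billing", False), ("billing", False), ("billing", True),
    ("search", True), ("search", True),
    ("login", False), ("login", True), ("login", True), ("login", True),
]

def hot_spots(results, threshold=0.5):
    """Rank components by failure rate; flag those above a risk threshold."""
    totals, failures = defaultdict(int), defaultdict(int)
    for component, passed in results:
        totals[component] += 1
        if not passed:
            failures[component] += 1
    rates = {c: failures[c] / totals[c] for c in totals}
    return sorted(
        ((c, r) for c, r in rates.items() if r >= threshold),
        key=lambda item: item[1],
        reverse=True,
    )

print(hot_spots(observations))  # billing fails 2 of 3 runs -> flagged
```

Each individual result remains a simple causal data point; it is the aggregation over runs and components that turns them into a probabilistic signal about where the application is fragile.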
The SDLC produces a significant amount of data that is rather simple to correlate. Monitoring process patterns can produce very actionable results. For example, a code review should be triggered if an application component exhibits a combination of correlated issues in a given continuous integration build.
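A build-monitoring policy of this kind can be sketched in a few lines. The specific signals below (new static-analysis findings, a coverage drop, failing tests) and the metric names are assumptions chosen for illustration; a real pipeline would pull whatever signals its CI engine and analysis tools actually report.

```python
# A minimal sketch of a build-monitoring policy. The signal names are
# hypothetical; a real pipeline would read them from CI and
# static-analysis reports.
def needs_code_review(build_metrics, component):
    """Trigger a review only when several risk signals co-occur."""
    m = build_metrics[component]
    triggers = [
        m["new_static_analysis_findings"] > 0,  # fresh warnings introduced
        m["coverage_delta"] < 0,                # test coverage dropped
        m["failed_tests"] > 0,                  # regressions in this build
    ]
    return all(triggers)  # any single signal alone is not enough

metrics = {"billing": {"new_static_analysis_findings": 3,
                       "coverage_delta": -1.5,
                       "failed_tests": 2}}
print(needs_code_review(metrics, "billing"))  # True
```

Requiring the signals to co-occur, rather than alerting on each in isolation, is what makes the output actionable instead of noisy.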
The ping-pong between testers and developers over the reproducibility of a reported defect has become legendary. It’s harder to return a defect to development than it is to send back an entrée from a world-renowned chef. Given the aggressive goal to accelerate software release cycles, most organisations will save a significant amount of time by just eliminating this back and forth.
By leveraging Service Virtualization for simulating a test environment and/or virtual machine record and playback technologies for observing how a program executed, testers should be able to ship development a very specific test and environment instance in a simple containerised package. This package should isolate a defect by encapsulating it with a test, as well as give developers the framework required to verify the fix.
The current tools and infrastructure systems used to manage the SDLC have made significant improvements in the generation and integration of structured data (e.g., how CI engines import and present test results). This data is valuable and must be leveraged much more effectively (as we stated above in the “From Causal Observations to Probabilistic” section).
The wealth of unstructured quality data scattered across both internal and publicly-accessible applications often holds the secrets that make the difference between happy end users and unhappy prospects using a competitor’s product. For example, developers of a mobile application would want constant feedback on trends in end user comments.
This data is considered unstructured since the critical findings are not presented in a canonical format: parsing and secondary analysis are required to extract the valuable information. Although these inputs might be monitored by product marketers or managers, providing these data points directly to development and testing teams—in terms that practitioners can take action on—is imperative.
In a Continuous Everything world, quality gates will enable a release candidate to be promoted through the delivery pipeline. Anything that requires human validation clogs the pipeline. Dashboards require human interpretation—delaying the process.
Dashboards are very convenient for aggregating data, providing historical perspectives on repetitive data, and visualizing information. However, they are too cumbersome for real-time decision making because they do not offer actionable intelligence.
Business policies help organisations evolve from dashboards to automated decision making. By defining and automatically monitoring policies that determine whether the release candidate is satisfying business expectations, quality gates will stop high-risk candidates from reaching the end user. This is key for mitigating the risks inherent in rapid and fully-automated delivery processes such as Continuous Delivery.
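A quality gate of the kind described above can be expressed as a small set of machine-checkable policies evaluated against a release candidate's metrics. The policy names, thresholds and metric fields below are illustrative assumptions, not any product's actual API.

```python
# Illustrative business policies for a quality gate; thresholds and
# metric names are assumptions for the sketch.
POLICIES = {
    "max_open_critical_defects": 0,
    "min_test_pass_rate": 0.98,
    "max_security_findings": 0,
}

def gate(candidate):
    """Return (promote, violations) for a release candidate's metrics."""
    violations = []
    if candidate["open_critical_defects"] > POLICIES["max_open_critical_defects"]:
        violations.append("open critical defects")
    if candidate["test_pass_rate"] < POLICIES["min_test_pass_rate"]:
        violations.append("test pass rate below threshold")
    if candidate["security_findings"] > POLICIES["max_security_findings"]:
        violations.append("unresolved security findings")
    return (not violations, violations)

ok, why = gate({"open_critical_defects": 1,
                "test_pass_rate": 0.99,
                "security_findings": 0})
print(ok, why)  # False ['open critical defects']
```

Because the decision is a pure function of recorded metrics, it needs no human interpretation: the pipeline either promotes the candidate or halts with a concrete list of violations to act on.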
Let’s face it—it’s cheap to run tools. And with the availability of process intelligence engines, the more data observations we can collect across the SDLC, the more opportunities will emerge to discover defect prevention patterns.
Given the benefit of a large and diverse tool set, we need to shift focus from depending on a single “suite” of tools from a specific vendor (with a specific set of strengths and weaknesses) to having a broad array of SDLC sensors scattered across the software development life cycle. And to optimise both the accuracy and value of these sensors, it’s critical to stop allowing tools to be applied in the ad hoc manner that is still extremely common today. Rather, we need to ensure they are applied consistently and that their observations are funneled into a process intelligence engine, where they can be correlated with other observations across tools, across test runs and over time. This will not only increase the likelihood of identifying application hot spots, but will also decrease the risk of false negatives.
To learn more about evolving from automated testing to Continuous Testing, read the new 70-page book Continuous Testing for IT Leaders, which explains how Continuous Testing can help your organisation answer the question “Does the release candidate have an acceptable level of business risk?”
You’ll learn how to:
This book provides a business perspective on how to accelerate the SDLC and release with confidence. It is written for senior development managers and business executives who need to achieve the optimal balance between speed and quality.
TrapX Security™, a global leader in advanced cyber security defense, today announced that Blue Turtle, a leader in solutions for the optimization and management of IT systems, has chosen TrapX DeceptionGrid™ to expand its security service portfolio. The partnership helps solidify TrapX’s growth in South Africa by providing Blue Turtle’s customers with the industry’s leading deception-based technology.
DeceptionGrid automates the deployment of a network of camouflaged malware traps that are intermingled with real information technology resources. If malware touches DeceptionGrid just once, it sets off a high-confidence alert. Real-time automation isolates the malware and delivers a comprehensive assessment directly to an organization’s security operations team.
“We offer a comprehensive network security portfolio of products to our customers,” said Martyn Healy, Marketing Director at Blue Turtle. “But, as we’ve seen in recent months, there’s always going to be some element of risk even to organizations that have bottomless pockets and spend untold millions attempting to build a fortified network perimeter. We are excited to partner with TrapX, as we believe that TrapX DeceptionGrid offers an extra layer of protection and mitigation that’s been proven reliable in case an attacker does manage to penetrate our perimeter defenses.”
“The fact is there is no one foolproof way to protect an organization’s data against aggressive attackers and crime syndicates. It’s no longer a question of ‘if’ a large organization has been penetrated, but ‘when’,” said Carl Wright, General Manager of TrapX Security. “What’s important is a layered security approach that includes a fully-featured firewall, endpoint and deception protection. A properly configured network security stack with DeceptionGrid substantially reduces the time to breach detection and practically eliminates false positive alerts, which is one of the biggest complaints coming out of IT departments today. We are pleased to be offering Blue Turtle’s customers peace of mind that our software will help protect them from the latest malware and advanced persistent threats.”
Myth Busters! Have you ever received conflicting information about DCIM? How about wrong information? Data center infrastructure management (DCIM) is a complex topic, so it’s no wonder that misperceptions and spin-doctoring are common.
Register for Nlyte Software’s educational webinar to learn more about DCIM myths, then discuss the pertinent facts and the steps to take when faced with one.
Webinar Registration: Common Myths About DCIM – Debunked
Choose a session on Wednesday, April 29:
Session 1 at 15:00 GMT / 10am ET
Session 2 at 2pm ET / 1pm CT / 11am PT
Please select the courses you would like to register for, as well as the number of delegates you will be registering. A 15% discount is offered for the third delegate when a single customer registers three delegates for a single course.
Please note that invoices will be emailed to the contact email address provided; proof of payment will secure your registration.
If you are experiencing issues submitting this page, or require assistance, please email firstname.lastname@example.org.
Blue Turtle partners with Quorum to deliver resilient, reliable, cost-effective onQ On-Site Disaster Recovery services to protect data and applications for enterprises of all sizes.
Blue Turtle, a leading South African technology management company, today announced its partnership with Quorum, the number one provider of one-click backup and disaster recovery (DR) for small to mid-sized businesses.
Blue Turtle has expanded its infrastructure product portfolio with Quorum’s award-winning hypervisor-based DR solution, onQ On-Site. onQ On-Site is the building block of Quorum’s unique high-availability (HA), DR, and disaster recovery as a service (DRaaS) capabilities. Organisations may choose DRaaS, which protects on-premises applications to the cloud, or in-cloud DR, which protects applications deployed in the cloud by replicating them to a secondary cloud data centre.
Quorum’s onQ On-Site effectively maintains up-to-date virtual machine clones of critical systems, and transparently takes over failed servers within minutes. Leveraging Quorum technologies, Blue Turtle now offers a simple, fast and cost-effective platform to protect and recover critical IT services for users; whether service interruption is caused by man-made or natural disaster, equipment failure or data loss.
“Disaster recovery is often considered a first step for small to medium enterprises transitioning to a fully outsourced IT model, consisting of co-location, managed hosting, and cloud. The Quorum platform allows organisations to test the viability of this strategy, while meeting critical business continuity objectives,” says Avash Maharaj, Infrastructure Business Manager for Blue Turtle Technologies. “With Quorum’s onQ On-Site offering, enterprises can eliminate months’ worth of cloud compute costs by maintaining a disaster recovery strategy.”
With Quorum, downtime events are essentially eliminated, ensuring instant asset restoration, so that daily activities remain uninterrupted and business continuity prevails. “Easy to use, cost-effective and PCI-compliant, the Quorum onQ On-Site solution is the best defence against costly downtime events, which are particularly vexing for mid-market government and educational organisations,” said Walter Angerer, CEO at Quorum. “With its Quorum-powered onQ HA solution, customers can rest assured that they have the very best in disaster recovery protecting their critical assets.”
Worldwide disaster recovery and business continuity arrangements look bleak, and South African (SA) enterprises echo these trends. According to an EMC-sponsored survey: “The top three causes of data loss and system downtime in SA are loss of power (56%), hardware failure (51%) and software failure (50%)”1. This hurts business through lost employee productivity and lost revenue. The ability to instantly recover critical data, systems, and applications after a failure or disaster is crucial to the continuity of business operations.
David Fisk, EMEA Sales Director at Quorum, reiterated: “Many organisations today need a highly-available disaster recovery solution to protect critical applications and data, but without the traditional price tag or complications. As a result, Quorum’s highly regarded technology is plugging a gap in the market with its simple to manage and cost-effective platform. In fact, Network Products Guide 2013 Best Products Awards announced Quorum as a finalist for the ‘Best Cloud Storage and Backup Solution’, as this is the only solution in the industry that delivers assured one-click recovery in minutes.”
1 EMC Disaster Recovery Survey 2013: South Africa
Blue Turtle Technologies expands its current security service portfolio with the introduction of a comprehensive phishing risk assessment tool.
Phish5 is a product of 4Di Privaca, a leading provider of information security services. 4Di Privaca offers a variety of services that seek to minimize companies’ exposure to risk, unnecessary delay, or overly restrictive controls on online networks. By adding Phish5, Blue Turtle is looking to emphasize security awareness, and to offer a solution that addresses the risk phishing poses to an organization.
Blue Turtle believes that Phish5 will enable businesses to gain an understanding of their phishing risks. As a direct result of understanding these risks, these businesses will be able to improve the performance and operation of secure working environments by neutralizing the danger that phishing poses.
Simon Shaw, Business Development Manager at Blue Turtle, states: “We are excited about adding Phish5, a South African developed product, to our solution set. It is mostly concerned with resolving the risk around the behavior of employees carelessly responding to inappropriate emails and exposing credentials to unauthorized parties. If these risks are not understood, organizations stand to lose not only money, but also risk reputational damage.”
South Africa is the second-most targeted country globally, with 1 in 170.9 emails identified as phishing attacks. Newton Lee, the author of Counterterrorism and Cybersecurity, put it like this: “As the world is increasingly interconnected, everyone shares the responsibility of securing cyberspace.” Running these phishing campaigns promises a more secure online environment.
Symantec Intelligence Report, June 2012