Recently we performed a survey among our Customers to get more insight into how Ninja is used in real-world Pega projects. We analyzed the results carefully and identified some common patterns in Ninja usage and the benefits Customers typically gain.

Cases & Profit

Find your case below and see how your project may benefit from adopting Ninja.

Unit Testing

Profit:
  • Defects that reach QA reduced by 50%
  • Defect fix cost reduced by 50%
  • Time-to-Market reduced by 20%

Case: We have dozens of outbound (connector) integration points.

Q: How did you do it before Ninja adoption?

A: We wrote end-to-end tests for integration activities which invoked a connector to interact with MQ. The tests took a long time to run and required complex interaction with an MQ stub. Moreover, there was no way to assert the outgoing XML for correctness, which was the main objective.

We tried to put the connector call inside a data page and simulate it for the test. This ran fast, but we could no longer check the XML, which resulted in some nasty defects caused by Parse / Stream rules.

Q: How are you doing it with Ninja?

A: We have Ninja unit tests that mock the connector calls: they assert the outgoing XML and return a mock incoming XML response. The XML assertion is easy to implement and reliable, as Ninja has convenient means for XML and XQuery manipulation.

The tests run fast and check our integration points end-to-end. They require no MQ stubs, and in fact no MQ infrastructure at all.
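The shape of such a test is easy to sketch outside Pega. In the hypothetical JUnit example below, a fake connector captures the outgoing XML and returns a canned response, and the test asserts the payload with XPath; the MqConnector and OrderGateway names are invented for illustration, and Ninja's actual mocking API is not shown:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.io.StringReader;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathFactory;
    import org.junit.jupiter.api.Test;
    import org.xml.sax.InputSource;

    // Hypothetical stand-ins for a Pega MQ connector call and the rule invoking it.
    interface MqConnector {
        String send(String requestXml); // sends a request, returns the response XML
    }

    class OrderGateway {
        private final MqConnector connector;

        OrderGateway(MqConnector connector) { this.connector = connector; }

        String submit(String orderId, int quantity) {
            String xml = "<Order><Id>" + orderId + "</Id><Qty>" + quantity + "</Qty></Order>";
            return connector.send(xml);
        }
    }

    class OrderGatewayTest {
        @Test
        void outgoingXmlCarriesOrderId() throws Exception {
            StringBuilder captured = new StringBuilder();
            // Fake connector: records the outgoing XML and returns a canned response,
            // so the test needs no MQ stub or MQ infrastructure at all.
            MqConnector fake = xml -> { captured.append(xml); return "<Ack>OK</Ack>"; };

            String response = new OrderGateway(fake).submit("A-42", 3);

            XPath xp = XPathFactory.newInstance().newXPath();
            assertEquals("A-42", xp.evaluate("/Order/Id",
                    new InputSource(new StringReader(captured.toString()))));
            assertEquals("<Ack>OK</Ack>", response);
        }
    }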

Case: We have several inbound (service) integration points which process requests asynchronously (via an agent job).

Q: How did you do it before Ninja adoption?

A: We wrote a straightforward test that invoked the service activity and checked the processing result (a Work object status change). The test was very fragile, as it had to wait until the job was performed by the agent.

We also had to set up and maintain a real Work object in the database for each such test.

Q: How are you doing it with Ninja?

A: We have a complex unit test that mocks the call that schedules the agent job and then executes the job synchronously. The test also mocks opening the Work object from the database, constructing a mock clipboard page instead.

The test is stable and runs fast, as no interaction with the agent subsystem or database takes place.
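The synchronous-execution trick itself is simple to sketch: the scheduling call is replaced by one that records the job, and the test then runs the recorded job inline. The JobScheduler interface below is hypothetical, standing in for the agent-scheduling call that the real test mocks:

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Hypothetical sketch of "execute the scheduled job synchronously".
    // Nothing here touches the agent subsystem or the database.
    public class SynchronousSchedulerSketch {
        interface JobScheduler { void schedule(Runnable job); }

        public static void main(String[] args) {
            Queue<Runnable> pending = new ArrayDeque<>();
            JobScheduler recording = pending::add; // capture instead of scheduling

            // Code under test "schedules" its asynchronous part...
            recording.schedule(() -> System.out.println("processing Work object"));

            // ...and the test drains the queue synchronously, so there is no
            // fragile waiting for an agent to pick the job up.
            while (!pending.isEmpty()) {
                pending.poll().run();
            }
        }
    }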

Case: We have rules that rely on the current date and time.

Q: How did you do it before Ninja adoption?

A: We had to skip checking fields / logic branches that relied on the current date / time (received via a Function rule). This led to several blocking defects that were found only during QA / integration testing.

Q: How are you doing it with Ninja?

A: We covered these cases with unit tests as well, mocking the date / time Function rules to return constant predefined values.
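The same determinism trick can be shown with plain java.time: the code receives a clock as a dependency, and the test pins it to a constant instant. The DiscountPolicy class below is a hypothetical example; in Pega the mock targets the Function rule itself, but the principle is identical:

    import java.time.Clock;
    import java.time.Instant;
    import java.time.LocalDate;
    import java.time.ZoneOffset;

    public class FixedClockSketch {
        static class DiscountPolicy {
            private final Clock clock;
            DiscountPolicy(Clock clock) { this.clock = clock; }
            boolean blackFridayActive() {
                LocalDate today = LocalDate.now(clock);
                return today.getMonthValue() == 11 && today.getDayOfMonth() == 29;
            }
        }

        public static void main(String[] args) {
            // Pin "now" to a constant predefined value, as the mocked Function rule does.
            Clock fixed = Clock.fixed(
                    Instant.parse("2024-11-29T10:00:00Z"), ZoneOffset.UTC);
            // Deterministic: the branch is always exercised, regardless of the real date.
            System.out.println(new DiscountPolicy(fixed).blackFridayActive()); // true
        }
    }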
Case: We have thousands of tests and want to run them quickly.

Q: How did you do it before Ninja adoption?

A: Tests that interacted with the database or network took several seconds to run, and all tests were run sequentially by Pega.

The whole run took several hours, which slowed down team reaction time and made Continuous Integration / Delivery hard to implement.

Q: How are you doing it with Ninja?

A: By convention, our unit tests do not interact with the database or network (such slower tests are used at later stages).

Tests are easily configured to run in parallel.

The whole run now takes only 10 minutes.
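The gain from the no-database / no-network convention is that such tests are trivially safe to parallelise. A toy Java sketch of the principle (the "tests" here are placeholder callables; Ninja's actual parallel runner configuration is not shown):

    import java.util.List;
    import java.util.concurrent.*;

    public class ParallelRunSketch {
        public static void main(String[] args) throws InterruptedException {
            // Each task stands in for an isolated unit test: pure computation,
            // no shared database or network state, so ordering does not matter.
            List<Callable<Boolean>> tests = List.of(
                    () -> 2 + 2 == 4,
                    () -> "abc".contains("b"),
                    () -> Integer.parseInt("42") == 42);

            ExecutorService pool = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors());
            try {
                long failed = pool.invokeAll(tests).stream()
                        .filter(f -> {
                            try { return !f.get(); }
                            catch (Exception e) { return true; }
                        })
                        .count();
                System.out.println(failed == 0 ? "All tests passed" : failed + " failed");
            } finally {
                pool.shutdown();
            }
        }
    }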

Release Automation

Profit:
  • Build / deploy cost reduced by 80%
  • Time-to-Market reduced by 20%

Case: Our applications include Pega rules, DB updates, and an auxiliary (XSD WAR) Java application.

Q: How did you do it before Ninja adoption?

A: Deployment procedures for QA / UAT / production were all slightly different (e.g. manual vs. automatic DB schema update). A Word document describing the numerous manual steps was provided with every delivery.

Deployment incidents in operations were very frequent (human errors, cumbersome instructions), and incident analysis was complicated by the lack of logs for the manual steps.

Q: How are you doing it with Ninja?

A: We have a single, comprehensive build script per release which generates an executable deployment package.

Deployment to every environment is done by running the same executable package (with a different customization property file per environment).

A detailed deployment log is available for analysis.

Case: We need to set integration and business settings to different values on different environments.

Q: How did you do it before Ninja adoption?

A: Manual configuration instructions were prepared for each environment, and deployment involved massive manual configuration steps.

Very often the operations team misconfigured a setting such as a DSS or an Authentication Profile, leading to hard-to-find problems.

Q: How are you doing it with Ninja?

A: In the build script we configure settings like DSS to receive their values from customization properties, so at deployment time they are set to the correct values automatically.

No manual configuration of any rules or data instances is performed during deployment.
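As an illustration of the idea, here is a hypothetical Java sketch that reads an environment-specific properties file and applies every dss.* entry as a Dynamic System Setting. The file layout, the DssClient interface, and the MyApp owner are all invented for the example; Ninja's build script wires this differently:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class ApplySettings {
        // Hypothetical client for writing Dynamic System Settings.
        interface DssClient { void setSetting(String owner, String key, String value); }

        public static void main(String[] args) throws IOException {
            String env = args.length > 0 ? args[0] : "qa"; // qa, uat, prod...
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream(env + ".properties")) {
                props.load(in);
            }
            DssClient dss = (owner, key, value) ->
                    System.out.printf("DSS %s/%s = %s%n", owner, key, value);
            // Every property prefixed with "dss." becomes a Dynamic System Setting,
            // so the same package deploys correctly to any environment.
            props.stringPropertyNames().stream()
                    .filter(k -> k.startsWith("dss."))
                    .forEach(k -> dss.setSetting("MyApp", k.substring(4),
                            props.getProperty(k)));
        }
    }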

Case: Multiple releases are developed on a single dev environment.

Q: How did you do it before Ninja adoption?

A: Lots of product rules existed (product rule hell): cumulative, incremental, data-only, admin-data-only, etc. One FTE architect worked exclusively on tasks related to product rules.

Yet quite often rules were missing from a build, the wrong application version was set in access groups, and so on.

Q: How are you doing it with Ninja?

A: All product rules were replaced by a single build script. The script's advanced data / rule inclusion instructions minimize the number of times it needs to be altered.

Access groups are configured to receive the correct application version, no matter which version is set in the dev environment during export.

Code Review

Profit:
  • Average review time decreased from 40 minutes to 20 minutes (50%)
  • Defect fix cost reduced by 10%

Case: We have a large (30+) team of architects whose expertise and experience range from LSA to intern.

Q: How did you do it before Ninja adoption?

A: An architect manually prepared a list of rule changes and attached it to the task in the tracking system. One could accidentally forget to specify some change, and it would go into the release unreviewed, potentially worsening quality or even causing defects.

The reviewer had to open each rule and verify the changes against the previous rule snapshot. Discussions occurred in task comments in the tracking system, which complicated communication, as it was not always clear which change needed to be improved.

Q: How are you doing it with Ninja?

A: During check-in, architects provide a mandatory tracking system task ID. Changes are tracked by Ninja and stored in a standard Git repository. Our JIRA (tracking) and Crucible (code review) tools are integrated with it, providing automated changelist creation.

The reviewer sees all changed rules / instances and their changes (in coloured diff format) in a convenient interface. Discussions take place in threads, in the context of particular rule changes.

Case: Our architects often need to thoroughly investigate change history while fixing defects or implementing CRs.

Q: How did you do it before Ninja adoption?

A: When trying to find who introduced a particular change, when, and why, architects had to trace rule changes by manually comparing the rule forms of change history snapshots – a tedious and time-consuming process.

Changes in non-versioned rules and admin data instances could not be traced at all, and they were often of great importance.

Architects were allowed to make check-ins with non-informative, task-unbound messages like 'test', 'fixed formatting', etc.

Q: How are you doing it with Ninja?

A: Change history with Ninja provides change comparison (in coloured diff format) for most rules and admin data instances. With this instrument it is very easy to pinpoint a change event in a complex rule.

Changes to non-versioned rules (ruleset, application, etc.) and admin data instances (access group, DSS, etc.) are tracked as well – the commit message dialog now appears for them too.

A commit message must now reference a tracking system task, and the integration with JIRA makes this both consistent and convenient.
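The mandatory task reference is easy to check mechanically at commit time. A minimal sketch of such a validation (the JIRA issue-key pattern is conventional; the class name and hook wiring are hypothetical, not Ninja's implementation):

    import java.util.regex.Pattern;

    // Sketch of a commit-message check: accept only messages that start with
    // a JIRA-style issue key such as "PROJ-123: fix validation".
    public class CommitMessageCheck {
        private static final Pattern ISSUE_KEY =
                Pattern.compile("^[A-Z][A-Z0-9]+-\\d+\\b.*", Pattern.DOTALL);

        public static boolean isValid(String message) {
            return message != null && ISSUE_KEY.matcher(message.trim()).matches();
        }

        public static void main(String[] args) {
            System.out.println(isValid("PROJ-123: fix XML mapping")); // true
            System.out.println(isValid("fixed formatting"));          // false
        }
    }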

Rule Refactoring

Profit:
  • Defects that reach QA reduced by 5%
  • Refactoring cost reduced by 50%

Case: Refactoring needs to be done on a regular basis. It includes rule contract changes, rule withdrawal / blocking, etc.

Q: How did you do it before Ninja adoption?

A: Before withdrawing an obsolete rule, an architect had to open "Find Usages", then open each referencing rule and make sure it had a refactored / withdrawn later version. This was a long and error-prone procedure.

Even worse, some references were not automatically detected by "Find Usages" (e.g. calling a Decision Table from a Data Transform via a Function rule). Unless found by an experienced architect via full-text search, these led to blocking defects.

A: "Find Usages" now intelligently hides refactored incoming references (i.e. with newer versions withdrawn or not referencing this rule anymore). This greatly simplified and sped up the process.

We trust "Find Usages" more and need to use full-text search less often now, as Ninja introduces numerous improvements in rule dependency analysis.
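The filtering idea behind this behaviour can be sketched compactly: for each referencing rule, look only at its latest version, and drop it if that version is withdrawn or no longer references the target. The RuleRef model below is invented for illustration and is not Ninja's internal representation:

    import java.util.*;

    public class FindUsagesFilter {
        record RuleRef(String ruleName, int version,
                       boolean referencesTarget, boolean withdrawn) {}

        // Keep a reference only if the *latest* version of the referencing rule
        // still points at the target and is not withdrawn; older, already
        // refactored references are noise for the architect.
        static List<RuleRef> activeReferences(List<RuleRef> all) {
            Map<String, RuleRef> latest = new HashMap<>();
            for (RuleRef r : all) {
                latest.merge(r.ruleName(), r,
                        (a, b) -> a.version() >= b.version() ? a : b);
            }
            return latest.values().stream()
                    .filter(r -> r.referencesTarget() && !r.withdrawn())
                    .toList();
        }

        public static void main(String[] args) {
            List<RuleRef> refs = List.of(
                    new RuleRef("CalcDiscount", 1, true, false),  // old version referenced it
                    new RuleRef("CalcDiscount", 2, false, false), // newest version refactored away
                    new RuleRef("MapOrder", 3, true, false));     // still a live reference
            System.out.println(activeReferences(refs)); // only MapOrder v3 remains
        }
    }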

Case: We have complex components with particular SSAs responsible for their design.

Q: How did you do it before Ninja adoption?

A: We relied on the convention that every change in such a component should be brought to the responsible SSA's notice. Some violations of this convention resulted in expensive defects.

Q: How are you doing it with Ninja?

A: Our SSAs now subscribe to critical rule changes (via Pega Favourites) and receive notifications automatically.
Code Analysis

Profit:
  • Maintenance cost reduced by 20%

Case: We have lots of complex business and integration logic implemented in Activity / Data Transform rules.

Q: How did you do it before Ninja adoption?

A: We published conventions describing Activity / Data Transform implementation best practices and enforced them via Code Review. This put an additional burden on senior team members.

Q: How are you doing it with Ninja?

A: Important best practices have been moved from the manual Code Review stage to the automatic Code Analysis stage, which allowed SSAs to switch to more important tasks.
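As a flavour of what such automated checks look like, here is a hypothetical sketch of one rule: flag Activities that exceed a step budget. Both the Activity model and the 15-step threshold are invented for the example; Ninja's actual analyzer and rule set differ:

    import java.util.List;

    public class StepCountCheck {
        record Activity(String name, int stepCount) {}

        static final int MAX_STEPS = 15; // illustrative convention, not Ninja's

        // Produce one finding per activity that breaks the convention.
        static List<String> findings(List<Activity> activities) {
            return activities.stream()
                    .filter(a -> a.stepCount() > MAX_STEPS)
                    .map(a -> a.name() + ": " + a.stepCount()
                            + " steps exceeds budget of " + MAX_STEPS)
                    .toList();
        }

        public static void main(String[] args) {
            System.out.println(findings(List.of(
                    new Activity("ProcessOrder", 22),
                    new Activity("MapRequest", 7))));
        }
    }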
Continuous Delivery

Profit:
  • Defects that reach QA reduced by 10%
  • Maintenance cost reduced by 20%
  • Time-to-Market reduced by 20%

Case: We have multiple releases, each with its own version of rules, DB scripts, XSD schemas, tests, etc.

Q: How did you do it before Ninja adoption?

A: Pega artifacts, database scripts, and XSD files were all managed independently. Team members were confused about which version of a file / rule they should change and where it needed to be merged.

Consistency of the application was checked only when preparing a release for the testing phase, and incidents caused by an incorrect release / merge were frequent.

Q: How are you doing it with Ninja?

A: We have a robust yet simple convention for naming and placing application source artifacts in two main places: Pega and the source repository (Git / SVN, depending on the project). Every release has its own ruleset versions, its own repository branch for scripts / schemas, and so on.

Scripted configuration / build allows for a simple merge process supported by the repository tools.

The Build Server conveniently supports branches in the pipeline, allowing each release to be independently and continuously built, tested, and published.