Testing Content Server browse performance (Part 2)

Matthew Barben
Driver Lane

--

This is part two in my series on Content Server browse performance. For the idea behind the setup of this test, have a look at Part 1.

Now that the tests have been established, we can proceed to write them using Puppeteer.

Classic UI is straightforward

There are plenty of ways to create a Puppeteer script. For a good first start you can add an extension to your Chrome browser, like the Puppeteer Recorder (https://chrome.google.com/webstore/detail/puppeteer-recorder/djeegiggegleadkkbgopoonhjimgehda). With it recording in the background, it tracks your clicks and creates boilerplate code for you to use.

Recording for Puppeteer

And because of the way that the page is served in Classic UI, the elements are easy to select and the clicks are reliable.
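A recorded Classic UI script ends up looking something like the sketch below. The link titles here reuse the test structure from this series, but the exact selectors are whatever the recorder captures on your system. Because each click in Classic UI triggers a full page load, each click is paired with a navigation wait:

```javascript
// Minimal sketch of a recorded Classic UI browse.
// Selectors are illustrative; the recorder produces the real ones.
async function browseClassic(page) {
  // Click a folder link and wait for the resulting full page load
  await Promise.all([
    page.waitForNavigation(),
    page.click('a[title="Performance Testing"]'),
  ]);
  // Repeat for the next level of the structure
  await Promise.all([
    page.waitForNavigation(),
    page.click('a[title="50 folders"]'),
  ]);
}
```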

Smart UI not so much

For the Smart UI I had to use a different approach. Yes, click-throughs would work, but I needed a better way to determine when the page had loaded.

So the approach changes a little bit. On first loading the Enterprise Workspace, I can use the following to detect that the page has loaded correctly:

await page.waitFor('body.binf-widgets > .cs-perspective-panel > .cs-perspective');

But to click through, I need to perform an attribute lookup to determine the correct link:

await page.click('a[title="Performance Testing"]');

Now, because I have clicked instead of loading a new page, I need to wait until the next link is present before clicking it:

await page.waitFor('a[title="50 folders"]');
await page.click('a[title="50 folders"]');

Using the title attribute on the anchor link means I can take this test to a different system and import the same folder/file structure (yay, no node IDs).
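The wait-then-click pattern repeats at every level of the structure, so it can be pulled into a small helper. This is a sketch of my own, not code from the original test: since Smart UI is a single-page app there is no navigation event to wait for, so after each click we wait for the selector of the next link before continuing.

```javascript
// Hypothetical helper: click a Smart UI link by its title attribute,
// then wait for the next link to appear before moving on.
async function clickAndWaitForNext(page, clickTitle, nextTitle) {
  await page.click(`a[title="${clickTitle}"]`);
  await page.waitForSelector(`a[title="${nextTitle}"]`);
}
```

Walking the test structure then becomes a series of calls like `clickAndWaitForNext(page, 'Performance Testing', '50 folders')`.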

Screenshot taken from Puppeteer

Capturing timings

The next important thing is to pick up the timings. There are two ways I am checking this:

  • First, a start and end time for the whole test (end to end); this shows how long it takes to browse through the system (given that in many instances users will browse 2–3 levels deep into a structure).
  • Performance timings on each step as we go through.

For end-to-end timing, I am using moment.js to capture the start and the end of the test:

const testResultData = {
  runid: uuid(),
  type: 'SMUI_Fifty_Items',
  steps: [],
  start: moment(),
  end: '',
  duration: ''
};

And finally, at the end of the run, record the finish time and capture the duration of the test:

testResultData.end = moment();
testResultData.duration = moment.duration(testResultData.end.diff(testResultData.start));

For capturing the timing of the responses from the server along the way, I am using a command similar to this:

let enterpriseWorkspace = JSON.parse(await page.evaluate(() => JSON.stringify(window.performance.timing)));
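That snapshot is just the raw `window.performance.timing` object, so the per-step numbers still need to be derived from it. A small helper along these lines (my own sketch, not part of the original test) turns a snapshot into a few useful metrics; the field names come from the Navigation Timing Level 1 API that `performance.timing` exposes, and all values are in milliseconds:

```javascript
// Derive step metrics from a performance.timing snapshot.
function stepMetrics(t) {
  return {
    ttfb: t.responseStart - t.requestStart,    // time to first byte
    download: t.responseEnd - t.responseStart, // response body transfer
    total: t.loadEventEnd - t.navigationStart, // full page load
  };
}
```

Each result can then be pushed onto the `steps` array of the test record as the script clicks through the structure.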

Using another free Node module, lowdb, I can capture and store the results.
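Storing a run with lowdb can look like the sketch below. This assumes lowdb's v1-style API with the FileSync adapter, and the `results.json` file name is an assumption of mine; the wiring lines are shown as comments since they depend on lowdb being installed:

```javascript
// Persist a test run into a lowdb instance that has a `results`
// array at its root (e.g. seeded with db.defaults({ results: [] })).
function saveResult(db, result) {
  db.get('results').push(result).write();
}

// Wiring it up (assumes lowdb v1 is installed):
// const low = require('lowdb');
// const FileSync = require('lowdb/adapters/FileSync');
// const db = low(new FileSync('results.json'));
// db.defaults({ results: [] }).write();
// saveResult(db, testResultData);
```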

Continue Reading…

In Part 3, with the code running and the tests performed, I look at the results.

Connect with Driver Lane on Twitter and LinkedIn, or directly on our website.
