AUTOMATED PERFORMANCE TRACKING FABIAN FRANZ

Fabian Franz aka Fabianx, @fabianfranz
 Senior Performance Engineer / Technical Lead

@Tag1 Consulting Inc.

CORE CONVERSATIONS

1. Motivation

Why does Drupal need this?

Why does Drupal need this? Performance is a very important factor on today's web:

Sub 1.0 s goal for mobile

Google Penalty for slower pages

Studies show that users buy less when pages take too long to load.

Current State

Performance Gate A great gate, but it only works based on individual motivation.

Cannot be enforced, because it cannot be measured

=> BOOM

Current Performance Tracking How long does the test suite take?

Does ab -n report the same average?

Only for certain obvious issues, e.g. theme layer.

=> Missing all real and hidden performance regressions.

Current Performance Tracking https://www.drupal.org/project/webprofiler

Current Performance Tracking Only for Drupal Core

Client sites can be _way_ different

e.g. drupal_find_theme_functions() taking 300 ms on core and 3.5s on a client site

Interesting Questions Is Drupal 8 slower than Drupal 7 at the back-end?

What about front-end performance?

How many database calls does Drupal 8 make per request?

How many bytes are transferred on average?

Automating the process

Back-End Automation XHProf-Kit + XHProf-Lib

Measured every single patch for the Twig theme conversions

Revealed lots of performance problems elsewhere, too

=> Still very painful and hard work :-(

Back-End Automation

Have to manually setup a scenario:

e.g. 50 nodes with 20 comments each

Back-End Automation

=> Scenario needs to match what changed in the code

=> Need common cases of scenarios

Back-End Automation

For theme-to-Twig conversions:

=> sometimes very difficult to find page to measure

Questions to Ask

Questions to Ask

How can we make this simpler?

Can it be as simple as running the test suite?

Can we combine it with the test suite?

2. What is automated performance tracking?

How can performance be tracked?

Types of Performance Tracking Front-End

Back-End

Database

Remote Services Overhead

-- Scalability

Types of Performance Tracking Not linear, but a tree structure:

Frontend -> Backend -> DB

Frontend -> Backend -> Remote Services

Frontend -> Page Assets (JS/CSS, Images)

What to track? Time / Speed / Performance (raw)

Painting the page takes 10 ms

BlockController::whoOnline() takes 560 ms

Database Request takes 300 ms

What to track? Time / Speed / Performance (average)

Executing the JS takes on average 50 ms

Route user.page takes on avg. 780 ms

Memcache requests take 1.15 ms on avg.

What to track?

Quantitative, e.g.:

Number of bytes transferred

Number of function calls

What to track?

Quantitative, e.g.:

Number of Cache Hits (Render Cache)

File system accesses

What to track?

Quality:

Quality of aggregates

=> Difficult to define

Front-End

Front-End: Parameters

Time-to-First-Byte

Parse HTML / Parse CSS / Parse JS

Front-End: Parameters

Execute Javascript

Document.Ready() -> More JS

Window.Load() -> More JS

Front-End: Parameters First load time - no assets cached

Cache Hits vs. Cache Misses (JS/CSS files)

=> Quality of Aggregation

Number of bytes transferred

Front-End: Parameters Perceived performance of a page:

e.g. Facebook Big Pipe

Load time of 3.5s (!) for the front page

Due to BigPipe => After 200 ms page is loaded with placeholders.

Try out: ?big_pipe=singleflush

Front-End: How to measure? HAR (HTTP Archive 1.2) files

Browser-Timing API

Webpagetest.org / Google Page Speed

Front-End: Need more tools! NEED: Realistic scenarios

WITH: Consecutive requests

To determine e.g. aggregation quality
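Such tooling could start small. A minimal sketch (in Python for brevity; summarize_har is a hypothetical helper, assuming a HAR 1.2 file as exported by browser dev tools):

```python
import json

def summarize_har(path):
    """Summarize a HAR 1.2 file: request count, bytes transferred, total time."""
    with open(path) as f:
        entries = json.load(f)["log"]["entries"]
    return {
        "requests": len(entries),
        # bodySize is -1 when unknown, so clamp negatives to 0
        "bytes": sum(max(e["response"]["bodySize"], 0) for e in entries),
        "total_time_ms": sum(e["time"] for e in entries),
    }
```

Run over consecutive requests of the same scenario, the byte counts make aggregation quality directly comparable between test runs.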

Back-End

Back-End: Parameters

Page Execution Time

Page Size (Bytes transferred)

Back-End: Parameters

Number of PHP function calls

File system accesses (stat, open, readdir)

Back-End: Parameters

Render-Cache: Cache Hits vs. Cache Misses

Rebuilding time of important caches

Back-End: How to measure? XHProf-Kit / XHProf-Lib

Upload runs online

Apache Benchmark

X Debug

Back-End: How to measure? Symfony Debug Console

strace

NEED: Internal introspection for quality of caches.

XHProf-Kit

=== baseline-8.x..8.x compared (5173f49dd982a..5172adc3e447f):

ct  :     47,077 |     47,077 |      0 |  0.0%
wt  :    396,923 |    397,915 |    992 |  0.2%
cpu :    370,881 |    370,295 |   -586 | -0.2%
mu  : 30,347,736 | 30,349,184 |  1,448 |  0.0%
pmu : 30,472,168 | 30,475,144 |  2,976 |  0.0%

=== baseline-8.x..core--issueno compared (5172ad6a9a6b6..5172ae167ba6d):

ct  :     47,077 |     47,207 |    130 |  0.3%
wt  :    396,923 |    399,899 |  2,976 |  0.7%
cpu :    370,881 |    373,186 |  2,305 |  0.6%
mu  : 30,347,736 | 30,440,776 | 93,040 |  0.3%
pmu : 30,472,168 | 30,564,656 | 92,488 |  0.3%

---

ct = function calls, wt = wall time, cpu = CPU time used, mu = memory usage, pmu = peak memory usage
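The diff columns in that output are simple arithmetic. A sketch (compare_runs is a hypothetical helper; metric names follow XHProf's ct/wt/cpu/mu/pmu):

```python
def compare_runs(baseline, run):
    """Compare two XHProf-style metric dicts.

    Returns per metric: (baseline, new, absolute diff, relative diff in %).
    """
    out = {}
    for metric, base in baseline.items():
        new = run[metric]
        out[metric] = (base, new, new - base, 100.0 * (new - base) / base)
    return out
```

For example, wall time of 396,923 vs. 399,899 microseconds gives a diff of 2,976, i.e. roughly the 0.7% shown above.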

Database

Database: Parameters

Average request response time

Number of total queries

Database: Parameters

Grouped by DB operation

e.g. 1% INSERT queries

e.g. 10,000 INSERT queries
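Grouping by operation can be sketched by counting the leading SQL verb of each logged query (query_mix is a hypothetical helper, not part of any Drupal tool):

```python
from collections import Counter

def query_mix(queries):
    """Group logged queries by their leading SQL verb.

    Returns {verb: (count, percentage of all queries)}.
    """
    ops = Counter(q.lstrip().split(None, 1)[0].upper() for q in queries)
    total = sum(ops.values())
    return {op: (n, 100.0 * n / total) for op, n in ops.items()}
```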

Database: Parameters

Number of slow queries

Number of queries without indexes

Database: Parameters

e.g. Query Cache quality

e.g. InnoDB Cache quality

Database: How to measure? Symfony Debug Console

Slow Query log

Percona: pt-query-digest

Database: How to measure?

MySQLTuner (mysqltuner.pl)

NEED: Further introspection tools

Remote Services

Remote Services: Parameters e.g. Memcache or Remote API call

Average request response time

Number of total requests

Remote Services: Parameters Number of operations per request type

e.g. 40,000 Memcache::set()

e.g. 8% PUT operations

Remote Services: How to measure? XHProf? Works somewhat.

NEED: Introspection tools for remote services

1) Only OSS solutions shown

Scalability

Scalability: Parameters Caveat: Depends heavily on infrastructure

e.g. Usual Large-Scale-Drupal Structure:

SSL termination servers

Varnish Cache servers

Scalability: Parameters Caveat: Depends heavily on infrastructure

e.g. Usual Large-Scale-Drupal Structure:

Web-Heads with Apache + OpCache

1 MySQL master, MySQL slaves

Scalability: Parameters Caveat: Depends heavily on infrastructure

e.g. Usual Large-Scale-Drupal Structure:

Memcache / Redis servers

Apache Solr servers

Scalability: Parameters Maximum memory needed for PHP

MaxClients
 = Server Memory Size / PHP Memory

e.g. 20-24 clients for 8 GB RAM with a 300 MB PHP memory_limit
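The rule of thumb above can be written as a tiny calculation (the 1 GB reserved for the OS and other services is an assumption, not from the slides):

```python
def safe_max_clients(server_ram_mb, php_mem_mb, reserved_mb=1024):
    """Estimate a safe Apache MaxClients value.

    RAM left after the OS and other services, divided by the
    per-process PHP memory limit.
    """
    return (server_ram_mb - reserved_mb) // php_mem_mb
```

With 8 GB RAM and a 300 MB limit this gives 23, matching the 20-24 range above.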

Scalability: Parameters Maximum memory needed for PHP

ApacheBuddy.pl script to automatically get the safe MaxClients value

https://github.com/gusmaskowitz/apachebuddy.pl

Scalability: How to measure? External:

JMeter - load testing

HAR - load testing

Behat - load testing

Scalability: How to measure? Heavily differs based on Scenario / Website

e.g. Static page served via Varnish == thousands of pages per second

Authenticated user == 3-4 pages per second

Scalability: How to measure? Core likely does not match most client sites for which scalability matters.

=> Not feasible for Core

3. Problems of Performance Tracking

Installation of Drupal

NEED: A way to install Drupal reliably.

Setting up a scenario

NEED: A way to create users / nodes / terms / files.

devel_generate has been broken for many versions of Drupal 8.

Measuring speed XHProf-Kit: Huge fluctuations

Not possible with VMs

Requirement: Dedicated Server

Requirement: High number of runs

Problem: Statistics Minimum vs. Average

Need 6 data points per XHProf run across many tries

(Min, Max, Avg, Median, 95 pct., 5 pct.)
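Computing the six data points is straightforward. A sketch using a simple nearest-rank percentile (the percentile method is an assumption; other definitions differ slightly):

```python
import statistics

def six_points(samples):
    """Min, max, average, median, 5th and 95th percentile of timing samples."""
    s = sorted(samples)
    # Nearest-rank percentile via integer arithmetic, clamped to the last index.
    pct = lambda p: s[min(len(s) - 1, p * len(s) // 100)]
    return {
        "min": s[0], "max": s[-1],
        "avg": statistics.mean(s), "median": statistics.median(s),
        "p05": pct(5), "p95": pct(95),
    }
```

Reporting all six (instead of only the minimum, as XHProf-Kit does online today) makes the fluctuation between runs visible.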

How to report data?

XHProf-Kit: Upload runs to a server

Only minimum available online

Should output all 6 data points

How to report data? Front-End: Save HAR files during test suite

Afterwards based on static HTML pages (VTD branch)

Problem: Many similar results

How to report data?

NEED: Core introspection - ???

4. Possible Solutions for Drupal Core

Use the test suite, Luke

Collect data during the test suite

Lots of data is generated - let's use it!

Collect data during the test suite Collect per unique request URI (based on hash of request):

Timing

XHProf-run

HTML output and assets

NEED: Collect core introspection data
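Keying the collected data by a hash of the request can be sketched like this (request_key is a hypothetical helper):

```python
import hashlib

def request_key(method, uri, body=b""):
    """Stable key for 'the same request' across test runs.

    Hash of method, URI and body, so timings, XHProf runs and HTML
    output can be compared run-over-run under one key.
    """
    h = hashlib.sha256()
    h.update(method.encode())
    h.update(uri.encode())
    h.update(body)
    return h.hexdigest()[:16]
```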

Collect data during the test suite Problem: Unique request URI?

VTD-Branch: For visual regression testing

DeterministicRandom / Monotonic Clock service

=> A test run always produces the same request data
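The idea of such a deterministic environment can be illustrated with a seeded RNG plus a fake clock (DeterministicEnv is a hypothetical sketch, not the actual VTD-branch services):

```python
import random

class DeterministicEnv:
    """Seeded RNG plus a fake, monotonically advancing clock.

    Every test run produces identical 'random' data and timestamps,
    so request output is byte-for-byte comparable between runs.
    """
    def __init__(self, seed=42, start=1_400_000_000):
        self.rng = random.Random(seed)
        self.now = start

    def time(self):
        self.now += 1  # each call advances the fake clock by one second
        return self.now

    def token(self):
        return "".join(self.rng.choice("abcdef0123456789") for _ in range(8))
```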

Use the Test Suite Database: Collect slow query log after test suite

Front-End: Collect HAR data after test suite, but fake TTFB based on recorded data

Back-End: Compare XHProf averages per request URL

Behat

Behat to the rescue! A behat test suite / scenario setup:

Represents a real user using a real browser

Can install core reliably

Can pretty much create users / articles / nodes / etc.

Behat to the rescue!

Behat can also collect HAR data while it runs

Behat can be used for load testing - when using just the JavaScript API

Introspection in Core

Introspection in Core We have the data! Let's use it!

Number of Queries

Average $service request time

Cache Quality

...

Introspection in Core Let’s create the framework contrib can use!

Deterministic Random

time() => Drupal::time()

Collection of statistics

Introspection in Core PerformanceCollector Service for avg./ min./max.?

Cache Quality Service for hits/misses?

Injected into main services

Collect data per request

Report via Symfony Debug Toolbar
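Such a cache-quality service might look roughly like this (CacheQualityCollector is a hypothetical sketch, not a proposed core API):

```python
class CacheQualityCollector:
    """Per-request cache statistics: hits, misses and hit ratio,
    grouped by cache bin (e.g. 'render', 'discovery')."""

    def __init__(self):
        self.stats = {}  # bin name -> (hits, misses)

    def record(self, bin_name, hit):
        h, m = self.stats.get(bin_name, (0, 0))
        self.stats[bin_name] = (h + 1, m) if hit else (h, m + 1)

    def hit_ratio(self, bin_name):
        h, m = self.stats.get(bin_name, (0, 0))
        return h / (h + m) if h + m else 0.0
```

Injected into the cache backends, the per-bin hit ratios could then be reported per request via the debug toolbar.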


Introspection in Core Can create a Profiler purely in PHP!

http://kpayne.me/2013/12/24/write-your-own-code-profiler-in-php/

No extension needed!
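The linked post shows this in PHP; an analogous pure-language profiler can be sketched in Python with sys.setprofile (no extension needed here either; profile_calls is a hypothetical helper):

```python
import sys
import time
from collections import defaultdict

def profile_calls(func, *args):
    """Count calls and inclusive wall time per function while running func."""
    counts = defaultdict(int)
    starts, totals = {}, defaultdict(float)

    def tracer(frame, event, arg):
        name = frame.f_code.co_name
        if event == "call":
            counts[name] += 1
            starts[id(frame)] = time.perf_counter()
        elif event == "return":
            t0 = starts.pop(id(frame), None)
            if t0 is not None:
                totals[name] += time.perf_counter() - t0

    sys.setprofile(tracer)
    try:
        result = func(*args)
    finally:
        sys.setprofile(None)
    return result, dict(counts), dict(totals)
```

This mirrors XHProf's ct and wt metrics for a single call tree, with the usual caveat that tracing itself adds overhead.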

Introspection in Core Can create a Profiler purely in PHP!

http://kpayne.me/2013/12/24/write-your-own-code-profiler-in-php/

Create Button in Symfony Debug Bar

=> Run request again and profile.

SensioLabs Profiler Free for OSS projects

Has an API, collects data over time

Could be used for automated performance tracking

Introspection in Core How can we get started?

1. Collect the data! - web profiler in core!

2. Report the data!

3. Report the data over time!

Introspection in Core

How can we get started?

Empower Contrib to extend and use the introspection services.

Introspection in Core So much data!

So many possibilities!

Let’s make the performance gate enforceable

At least for Drupal 9!

5. Thank you! Questions?

Contact me Twitter: @fabianfranz

Drupal: Fabianx

Senior Performance Engineer /
 Technical Lead

@Tag1 Consulting Inc.

WHAT DID YOU THINK? EVALUATE THIS SESSION - AMSTERDAM2014.DRUPAL.ORG/SCHEDULE

THANK YOU!
