This guide covers website and integration testing, including setup, issue reporting with Marker.io, bug writing, issue types, prioritisation, and client feedback.
Overview
Web-based testing of our websites, web apps, mobile apps and CMS should go through the following stages:
- Define & Maintain Test Scripts - Solutions
- Set-up Environment & Demo Data - CRM
- Set-up Marker.io - CMS (Web Only) / Set-up Sentry - Custom Dev (Integration/Custom Dev Only)
- Book in and Brief Internal Testing - PM
- Execute Test Scripts - Tester
- Raise Issues - Tester
- Resolve Issues - CMS/CRM/Custom Dev
- Retest resolved issues - Tester
- Book in and Brief UAT - PM
- Execute Test Scripts - Client
- Raise Issues - Client
- Prioritise Issues for resolution / identify changes - PM
- Resolve Issues - CMS/CRM/Custom Dev
- Retest resolved issues - Client
- Final report - PM
- Go/No Go - PM
Test Scripts
Completed by Solutions
Define Objectives
Identify what you want to test (e.g., a specific feature, user journey, or functionality).
Set clear goals for the test (e.g., ensure the checkout process is error-free).
Understand the Application
Familiarise yourself with the website's layout, features, and user flows.
Note any areas of complexity or potential vulnerabilities.
Identify Test Cases
Create a list of actions and scenarios to test (e.g., logging in, completing a purchase).
Include both typical user behaviours and edge cases.
Write Test Steps
For each test case, write step-by-step instructions on how to execute it.
Include expected outcomes for each step (one way to encode steps and outcomes is sketched below).
Include Test Data
Specify any data needed to execute the test (e.g., user credentials, product details).
Ensure this data is representative and diverse.
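Where test scripts will be automated, the steps and their expected outcomes can be encoded directly. Below is a minimal sketch using Playwright's test.step; the URL, product name and button labels are illustrative placeholders, not references to a real site.

```typescript
import { test, expect } from '@playwright/test';

// Each test.step mirrors one scripted step; the assertion is its expected outcome.
test('checkout: guest user can add a product to the basket', async ({ page }) => {
  await test.step('Open the product page', async () => {
    await page.goto('https://staging.example.com/products/demo-widget'); // placeholder URL
    await expect(page.getByRole('heading', { name: 'Demo Widget' })).toBeVisible();
  });

  await test.step('Add the product to the basket', async () => {
    await page.getByRole('button', { name: 'Add to basket' }).click();
    // Expected outcome: the basket count updates.
    await expect(page.getByText('1 item in your basket')).toBeVisible();
  });
});
```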
Add Demo Data
Completed by CRM
Create Demo Data as per the Test Scripts.
Ensure this data is representative and diverse.
Ensure that data is created in a similar way to how it will be done by end users.
Ensure there are enough records for internal testing and client testing and retesting.
Tag all test records as tests, and ensure titles are clearly labelled as tests (as in the sketch below).
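If demo data is seeded programmatically, the HubSpot API client can create clearly labelled test records. A minimal sketch follows, assuming a private app token in HUBSPOT_TOKEN; the email address is a placeholder and the test_record property is a hypothetical custom tagging field your portal may not have.

```typescript
import { Client } from '@hubspot/api-client';

const hubspot = new Client({ accessToken: process.env.HUBSPOT_TOKEN });

// Create a contact that is unmistakably test data: labelled title and test tag.
async function createTestContact(): Promise<void> {
  await hubspot.crm.contacts.basicApi.create({
    properties: {
      email: 'uat.tester+001@example.com', // placeholder address
      firstname: '[TEST] Jane',
      lastname: 'Doe',
      test_record: 'true', // hypothetical custom property used for tagging
    },
    associations: [], // no associations needed for standalone demo records
  });
}

createTestContact().catch(console.error);
```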
Set-up Marker.io
Completed by CMS
Install the Marker.io widget on your website (a sketch of the embed pattern follows below).
Link the widget to your GitHub repository for direct issue tracking.
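For reference, the embed generally follows the pattern below: a small config object plus the Marker.io loader script. This is a sketch only; always copy the exact snippet, and your real project/destination ID, from the Marker.io dashboard.

```typescript
// Sketch of the Marker.io embed pattern; copy the real snippet from your
// Marker.io project settings rather than using this verbatim.
declare global {
  interface Window {
    markerConfig?: { project: string; source: string };
  }
}

window.markerConfig = {
  project: 'YOUR_PROJECT_ID', // placeholder destination ID from Marker.io
  source: 'snippet',
};

// Load the widget asynchronously so it does not block page rendering.
const shim = document.createElement('script');
shim.async = true;
shim.src = 'https://edge.marker.io/latest/shim.js';
document.head.appendChild(shim);

export {};
```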
Set-up Sentry
Completed by Custom Dev
Add the Sentry code to the integration (it should already be there if the integration was created from the cookiecutter template).
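As an illustration, a Node-based integration initialises Sentry roughly as below; the cookiecutter should already include the equivalent. The DSN is a placeholder: use the project's DSN from the Sentry dashboard, and match the SDK to the integration's language.

```typescript
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN
  environment: 'staging',  // keeps test-phase noise separate from production
  tracesSampleRate: 0.1,   // sample 10% of transactions for performance data
});

// Unhandled exceptions are reported automatically; handled ones can be sent explicitly.
try {
  throw new Error('Sentry smoke test');
} catch (err) {
  Sentry.captureException(err);
}
```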
Book in and Brief Internal Testing
Completed by PM
Book in the Tester.
Brief in the scripts (with Solutions and CMS where applicable).
Ensure they have access to Marker.io, GitHub, the HubSpot portal and any other access requirements, e.g. memberships.
Ensure they have access to the relevant environments and devices.
Execute Test Scripts & General Testing
Completed by Tester
Set Up Environments
Prepare the testing environment, e.g. browsers (Chrome, Edge, Firefox) and device types.
Ensure it matches real-world user conditions.
Install the Marker.io Browser Extension.
General Testing
Before logging anything, review Marker.io's guide to the perfect bug report: a simple checklist of the essential items to include in your bug reports.
Functionality Testing (a Playwright sketch of several of these checks appears after this list):
- Check all links (internal, external, mailto:), including header and footer:
  - Links open appropriately (same window, new window).
  - Links go to the appropriate page.
- Validate forms:
  - Required values.
  - Default values.
  - Error messages.
  - Double entry.
  - Valid values (email, phone).
  - Submit every character in every field.
  - Submit code, check if it is captured or stripped out.
  - Progressive profiling (if used).
  - Prefilled data on reuse.
- Test cookies (if used):
  - Login sessions.
  - Personalisation.
- Validate multi-page processes:
  - Click back and forward browser functions.
  - Click back and forward browser functions while submitting the form in between back and forward actions.
  - Click previous, next and step navigation (where applicable).
  - Click previous, next and step navigation while submitting the form between steps (where applicable).
  - Close and reopen the tab.
  - Refresh the tab.
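A minimal Playwright sketch of the link and form checks above. The base URL, field labels and messages are placeholder assumptions about the site under test, not a real configuration.

```typescript
import { test, expect } from '@playwright/test';

const BASE_URL = 'https://staging.example.com'; // placeholder site under test

test('internal links on the home page resolve', async ({ page, request }) => {
  await page.goto(BASE_URL);
  const hrefs = await page
    .locator('a[href]')
    .evaluateAll(anchors => anchors.map(a => (a as HTMLAnchorElement).href));
  for (const href of new Set(hrefs.filter(h => h.startsWith(BASE_URL)))) {
    const response = await request.get(href);
    expect(response.status(), href).toBeLessThan(400); // no broken 4xx/5xx links
  }
});

test('required fields block an empty submission', async ({ page }) => {
  await page.goto(`${BASE_URL}/contact`);
  await page.getByRole('button', { name: 'Submit' }).click();
  // Assumed validation message; adjust to the site's actual copy.
  await expect(page.getByText('This field is required')).toBeVisible();
});

test('form survives back/forward after submission', async ({ page }) => {
  await page.goto(`${BASE_URL}/contact`);
  await page.getByLabel('Email').fill('uat.tester+001@example.com');
  await page.getByRole('button', { name: 'Submit' }).click();
  await page.goBack();
  await page.goForward();
  // Expected outcome: the confirmation still renders, with no duplicate submission.
  await expect(page.getByText(/thank you/i)).toBeVisible();
});
```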
Usability Testing:
- Assess navigation (menus, buttons, links).
- Ensure content readability and clarity.
- Check consistency of design and layout.
Compatibility Testing:
- Browser compatibility (Chrome, Firefox, Edge: latest stable versions, not betas). A multi-browser config is sketched below.
- Operating system compatibility (Windows, macOS).
- Responsive behaviour: resize the window; check mobile, tablet, 1920x1080 and very wide layouts; check the interface is finger-friendly and that mobile-specific features work.
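One way to cover the browser and viewport matrix in one suite is Playwright projects, as in this sketch. Device names come from Playwright's built-in registry; the ultra-wide viewport is an arbitrary example.

```typescript
import { defineConfig, devices } from '@playwright/test';

// Each project runs the same test suite against a different browser or viewport.
export default defineConfig({
  projects: [
    { name: 'chrome',  use: { ...devices['Desktop Chrome'], channel: 'chrome' } },
    { name: 'edge',    use: { ...devices['Desktop Edge'], channel: 'msedge' } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'mobile',  use: { ...devices['iPhone 13'] } }, // finger-friendly checks
    { name: '1080p',   use: { viewport: { width: 1920, height: 1080 } } },
    { name: 'wide',    use: { viewport: { width: 2560, height: 1080 } } },
  ],
});
```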
Performance Testing:
- Page load times under various conditions (a simple load-time budget check is sketched below).
- Check site behaviour under different internet speeds.
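Formal load testing needs dedicated tooling, but a coarse load-time budget can be asserted in the same suite. A sketch, with a placeholder URL and an arbitrary 3-second budget:

```typescript
import { test, expect } from '@playwright/test';

test('home page loads within the agreed budget', async ({ page }) => {
  const start = Date.now();
  await page.goto('https://staging.example.com/', { waitUntil: 'load' }); // placeholder URL
  const elapsedMs = Date.now() - start;
  expect(elapsedMs, `loaded in ${elapsedMs}ms`).toBeLessThan(3_000); // illustrative budget
});
```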
Security Testing:
- Validate SSL certificates.
- Test for SQL injection, XSS and other common vulnerabilities (a basic reflected-XSS smoke test is sketched below).
- Check for secure data transmission (login, payment information, where applicable).
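Automated checks are no substitute for a proper security audit, but an obvious reflected-XSS regression can be smoke-tested. A sketch, reusing the placeholder contact form from above:

```typescript
import { test, expect } from '@playwright/test';

test('submitted script tags are escaped, not executed', async ({ page }) => {
  const payload = '<script>document.title = "xss"</script>';
  await page.goto('https://staging.example.com/contact'); // placeholder URL
  await page.getByLabel('Name').fill(payload);
  await page.getByRole('button', { name: 'Submit' }).click();
  // If the payload ran, the page title would have been rewritten to "xss".
  await expect(page).not.toHaveTitle('xss');
});
```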
SEO Testing:
- Ensure proper use of meta tags (a basic meta-tag check is sketched below).
- Check for alt text on images.
- Validate HTML/CSS for SEO friendliness.
- Run HubSpot tests.
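Basic meta-tag and alt-text checks can also be automated. A sketch against the placeholder staging URL:

```typescript
import { test, expect } from '@playwright/test';

test('key SEO elements are present', async ({ page }) => {
  await page.goto('https://staging.example.com/'); // placeholder URL
  await expect(page).toHaveTitle(/.+/); // non-empty <title>
  await expect(page.locator('meta[name="description"]')).toHaveAttribute('content', /.+/);
  // Decorative images may legitimately use alt="", but the attribute must exist.
  expect(await page.locator('img:not([alt])').count()).toBe(0);
});
```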
Content and Accessibility Testing:
- Verify all text for spelling and grammar errors.
- Verify spelling conventions match the target region.
- Verify date and number formats are accurate for the region.
- Ensure compliance with accessibility standards (WCAG, ADA); an automated first-pass check is sketched below.
- Test for screen reader compatibility and keyboard navigation.
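Automated scanners catch only a subset of WCAG issues, but they make a useful first pass. A sketch using the @axe-core/playwright package, with a placeholder URL:

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('page has no detectable WCAG A/AA violations', async ({ page }) => {
  await page.goto('https://staging.example.com/'); // placeholder URL
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa'])
    .analyze();
  // Manual screen reader and keyboard testing is still required.
  expect(results.violations).toEqual([]);
});
```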
Run the script, logging results and issues encountered.
Regularly update the script to reflect changes in the application or new findings.
Use the Marker.io widget to report issues, automatically capturing key details.
Issues are directly created in GitHub, complete with screenshots and browser info.
Note that Marker.io will capture screenshots, video of the last 30 seconds, device information, console logs and URL.
Mark up the screenshot to show where the issue is, describe the issue, categorise and prioritise it, and add any details that are not visible on screen.
Note that you must be logged in to Marker.io to add categorisation and other details to an issue.
Issue Types
- Functional: Broken links, errors in forms, feature malfunctions.
- Usability: Navigation problems, unclear content, UI design flaws, layout shifts, blurry images, misspellings, incorrect formatting (e.g. dates, currency).
- Performance: Slow load times, unresponsive elements.
- Security: Vulnerabilities, data breaches.
- Compatibility: Cross-browser, cross-device issues.
Prioritising Issues
- Blocker: Completely halts development or critical operations. Needs immediate resolution.
- Critical: Causes system crashes or data loss. Requires urgent attention.
- High: Severely impacts functionality or performance, no system crashes.
- Medium: Affects functionality, but there are workarounds or it's not frequently encountered.
- Low: Minor issues, limited impact on functionality and user experience.
- Trivial: Very minor, typically cosmetic issues with no impact on functionality.
- Change Request: Not a bug, but a request for new features or enhancements.
Resolving Issues
Working from the highest priority down, developers assign issues to themselves in GitHub.
Once work starts on an issue, its status is updated to 'In Progress'.
Comments and status updates are added to the issue in GitHub as required.
On resolution, the status is set to 'Ready for Testing' and a comment is added for the Tester (or the PM, if the issue was raised by the client) to retest.
Retesting & Resolution
The Tester or PM retests the issue and reruns the relevant area of the test script to ensure the fix has not introduced other issues.
If the issue was raised by a client, the PM will, on successful retesting, ask the client to confirm the resolution.
Retested and resolved items will be set to 'Resolved' by the Tester/PM in GitHub.
Any confirmation from the client will be added as a comment by the PM in GitHub.
Any items that fail retesting will be set to 'Re-opened' by the Tester/PM in GitHub, with comments explaining why.
Ongoing Issue Reporting
The PM should monitor ongoing progress in GitHub: ensure new items are being prioritised and worked on, follow up with the Tester/Client on areas that need additional feedback, raise change requests where required, and keep the client informed.
A final report on total raised, resolved, deferred and won't-do items should be provided at the end of testing to support Go/No Go decisions and launch.