Today, websites use a significant amount of dynamic content: rolling testimonials, A/B tests on pricing or messaging, news and social feeds, ads and banners, and more. Tracking 20 competitors' websites, including all relevant webpages (25 per competitor on average), can mean sifting through up to 1,000 updates every day.
However, most of those daily updates do not represent new strategic opportunities or threats. Most are simply "noise" generated by the dynamic nature of today's website CMS platforms.
Kompyte's competitive intelligence automation engine automatically filters out 99% of these updates and surfaces only those with high strategic value.
How? Kompyte uses a 4-step process to analyze every tracked URL. This process is executed in a distributed manner across hundreds of servers, in a highly synchronized and scalable way.
Crawler: Web scraping using high-level human simulation
Kompyte analyzes every URL as if it were a human with a browser, using an in-house engine built on open-source browsers. Beyond loading a static web page, our engine intelligently detects and executes the same actions a human would perform on a page, such as pressing buttons, opening menus and modal windows, accepting terms, etc. Executing these human-like actions lets us surface content that is not directly visible on the page but may contain strategically relevant updates.
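To illustrate the idea (this is a simplified sketch, not Kompyte's actual crawler), a human-simulation crawler first has to identify which elements on a page hide content until they are activated. The tag and attribute lists below are hypothetical heuristics:

```python
from html.parser import HTMLParser

# Hypothetical heuristics: tags and attributes that typically hide
# content behind a click (menus, modals, "accept terms" buttons).
INTERACTIVE_TAGS = {"button", "select", "details"}
INTERACTIVE_ATTRS = {"onclick", "data-toggle", "data-modal", "aria-haspopup"}

class ActionFinder(HTMLParser):
    """Collects elements a human-like crawler would need to activate."""
    def __init__(self):
        super().__init__()
        self.actions = []

    def handle_starttag(self, tag, attrs):
        attr_names = {name for name, _ in attrs}
        if tag in INTERACTIVE_TAGS or attr_names & INTERACTIVE_ATTRS:
            self.actions.append((tag, dict(attrs)))

def find_actions(html: str):
    parser = ActionFinder()
    parser.feed(html)
    return parser.actions

page = """
<nav><a href="/plans" data-toggle="dropdown">Plans</a></nav>
<button id="accept-terms">Accept terms</button>
<div class="testimonial">Static text, no action needed.</div>
"""
for tag, attrs in find_actions(page):
    print(tag, attrs)
```

A real engine would then execute each detected action in a headless browser and re-capture the page, so the content revealed by the click becomes part of the snapshot.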
Semantic Predictor: Using Machine Learning to understand the semantics of the content
After obtaining all the content from a page, Kompyte runs three powerful in-house neural networks that analyze it from three different perspectives: visual, content-based, and HTML-based. The goal of these networks is to semantically classify the content of the website in order to identify relevant changes.
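Conceptually, each of the three perspectives yields a relevance score for a page zone, and the scores are combined into one semantic verdict. The weights, threshold, and scores below are illustrative stand-ins, not Kompyte's actual networks or parameters:

```python
# Illustrative ensemble: combine per-zone scores from the three
# perspectives (visual, content-based, HTML-based) into one label.
WEIGHTS = {"visual": 0.3, "content": 0.5, "html": 0.2}
THRESHOLD = 0.5

def classify_zone(scores: dict) -> str:
    """Weighted vote across the three perspectives."""
    combined = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    return "strategic" if combined >= THRESHOLD else "noise"

# A pricing table that scores high on all three perspectives...
print(classify_zone({"visual": 0.9, "content": 0.8, "html": 0.7}))  # strategic
# ...versus a rotating testimonial widget.
print(classify_zone({"visual": 0.6, "content": 0.2, "html": 0.1}))  # noise
```

Using three independent perspectives makes the classification robust: a redesign may change the HTML and visuals of a zone while its content, and therefore its semantic role, stays the same.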
Change Detector: Multidimensional change detector capable of understanding change patterns
Our engine then uses the semantically classified data to identify updates in strategically relevant web zones. Next, a multidimensional comparison algorithm compares each page with a number of previous versions, learns its patterns of change, and adapts to A/B tests, dynamic content, the frequency of new posts, etc.
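The key insight can be sketched in a few lines (again, a hypothetical simplification of the real algorithm): a zone that differs in nearly every historical snapshot is dynamic noise, while a change in a historically stable zone is worth flagging. The zone names and the noise threshold here are invented for illustration:

```python
# Zones changing in >=80% of past snapshot pairs are treated as
# "dynamic" (ads, rotating testimonials) and ignored; this threshold
# is an illustrative stand-in for a learned change pattern.
NOISE_RATE = 0.8

def detect_changes(current: dict, history: list) -> list:
    """Flag zones that changed now but were historically stable."""
    flagged = []
    for zone, text in current.items():
        past = [snap.get(zone) for snap in history]
        pairs = list(zip(past, past[1:]))
        change_rate = sum(1 for a, b in pairs if a != b) / max(len(pairs), 1)
        if text != past[-1] and change_rate < NOISE_RATE:
            flagged.append(zone)
    return flagged

history = [
    {"pricing": "$49/mo", "testimonial": "Great tool! - Ana"},
    {"pricing": "$49/mo", "testimonial": "Love it! - Ben"},
    {"pricing": "$49/mo", "testimonial": "Superb! - Cai"},
]
current = {"pricing": "$59/mo", "testimonial": "Wow! - Dee"}
print(detect_changes(current, history))  # ['pricing']
```

The price change is flagged because the pricing zone was stable across every previous version, while the testimonial zone, which changed every time, is filtered out as noise.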
Semantic Extractor: Analysis of the semantics of change
Using the output of our semantic predictor, our algorithm can in most cases understand the structure of an update, compare it to thousands of previously processed samples, and explain it to the user in a clear, easy-to-understand way.


