Fig. 7. Effect of data size on transfer learning's performance in terms of (a) NPRE and (b) accuracy.

which was an Intel Core i9-9900K workstation running Ubuntu 18.04 with 64 GB of RAM and an NVIDIA GeForce RTX 2080 GPU. As a reference point, training Deme with the KING dataset took 17 hours. Please note that this is training time, which is incurred only once; the prediction itself is instantaneous. Table 2 lists the results, including accuracy and the Ninetieth Percentile of Relative Error (NPRE). NPRE is commonly used by researchers to report network delay measurement efficiency: an NPRE of 0.33 means that 90% of the relative errors, (Actual - Predicted)/Actual, are below 0.33. It is evident that the accuracy of Deme with transfer learning on the expanded dataset is the best, at 93%, and the only one above the required 90% accuracy. It is also interesting to note that Deme with no changes actually performed better than Deme trained from scratch with the new data. The reason is that deep learning models require a large number of data points, usually in the millions, to achieve good performance; since the Swarmio dataset is quite small compared to KING, training from scratch did not achieve good results. Fig. 6 presents the distribution of the relative errors for all approaches with the expanded Swarmio dataset.
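To make the NPRE definition above concrete, the metric can be computed as a straightforward percentile of the per-sample relative errors. The following is a minimal sketch (not the authors' implementation); the function name `npre` and the sample delay values are illustrative assumptions.

```python
import numpy as np

def npre(actual, predicted):
    """Ninetieth Percentile of Relative Error:
    90% of the relative errors (Actual - Predicted)/Actual
    fall at or below the returned value."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rel_err = (actual - predicted) / actual
    return np.percentile(rel_err, 90)

# Hypothetical measured vs. predicted delays (e.g., in milliseconds).
actual = [100.0, 80.0, 120.0, 50.0]
predicted = [90.0, 85.0, 100.0, 45.0]
score = npre(actual, predicted)
```

An NPRE of 0.33, as in the example from the text, would mean the 90th percentile of these relative errors is 0.33.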