node0 0.000ns 2025-10-21 17:47:04.395 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node0 85.000ms 2025-10-21 17:47:04.480 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node0 100.000ms 2025-10-21 17:47:04.495 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node0 214.000ms 2025-10-21 17:47:04.609 4 INFO STARTUP <main> Browser: The following nodes [0] are set to run locally
node0 219.000ms 2025-10-21 17:47:04.614 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node0 233.000ms 2025-10-21 17:47:04.628 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node3 458.000ms 2025-10-21 17:47:04.853 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node3 547.000ms 2025-10-21 17:47:04.942 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node3 563.000ms 2025-10-21 17:47:04.958 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node0 646.000ms 2025-10-21 17:47:05.041 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node0 646.000ms 2025-10-21 17:47:05.041 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node3 678.000ms 2025-10-21 17:47:05.073 4 INFO STARTUP <main> Browser: The following nodes [3] are set to run locally
node3 683.000ms 2025-10-21 17:47:05.078 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node3 696.000ms 2025-10-21 17:47:05.091 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node1 781.000ms 2025-10-21 17:47:05.176 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node1 870.000ms 2025-10-21 17:47:05.265 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node1 886.000ms 2025-10-21 17:47:05.281 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node1 1.003s 2025-10-21 17:47:05.398 4 INFO STARTUP <main> Browser: The following nodes [1] are set to run locally
node1 1.009s 2025-10-21 17:47:05.404 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node1 1.023s 2025-10-21 17:47:05.418 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node3 1.122s 2025-10-21 17:47:05.517 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node3 1.123s 2025-10-21 17:47:05.518 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node2 1.173s 2025-10-21 17:47:05.568 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node2 1.265s 2025-10-21 17:47:05.660 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node2 1.282s 2025-10-21 17:47:05.677 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 1.400s 2025-10-21 17:47:05.795 4 INFO STARTUP <main> Browser: The following nodes [2] are set to run locally
node2 1.406s 2025-10-21 17:47:05.801 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node2 1.421s 2025-10-21 17:47:05.816 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node1 1.448s 2025-10-21 17:47:05.843 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node1 1.448s 2025-10-21 17:47:05.843 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node0 1.531s 2025-10-21 17:47:05.926 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 884ms
node0 1.533s 2025-10-21 17:47:05.928 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node0 1.536s 2025-10-21 17:47:05.931 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node0 1.578s 2025-10-21 17:47:05.973 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node0 1.651s 2025-10-21 17:47:06.046 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node0 1.653s 2025-10-21 17:47:06.048 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node2 1.841s 2025-10-21 17:47:06.236 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node2 1.843s 2025-10-21 17:47:06.238 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node3 1.996s 2025-10-21 17:47:06.391 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 872ms
node3 1.997s 2025-10-21 17:47:06.392 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node3 2.000s 2025-10-21 17:47:06.395 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node3 2.037s 2025-10-21 17:47:06.432 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node3 2.096s 2025-10-21 17:47:06.491 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node3 2.097s 2025-10-21 17:47:06.492 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node1 2.530s 2025-10-21 17:47:06.925 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1081ms
node1 2.532s 2025-10-21 17:47:06.927 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node1 2.535s 2025-10-21 17:47:06.930 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node1 2.578s 2025-10-21 17:47:06.973 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node1 2.639s 2025-10-21 17:47:07.034 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node1 2.640s 2025-10-21 17:47:07.035 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node2 2.727s 2025-10-21 17:47:07.122 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 884ms
node2 2.729s 2025-10-21 17:47:07.124 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node2 2.732s 2025-10-21 17:47:07.127 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 2.777s 2025-10-21 17:47:07.172 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node2 2.839s 2025-10-21 17:47:07.234 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node2 2.839s 2025-10-21 17:47:07.234 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node4 3.199s 2025-10-21 17:47:07.594 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node4 3.309s 2025-10-21 17:47:07.704 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node4 3.329s 2025-10-21 17:47:07.724 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 3.468s 2025-10-21 17:47:07.863 4 INFO STARTUP <main> Browser: The following nodes [4] are set to run locally
node4 3.474s 2025-10-21 17:47:07.869 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node4 3.490s 2025-10-21 17:47:07.885 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node0 3.708s 2025-10-21 17:47:08.103 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node0 3.791s 2025-10-21 17:47:08.186 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 3.793s 2025-10-21 17:47:08.188 21 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node0 3.794s 2025-10-21 17:47:08.189 22 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 3.989s 2025-10-21 17:47:08.384 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node4 3.990s 2025-10-21 17:47:08.385 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node3 4.101s 2025-10-21 17:47:08.496 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node3 4.185s 2025-10-21 17:47:08.580 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 4.187s 2025-10-21 17:47:08.582 21 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node3 4.188s 2025-10-21 17:47:08.583 22 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node0 4.583s 2025-10-21 17:47:08.978 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 4.586s 2025-10-21 17:47:08.981 33 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node0 4.592s 2025-10-21 17:47:08.987 34 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node0 4.601s 2025-10-21 17:47:08.996 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 4.603s 2025-10-21 17:47:08.998 36 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 4.823s 2025-10-21 17:47:09.218 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node2 4.872s 2025-10-21 17:47:09.267 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node1 4.915s 2025-10-21 17:47:09.310 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 4.917s 2025-10-21 17:47:09.312 21 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node1 4.918s 2025-10-21 17:47:09.313 22 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node2 4.965s 2025-10-21 17:47:09.360 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 4.967s 2025-10-21 17:47:09.362 21 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node2 4.968s 2025-10-21 17:47:09.363 22 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node3 4.974s 2025-10-21 17:47:09.369 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 4.977s 2025-10-21 17:47:09.372 33 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node3 4.982s 2025-10-21 17:47:09.377 34 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node3 4.992s 2025-10-21 17:47:09.387 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 4.994s 2025-10-21 17:47:09.389 36 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5.187s 2025-10-21 17:47:09.582 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1195ms
node4 5.189s 2025-10-21 17:47:09.584 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node4 5.192s 2025-10-21 17:47:09.587 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 5.233s 2025-10-21 17:47:09.628 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node4 5.297s 2025-10-21 17:47:09.692 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node4 5.297s 2025-10-21 17:47:09.692 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node0 5.707s 2025-10-21 17:47:10.102 37 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26646998]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=135840, randomLong=-6976817566446496365, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=11910, randomLong=1731826040034677007, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=993588, data=35, exception=null]
OS Health Check Report - Complete (took 1022 ms)
node1 5.708s 2025-10-21 17:47:10.103 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 5.711s 2025-10-21 17:47:10.106 33 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node1 5.718s 2025-10-21 17:47:10.113 34 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node1 5.728s 2025-10-21 17:47:10.123 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 5.729s 2025-10-21 17:47:10.124 36 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 5.739s 2025-10-21 17:47:10.134 38 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node2 5.741s 2025-10-21 17:47:10.136 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 5.744s 2025-10-21 17:47:10.139 33 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node0 5.746s 2025-10-21 17:47:10.141 39 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node2 5.749s 2025-10-21 17:47:10.144 34 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node0 5.751s 2025-10-21 17:47:10.146 40 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node2 5.759s 2025-10-21 17:47:10.154 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 5.761s 2025-10-21 17:47:10.156 36 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 5.849s 2025-10-21 17:47:10.244 41 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Iixteg==", "port": 30124 }, { "ipAddressV4": "CoAACg==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IhvR0w==", "port": 30125 }, { "ipAddressV4": "CoAACQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Igmj1Q==", "port": 30126 }, { "ipAddressV4": "CoAAXQ==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IhwBrA==", "port": 30127 }, { "ipAddressV4": "CoAAAg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+H5iw==", "port": 30128 }, { "ipAddressV4": "CoAABg==", "port": 30128 }] }] }
node0 5.869s 2025-10-21 17:47:10.264 42 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/0/ConsistencyTestLog.csv
node0 5.870s 2025-10-21 17:47:10.265 43 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node0 5.884s 2025-10-21 17:47:10.279 44 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: d479a3eefb4435eeea2b3a460d3043b4384da906e617a7bcad4dde964b87cb9229d0d76f189a4f1b03822abfa4e02f42
(root) ConsistencyTestingToolState / ill-liquid-nurse-saddle
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
node0 6.098s 2025-10-21 17:47:10.493 46 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node3 6.099s 2025-10-21 17:47:10.494 37 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26283149]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=120220, randomLong=-2901203578570221902, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=9520, randomLong=8706275177031578115, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1004960, data=35, exception=null]
OS Health Check Report - Complete (took 1019 ms)
node0 6.102s 2025-10-21 17:47:10.497 47 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node0 6.106s 2025-10-21 17:47:10.501 48 INFO STARTUP <<start-node-0>> ConsistencyTestingToolMain: init called in Main for node 0.
node0 6.107s 2025-10-21 17:47:10.502 49 INFO STARTUP <<start-node-0>> SwirldsPlatform: Starting platform 0
node0 6.108s 2025-10-21 17:47:10.503 50 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node0 6.112s 2025-10-21 17:47:10.507 51 INFO STARTUP <<start-node-0>> CycleFinder: No cyclical back pressure detected in wiring model.
node0 6.113s 2025-10-21 17:47:10.508 52 INFO STARTUP <<start-node-0>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node0 6.113s 2025-10-21 17:47:10.508 53 INFO STARTUP <<start-node-0>> InputWireChecks: All input wires have been bound.
node0 6.115s 2025-10-21 17:47:10.510 54 WARN STARTUP <<start-node-0>> PcesFileTracker: No preconsensus event files available
node0 6.116s 2025-10-21 17:47:10.511 55 INFO STARTUP <<start-node-0>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node0 6.116s 2025-10-21 17:47:10.511 56 INFO STARTUP <<start-node-0>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node0 6.117s 2025-10-21 17:47:10.512 57 INFO STARTUP <<app: appMain 0>> ConsistencyTestingToolMain: run called in Main.
node0 6.119s 2025-10-21 17:47:10.514 58 INFO PLATFORM_STATUS <platformForkJoinThread-2> DefaultStatusStateMachine: Platform spent 182.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node0 6.123s 2025-10-21 17:47:10.518 59 INFO PLATFORM_STATUS <platformForkJoinThread-2> DefaultStatusStateMachine: Platform spent 3.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node3 6.130s 2025-10-21 17:47:10.525 38 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node3 6.137s 2025-10-21 17:47:10.532 39 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node3 6.142s 2025-10-21 17:47:10.537 40 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node3 6.222s 2025-10-21 17:47:10.617 41 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Iixteg==", "port": 30124 }, { "ipAddressV4": "CoAACg==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IhvR0w==", "port": 30125 }, { "ipAddressV4": "CoAACQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Igmj1Q==", "port": 30126 }, { "ipAddressV4": "CoAAXQ==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IhwBrA==", "port": 30127 }, { "ipAddressV4": "CoAAAg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+H5iw==", "port": 30128 }, { "ipAddressV4": "CoAABg==", "port": 30128 }] }] }
node3 6.241s 2025-10-21 17:47:10.636 42 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/3/ConsistencyTestLog.csv
node3 6.241s 2025-10-21 17:47:10.636 43 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node3 6.255s 2025-10-21 17:47:10.650 44 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: d479a3eefb4435eeea2b3a460d3043b4384da906e617a7bcad4dde964b87cb9229d0d76f189a4f1b03822abfa4e02f42
(root) ConsistencyTestingToolState / ill-liquid-nurse-saddle
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
node3 6.449s 2025-10-21 17:47:10.844 46 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
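The initial running hash logged by `DefaultConsensusEventStream` above (`38b060...b95b`) is the SHA-384 digest of the empty byte string, i.e. the running hash before any events have been incorporated. This is easy to verify:

```python
import hashlib

# SHA-384 over zero bytes -- matches the genesis running hash in the log above
empty_digest = hashlib.sha384(b"").hexdigest()
print(empty_digest)
# 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
```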
node3 6.453s 2025-10-21 17:47:10.848 47 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node3 6.458s 2025-10-21 17:47:10.853 48 INFO STARTUP <<start-node-3>> ConsistencyTestingToolMain: init called in Main for node 3.
node3 6.458s 2025-10-21 17:47:10.853 49 INFO STARTUP <<start-node-3>> SwirldsPlatform: Starting platform 3
node3 6.459s 2025-10-21 17:47:10.854 50 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node3 6.463s 2025-10-21 17:47:10.858 51 INFO STARTUP <<start-node-3>> CycleFinder: No cyclical back pressure detected in wiring model.
node3 6.464s 2025-10-21 17:47:10.859 52 INFO STARTUP <<start-node-3>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node3 6.464s 2025-10-21 17:47:10.859 53 INFO STARTUP <<start-node-3>> InputWireChecks: All input wires have been bound.
node3 6.467s 2025-10-21 17:47:10.862 54 WARN STARTUP <<start-node-3>> PcesFileTracker: No preconsensus event files available
node3 6.467s 2025-10-21 17:47:10.862 55 INFO STARTUP <<start-node-3>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node3 6.468s 2025-10-21 17:47:10.863 56 INFO STARTUP <<start-node-3>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node3 6.469s 2025-10-21 17:47:10.864 57 INFO STARTUP <<app: appMain 3>> ConsistencyTestingToolMain: run called in Main.
node3 6.471s 2025-10-21 17:47:10.866 58 INFO PLATFORM_STATUS <platformForkJoinThread-1> DefaultStatusStateMachine: Platform spent 163.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node3 6.476s 2025-10-21 17:47:10.871 59 INFO PLATFORM_STATUS <platformForkJoinThread-1> DefaultStatusStateMachine: Platform spent 3.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node1 6.841s 2025-10-21 17:47:11.236 37 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26380689]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=266960, randomLong=7653629945644117380, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=10320, randomLong=-4201038438477636342, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1362760, data=35, exception=null]
OS Health Check Report - Complete (took 1024 ms)
node2 6.870s 2025-10-21 17:47:11.265 37 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26350871]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=234670, randomLong=9068736693684802142, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=11050, randomLong=4633790964267067246, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1259039, data=35, exception=null]
OS Health Check Report - Complete (took 1023 ms)
node1 6.873s 2025-10-21 17:47:11.268 38 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node1 6.880s 2025-10-21 17:47:11.275 39 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node1 6.886s 2025-10-21 17:47:11.281 40 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node2 6.902s 2025-10-21 17:47:11.297 38 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node2 6.910s 2025-10-21 17:47:11.305 39 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node2 6.916s 2025-10-21 17:47:11.311 40 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node1 6.970s 2025-10-21 17:47:11.365 41 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: (roster JSON byte-identical to the Current Roster dump shown above)
node1 6.991s 2025-10-21 17:47:11.386 42 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/1/ConsistencyTestLog.csv
node1 6.991s 2025-10-21 17:47:11.386 43 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node2 7.004s 2025-10-21 17:47:11.399 41 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: (roster JSON byte-identical to the Current Roster dump shown above)
node1 7.006s 2025-10-21 17:47:11.401 44 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: d479a3eefb4435eeea2b3a460d3043b4384da906e617a7bcad4dde964b87cb9229d0d76f189a4f1b03822abfa4e02f42
(root) ConsistencyTestingToolState / ill-liquid-nurse-saddle
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
node2 7.026s 2025-10-21 17:47:11.421 42 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/2/ConsistencyTestLog.csv
node2 7.027s 2025-10-21 17:47:11.422 43 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node2 7.043s 2025-10-21 17:47:11.438 44 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: d479a3eefb4435eeea2b3a460d3043b4384da906e617a7bcad4dde964b87cb9229d0d76f189a4f1b03822abfa4e02f42
(root) ConsistencyTestingToolState / ill-liquid-nurse-saddle
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
node1 7.230s 2025-10-21 17:47:11.625 46 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node1 7.234s 2025-10-21 17:47:11.629 47 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node1 7.239s 2025-10-21 17:47:11.634 48 INFO STARTUP <<start-node-1>> ConsistencyTestingToolMain: init called in Main for node 1.
node1 7.240s 2025-10-21 17:47:11.635 49 INFO STARTUP <<start-node-1>> SwirldsPlatform: Starting platform 1
node1 7.241s 2025-10-21 17:47:11.636 50 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node1 7.245s 2025-10-21 17:47:11.640 51 INFO STARTUP <<start-node-1>> CycleFinder: No cyclical back pressure detected in wiring model.
node1 7.246s 2025-10-21 17:47:11.641 52 INFO STARTUP <<start-node-1>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node1 7.247s 2025-10-21 17:47:11.642 53 INFO STARTUP <<start-node-1>> InputWireChecks: All input wires have been bound.
node1 7.248s 2025-10-21 17:47:11.643 54 WARN STARTUP <<start-node-1>> PcesFileTracker: No preconsensus event files available
node1 7.248s 2025-10-21 17:47:11.643 55 INFO STARTUP <<start-node-1>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node1 7.250s 2025-10-21 17:47:11.645 56 INFO STARTUP <<start-node-1>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node1 7.251s 2025-10-21 17:47:11.646 57 INFO STARTUP <<app: appMain 1>> ConsistencyTestingToolMain: run called in Main.
node2 7.251s 2025-10-21 17:47:11.646 46 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node1 7.254s 2025-10-21 17:47:11.649 58 INFO PLATFORM_STATUS <platformForkJoinThread-2> DefaultStatusStateMachine: Platform spent 193.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node2 7.256s 2025-10-21 17:47:11.651 47 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node1 7.259s 2025-10-21 17:47:11.654 59 INFO PLATFORM_STATUS <platformForkJoinThread-2> DefaultStatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node2 7.262s 2025-10-21 17:47:11.657 48 INFO STARTUP <<start-node-2>> ConsistencyTestingToolMain: init called in Main for node 2.
node2 7.263s 2025-10-21 17:47:11.658 49 INFO STARTUP <<start-node-2>> SwirldsPlatform: Starting platform 2
node2 7.264s 2025-10-21 17:47:11.659 50 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node2 7.267s 2025-10-21 17:47:11.662 51 INFO STARTUP <<start-node-2>> CycleFinder: No cyclical back pressure detected in wiring model.
node2 7.268s 2025-10-21 17:47:11.663 52 INFO STARTUP <<start-node-2>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node2 7.269s 2025-10-21 17:47:11.664 53 INFO STARTUP <<start-node-2>> InputWireChecks: All input wires have been bound.
node2 7.270s 2025-10-21 17:47:11.665 54 WARN STARTUP <<start-node-2>> PcesFileTracker: No preconsensus event files available
node2 7.271s 2025-10-21 17:47:11.666 55 INFO STARTUP <<start-node-2>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node2 7.272s 2025-10-21 17:47:11.667 56 INFO STARTUP <<start-node-2>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node2 7.273s 2025-10-21 17:47:11.668 57 INFO STARTUP <<app: appMain 2>> ConsistencyTestingToolMain: run called in Main.
node2 7.274s 2025-10-21 17:47:11.669 58 INFO PLATFORM_STATUS <platformForkJoinThread-1> DefaultStatusStateMachine: Platform spent 173.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node2 7.279s 2025-10-21 17:47:11.674 59 INFO PLATFORM_STATUS <platformForkJoinThread-3> DefaultStatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING
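The `DefaultStatusStateMachine` lines above follow a fixed format ("Platform spent X ms in A. Now in B"), so the startup status transitions can be pulled out of a log with a small parser. A sketch; the regex is an assumption based only on the lines shown here:

```python
import re

# Matches e.g. "Platform spent 173.0 ms in STARTING_UP. Now in REPLAYING_EVENTS"
TRANSITION = re.compile(r"Platform spent ([\d.]+) ms in (\w+)\. Now in (\w+)")

def parse_transition(line: str):
    """Return (from_status, to_status, millis) or None for non-matching lines."""
    m = TRANSITION.search(line)
    if not m:
        return None
    millis, src, dst = m.groups()
    return src, dst, float(millis)

line = ("DefaultStatusStateMachine: Platform spent 173.0 ms in STARTING_UP. "
        "Now in REPLAYING_EVENTS")
print(parse_transition(line))  # ('STARTING_UP', 'REPLAYING_EVENTS', 173.0)
```

Applied across the log, this recovers each node's STARTING_UP → REPLAYING_EVENTS → OBSERVING progression and the time spent in each status.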
node4 7.380s 2025-10-21 17:47:11.775 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node4 7.467s 2025-10-21 17:47:11.862 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 7.470s 2025-10-21 17:47:11.865 21 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node4 7.471s 2025-10-21 17:47:11.866 22 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 8.360s 2025-10-21 17:47:12.755 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 8.363s 2025-10-21 17:47:12.758 33 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node4 8.368s 2025-10-21 17:47:12.763 34 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node4 8.380s 2025-10-21 17:47:12.775 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 8.381s 2025-10-21 17:47:12.776 36 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 9.121s 2025-10-21 17:47:13.516 60 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting0.csv' ]
node0 9.124s 2025-10-21 17:47:13.519 61 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node3 9.470s 2025-10-21 17:47:13.865 60 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting3.csv' ]
node3 9.472s 2025-10-21 17:47:13.867 61 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node4 9.496s 2025-10-21 17:47:13.891 37 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26197773] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=221939, randomLong=-5191473768800200434, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=16200, randomLong=6418867952271751980, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1491216, data=35, exception=null] OS Health Check Report - Complete (took 1026 ms)
node4 9.533s 2025-10-21 17:47:13.928 38 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node4 9.542s 2025-10-21 17:47:13.937 39 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node4 9.548s 2025-10-21 17:47:13.943 40 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node4 9.651s 2025-10-21 17:47:14.046 41 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Iixteg==", "port": 30124 }, { "ipAddressV4": "CoAACg==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IhvR0w==", "port": 30125 }, { "ipAddressV4": "CoAACQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Igmj1Q==", "port": 30126 }, { "ipAddressV4": "CoAAXQ==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IhwBrA==", "port": 30127 }, { "ipAddressV4": "CoAAAg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+H5iw==", "port": 30128 }, { "ipAddressV4": "CoAABg==", "port": 30128 }] }] }
node4 9.675s 2025-10-21 17:47:14.070 42 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 9.676s 2025-10-21 17:47:14.071 43 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node4 9.695s 2025-10-21 17:47:14.090 44 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: d479a3eefb4435eeea2b3a460d3043b4384da906e617a7bcad4dde964b87cb9229d0d76f189a4f1b03822abfa4e02f42 (root) ConsistencyTestingToolState / ill-liquid-nurse-saddle 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate 1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you 2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
node4 9.907s 2025-10-21 17:47:14.302 46 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node4 9.912s 2025-10-21 17:47:14.307 47 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node4 9.918s 2025-10-21 17:47:14.313 48 INFO STARTUP <<start-node-4>> ConsistencyTestingToolMain: init called in Main for node 4.
node4 9.919s 2025-10-21 17:47:14.314 49 INFO STARTUP <<start-node-4>> SwirldsPlatform: Starting platform 4
node4 9.920s 2025-10-21 17:47:14.315 50 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node4 9.924s 2025-10-21 17:47:14.319 51 INFO STARTUP <<start-node-4>> CycleFinder: No cyclical back pressure detected in wiring model.
node4 9.926s 2025-10-21 17:47:14.321 52 INFO STARTUP <<start-node-4>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node4 9.926s 2025-10-21 17:47:14.321 53 INFO STARTUP <<start-node-4>> InputWireChecks: All input wires have been bound.
node4 9.928s 2025-10-21 17:47:14.323 54 WARN STARTUP <<start-node-4>> PcesFileTracker: No preconsensus event files available
node4 9.929s 2025-10-21 17:47:14.324 55 INFO STARTUP <<start-node-4>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node4 9.930s 2025-10-21 17:47:14.325 56 INFO STARTUP <<start-node-4>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node4 9.931s 2025-10-21 17:47:14.326 57 INFO STARTUP <<app: appMain 4>> ConsistencyTestingToolMain: run called in Main.
node4 9.933s 2025-10-21 17:47:14.328 58 INFO PLATFORM_STATUS <platformForkJoinThread-1> DefaultStatusStateMachine: Platform spent 176.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node4 9.939s 2025-10-21 17:47:14.334 59 INFO PLATFORM_STATUS <platformForkJoinThread-1> DefaultStatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node1 10.255s 2025-10-21 17:47:14.650 60 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting1.csv' ]
node1 10.258s 2025-10-21 17:47:14.653 61 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node2 10.278s 2025-10-21 17:47:14.673 60 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting2.csv' ]
node2 10.281s 2025-10-21 17:47:14.676 61 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node4 12.930s 2025-10-21 17:47:17.325 60 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ]
node4 12.933s 2025-10-21 17:47:17.328 61 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node0 16.215s 2025-10-21 17:47:20.610 62 INFO PLATFORM_STATUS <platformForkJoinThread-6> DefaultStatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node3 16.565s 2025-10-21 17:47:20.960 62 INFO PLATFORM_STATUS <platformForkJoinThread-5> DefaultStatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node1 17.347s 2025-10-21 17:47:21.742 62 INFO PLATFORM_STATUS <platformForkJoinThread-5> DefaultStatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node2 17.370s 2025-10-21 17:47:21.765 62 INFO PLATFORM_STATUS <platformForkJoinThread-6> DefaultStatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
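The DefaultStatusStateMachine entries above all share one message shape ("Platform spent <duration> in <FROM>. Now in <TO>"), which makes per-node status timelines easy to extract when triaging a run. A minimal parsing sketch (the regex and helper name are ours, assumed from the messages shown here):

```python
import re

# Matches the DefaultStatusStateMachine message body, e.g.
# "Platform spent 10.1 s in OBSERVING. Now in CHECKING"
TRANSITION_RE = re.compile(
    r"Platform spent (?P<amount>[\d.]+) (?P<unit>ms|s) "
    r"in (?P<src>\w+)\. Now in (?P<dst>\w+)"
)

def parse_transition(message: str):
    """Return (from_status, to_status, seconds_spent), or None if no match."""
    m = TRANSITION_RE.search(message)
    if m is None:
        return None
    seconds = float(m.group("amount"))
    if m.group("unit") == "ms":
        seconds /= 1000.0  # normalize millisecond durations to seconds
    return m.group("src"), m.group("dst"), seconds
```

Applied across all five nodes, this recovers the STARTING_UP → REPLAYING_EVENTS → OBSERVING → CHECKING → ACTIVE progression visible in this run.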
node1 18.506s 2025-10-21 17:47:22.901 64 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node0 18.520s 2025-10-21 17:47:22.915 63 INFO PLATFORM_STATUS <platformForkJoinThread-4> DefaultStatusStateMachine: Platform spent 2.3 s in CHECKING. Now in ACTIVE
node0 18.523s 2025-10-21 17:47:22.918 65 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node2 18.548s 2025-10-21 17:47:22.943 64 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node3 18.585s 2025-10-21 17:47:22.980 64 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node4 18.703s 2025-10-21 17:47:23.098 63 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node4 18.718s 2025-10-21 17:47:23.113 78 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1
node4 18.721s 2025-10-21 17:47:23.116 79 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node0 18.821s 2025-10-21 17:47:23.216 80 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1
node0 18.823s 2025-10-21 17:47:23.218 81 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node2 18.876s 2025-10-21 17:47:23.271 79 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1
node2 18.879s 2025-10-21 17:47:23.274 80 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node1 18.905s 2025-10-21 17:47:23.300 79 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1
node1 18.908s 2025-10-21 17:47:23.303 80 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node3 18.975s 2025-10-21 17:47:23.370 79 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1
node3 18.977s 2025-10-21 17:47:23.372 80 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node4 18.992s 2025-10-21 17:47:23.387 111 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node4 18.995s 2025-10-21 17:47:23.390 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1 Timestamp: 2025-10-21T17:47:21.753497624Z Next consensus number: 1 Legacy running event hash: 89782ba7bdfc0a4079a90b0f0a4d744947e538f6ba778dbed207d3d7a0a16b563e93b2d91ab65792bae13bed8d983079 Legacy running event mnemonic: speed-insane-dog-elite Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: b125e8a888988d95e0487d041370624131ff156c117b7cdbd8bd9a17da1605f71f5f65473a4bb65a1c52fdcc0d21c66e (root) ConsistencyTestingToolState / swarm-twelve-unfair-estate 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 steak-awkward-uncover-twin 1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you 2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random 3 StringLeaf 1931016930446315563 /3 almost-atom-novel-view 4 StringLeaf 1 /4 wreck-whale-old-bottom
node2 19.009s 2025-10-21 17:47:23.404 94 INFO PLATFORM_STATUS <platformForkJoinThread-2> DefaultStatusStateMachine: Platform spent 1.6 s in CHECKING. Now in ACTIVE
node4 19.030s 2025-10-21 17:47:23.425 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+47+20.838533454Z_seq0_minr1_maxr501_orgn0.pces
node4 19.031s 2025-10-21 17:47:23.426 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+47+20.838533454Z_seq0_minr1_maxr501_orgn0.pces
node4 19.031s 2025-10-21 17:47:23.426 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 19.032s 2025-10-21 17:47:23.427 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 19.039s 2025-10-21 17:47:23.434 117 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
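The .pces file names logged by BestEffortPcesFileCopy encode their own metadata: a creation timestamp (with ':' replaced by '+' for filesystem safety), a sequence number, the minimum and maximum rounds, and an origin round. A sketch of pulling those fields back out (the regex and returned field names are our assumptions, inferred from the names shown above):

```python
import re
from pathlib import Path

# e.g. 2025-10-21T17+47+20.838533454Z_seq0_minr1_maxr501_orgn0.pces
PCES_NAME_RE = re.compile(
    r"^(?P<timestamp>.+Z)_seq(?P<seq>\d+)"
    r"_minr(?P<minr>\d+)_maxr(?P<maxr>\d+)_orgn(?P<orgn>\d+)\.pces$"
)

def parse_pces_name(path: str) -> dict:
    """Parse the metadata encoded in a .pces file name."""
    m = PCES_NAME_RE.match(Path(path).name)
    if m is None:
        raise ValueError(f"not a recognized .pces name: {path}")
    return {
        # restore the ':' separators that were swapped for '+' in the name
        "timestamp": m.group("timestamp").replace("+", ":"),
        "sequence": int(m.group("seq")),
        "min_round": int(m.group("minr")),
        "max_round": int(m.group("maxr")),
        "origin": int(m.group("orgn")),
    }
```

For node 4's file above, this yields a round span of 1 to 501 with origin 0, matching the "Lower bound: 1" criterion in the copy log.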
node3 19.045s 2025-10-21 17:47:23.440 94 INFO PLATFORM_STATUS <platformForkJoinThread-2> DefaultStatusStateMachine: Platform spent 2.5 s in CHECKING. Now in ACTIVE
node0 19.070s 2025-10-21 17:47:23.465 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node0 19.073s 2025-10-21 17:47:23.468 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1 Timestamp: 2025-10-21T17:47:21.753497624Z Next consensus number: 1 Legacy running event hash: 89782ba7bdfc0a4079a90b0f0a4d744947e538f6ba778dbed207d3d7a0a16b563e93b2d91ab65792bae13bed8d983079 Legacy running event mnemonic: speed-insane-dog-elite Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: b125e8a888988d95e0487d041370624131ff156c117b7cdbd8bd9a17da1605f71f5f65473a4bb65a1c52fdcc0d21c66e (root) ConsistencyTestingToolState / swarm-twelve-unfair-estate 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 steak-awkward-uncover-twin 1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you 2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random 3 StringLeaf 1931016930446315563 /3 almost-atom-novel-view 4 StringLeaf 1 /4 wreck-whale-old-bottom
node0 19.105s 2025-10-21 17:47:23.500 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+47+20.649029933Z_seq0_minr1_maxr501_orgn0.pces
node0 19.106s 2025-10-21 17:47:23.501 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+47+20.649029933Z_seq0_minr1_maxr501_orgn0.pces
node0 19.106s 2025-10-21 17:47:23.501 117 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 19.107s 2025-10-21 17:47:23.502 118 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 19.112s 2025-10-21 17:47:23.507 119 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 19.127s 2025-10-21 17:47:23.522 120 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node2 19.130s 2025-10-21 17:47:23.525 121 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1 Timestamp: 2025-10-21T17:47:21.753497624Z Next consensus number: 1 Legacy running event hash: 89782ba7bdfc0a4079a90b0f0a4d744947e538f6ba778dbed207d3d7a0a16b563e93b2d91ab65792bae13bed8d983079 Legacy running event mnemonic: speed-insane-dog-elite Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: b125e8a888988d95e0487d041370624131ff156c117b7cdbd8bd9a17da1605f71f5f65473a4bb65a1c52fdcc0d21c66e (root) ConsistencyTestingToolState / swarm-twelve-unfair-estate 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 steak-awkward-uncover-twin 1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you 2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random 3 StringLeaf 1931016930446315563 /3 almost-atom-novel-view 4 StringLeaf 1 /4 wreck-whale-old-bottom
node1 19.159s 2025-10-21 17:47:23.554 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node2 19.161s 2025-10-21 17:47:23.556 122 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+47+21.032992755Z_seq0_minr1_maxr501_orgn0.pces
node1 19.162s 2025-10-21 17:47:23.557 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1 Timestamp: 2025-10-21T17:47:21.753497624Z Next consensus number: 1 Legacy running event hash: 89782ba7bdfc0a4079a90b0f0a4d744947e538f6ba778dbed207d3d7a0a16b563e93b2d91ab65792bae13bed8d983079 Legacy running event mnemonic: speed-insane-dog-elite Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: b125e8a888988d95e0487d041370624131ff156c117b7cdbd8bd9a17da1605f71f5f65473a4bb65a1c52fdcc0d21c66e (root) ConsistencyTestingToolState / swarm-twelve-unfair-estate 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 steak-awkward-uncover-twin 1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you 2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random 3 StringLeaf 1931016930446315563 /3 almost-atom-novel-view 4 StringLeaf 1 /4 wreck-whale-old-bottom
node2 19.162s 2025-10-21 17:47:23.557 123 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+47+21.032992755Z_seq0_minr1_maxr501_orgn0.pces
node2 19.162s 2025-10-21 17:47:23.557 124 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 19.163s 2025-10-21 17:47:23.558 125 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 19.168s 2025-10-21 17:47:23.563 126 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 19.193s 2025-10-21 17:47:23.588 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+47+20.959386129Z_seq0_minr1_maxr501_orgn0.pces
node1 19.194s 2025-10-21 17:47:23.589 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+47+20.959386129Z_seq0_minr1_maxr501_orgn0.pces
node1 19.194s 2025-10-21 17:47:23.589 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 19.195s 2025-10-21 17:47:23.590 117 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 19.200s 2025-10-21 17:47:23.595 118 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 19.234s 2025-10-21 17:47:23.629 120 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node3 19.237s 2025-10-21 17:47:23.632 121 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1 Timestamp: 2025-10-21T17:47:21.753497624Z Next consensus number: 1 Legacy running event hash: 89782ba7bdfc0a4079a90b0f0a4d744947e538f6ba778dbed207d3d7a0a16b563e93b2d91ab65792bae13bed8d983079 Legacy running event mnemonic: speed-insane-dog-elite Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: b125e8a888988d95e0487d041370624131ff156c117b7cdbd8bd9a17da1605f71f5f65473a4bb65a1c52fdcc0d21c66e (root) ConsistencyTestingToolState / swarm-twelve-unfair-estate 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 steak-awkward-uncover-twin 1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you 2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random 3 StringLeaf 1931016930446315563 /3 almost-atom-novel-view 4 StringLeaf 1 /4 wreck-whale-old-bottom
node3 19.278s 2025-10-21 17:47:23.673 122 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+47+20.831243397Z_seq0_minr1_maxr501_orgn0.pces
node3 19.279s 2025-10-21 17:47:23.674 123 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+47+20.831243397Z_seq0_minr1_maxr501_orgn0.pces
node3 19.279s 2025-10-21 17:47:23.674 124 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 19.280s 2025-10-21 17:47:23.675 125 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 19.285s 2025-10-21 17:47:23.680 126 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 19.331s 2025-10-21 17:47:23.726 120 INFO PLATFORM_STATUS <platformForkJoinThread-7> DefaultStatusStateMachine: Platform spent 2.0 s in CHECKING. Now in ACTIVE
node4 20.027s 2025-10-21 17:47:24.422 130 INFO PLATFORM_STATUS <platformForkJoinThread-6> DefaultStatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node4 22.379s 2025-10-21 17:47:26.774 162 INFO PLATFORM_STATUS <platformForkJoinThread-2> DefaultStatusStateMachine: Platform spent 2.4 s in CHECKING. Now in ACTIVE
node0 57.081s 2025-10-21 17:48:01.476 680 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 56 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 57.119s 2025-10-21 17:48:01.514 720 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 56 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 57.207s 2025-10-21 17:48:01.602 694 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 56 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 57.240s 2025-10-21 17:48:01.635 698 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 56 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 57.263s 2025-10-21 17:48:01.658 688 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 56 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 57.730s 2025-10-21 17:48:02.125 691 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 56 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/56
node1 57.730s 2025-10-21 17:48:02.125 692 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/6 for round 56
node3 57.790s 2025-10-21 17:48:02.185 701 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 56 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/56
node3 57.791s 2025-10-21 17:48:02.186 702 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/6 for round 56
node0 57.799s 2025-10-21 17:48:02.194 705 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 56 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/56
node0 57.799s 2025-10-21 17:48:02.194 706 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/6 for round 56
node1 57.811s 2025-10-21 17:48:02.206 725 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/6 for round 56
node1 57.813s 2025-10-21 17:48:02.208 726 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 56
Timestamp: 2025-10-21T17:48:00.172549Z
Next consensus number: 1538
Legacy running event hash: 3eb464c1c9c9a0d0b84efc9cf9ba0bc0fa52520fa65159ff6d3dc70d4853f6842431984000ae8a03dfc5cea20c872461
Legacy running event mnemonic: story-oblige-shoulder-toward
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1801234485
Root hash: 9588028227b887fc4be199f857e0da1025d0fe46fe20a34a0888108d80a7b8a63a465bd7f0ddd08ec80ea6cee43285d0
(root) ConsistencyTestingToolState / excuse-expect-warfare-above
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 summer-limit-spray-galaxy
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf -5141143355958766083 /3 hill-jaguar-client-wolf
    4 StringLeaf 56 /4 piece-witness-slogan-broom
node1 57.820s 2025-10-21 17:48:02.215 727 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+47+20.959386129Z_seq0_minr1_maxr501_orgn0.pces
node1 57.821s 2025-10-21 17:48:02.216 728 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 29
File: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+47+20.959386129Z_seq0_minr1_maxr501_orgn0.pces
node1 57.821s 2025-10-21 17:48:02.216 729 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 57.822s 2025-10-21 17:48:02.217 730 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 57.823s 2025-10-21 17:48:02.218 731 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 56 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/56 {"round":56,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/56/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 57.873s 2025-10-21 17:48:02.268 726 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 56 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/56
node2 57.874s 2025-10-21 17:48:02.269 727 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/6 for round 56
node0 57.875s 2025-10-21 17:48:02.270 739 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/6 for round 56
node3 57.875s 2025-10-21 17:48:02.270 747 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/6 for round 56
node0 57.877s 2025-10-21 17:48:02.272 740 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 56
Timestamp: 2025-10-21T17:48:00.172549Z
Next consensus number: 1538
Legacy running event hash: 3eb464c1c9c9a0d0b84efc9cf9ba0bc0fa52520fa65159ff6d3dc70d4853f6842431984000ae8a03dfc5cea20c872461
Legacy running event mnemonic: story-oblige-shoulder-toward
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1801234485
Root hash: 9588028227b887fc4be199f857e0da1025d0fe46fe20a34a0888108d80a7b8a63a465bd7f0ddd08ec80ea6cee43285d0
(root) ConsistencyTestingToolState / excuse-expect-warfare-above
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 summer-limit-spray-galaxy
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf -5141143355958766083 /3 hill-jaguar-client-wolf
    4 StringLeaf 56 /4 piece-witness-slogan-broom
node3 57.877s 2025-10-21 17:48:02.272 748 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 56
Timestamp: 2025-10-21T17:48:00.172549Z
Next consensus number: 1538
Legacy running event hash: 3eb464c1c9c9a0d0b84efc9cf9ba0bc0fa52520fa65159ff6d3dc70d4853f6842431984000ae8a03dfc5cea20c872461
Legacy running event mnemonic: story-oblige-shoulder-toward
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1801234485
Root hash: 9588028227b887fc4be199f857e0da1025d0fe46fe20a34a0888108d80a7b8a63a465bd7f0ddd08ec80ea6cee43285d0
(root) ConsistencyTestingToolState / excuse-expect-warfare-above
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 summer-limit-spray-galaxy
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf -5141143355958766083 /3 hill-jaguar-client-wolf
    4 StringLeaf 56 /4 piece-witness-slogan-broom
node3 57.886s 2025-10-21 17:48:02.281 749 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+47+20.831243397Z_seq0_minr1_maxr501_orgn0.pces
node3 57.886s 2025-10-21 17:48:02.281 750 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 29
File: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+47+20.831243397Z_seq0_minr1_maxr501_orgn0.pces
node3 57.886s 2025-10-21 17:48:02.281 751 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 57.887s 2025-10-21 17:48:02.282 744 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+47+20.649029933Z_seq0_minr1_maxr501_orgn0.pces
node0 57.887s 2025-10-21 17:48:02.282 745 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 29
File: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+47+20.649029933Z_seq0_minr1_maxr501_orgn0.pces
node0 57.887s 2025-10-21 17:48:02.282 746 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 57.888s 2025-10-21 17:48:02.283 747 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 57.888s 2025-10-21 17:48:02.283 752 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 57.888s 2025-10-21 17:48:02.283 753 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 56 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/56 {"round":56,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/56/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 57.889s 2025-10-21 17:48:02.284 748 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 56 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/56 {"round":56,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/56/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 57.889s 2025-10-21 17:48:02.284 697 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 56 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/56
node4 57.890s 2025-10-21 17:48:02.285 698 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/6 for round 56
node2 57.963s 2025-10-21 17:48:02.358 765 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/6 for round 56
node2 57.966s 2025-10-21 17:48:02.361 766 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 56
Timestamp: 2025-10-21T17:48:00.172549Z
Next consensus number: 1538
Legacy running event hash: 3eb464c1c9c9a0d0b84efc9cf9ba0bc0fa52520fa65159ff6d3dc70d4853f6842431984000ae8a03dfc5cea20c872461
Legacy running event mnemonic: story-oblige-shoulder-toward
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1801234485
Root hash: 9588028227b887fc4be199f857e0da1025d0fe46fe20a34a0888108d80a7b8a63a465bd7f0ddd08ec80ea6cee43285d0
(root) ConsistencyTestingToolState / excuse-expect-warfare-above
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 summer-limit-spray-galaxy
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf -5141143355958766083 /3 hill-jaguar-client-wolf
    4 StringLeaf 56 /4 piece-witness-slogan-broom
node2 57.975s 2025-10-21 17:48:02.370 767 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+47+21.032992755Z_seq0_minr1_maxr501_orgn0.pces
node2 57.976s 2025-10-21 17:48:02.371 768 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 29
File: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+47+21.032992755Z_seq0_minr1_maxr501_orgn0.pces
node2 57.976s 2025-10-21 17:48:02.371 769 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 57.977s 2025-10-21 17:48:02.372 770 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 57.978s 2025-10-21 17:48:02.373 771 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 56 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/56 {"round":56,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/56/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 57.981s 2025-10-21 17:48:02.376 743 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/6 for round 56
node4 57.985s 2025-10-21 17:48:02.380 744 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 56
Timestamp: 2025-10-21T17:48:00.172549Z
Next consensus number: 1538
Legacy running event hash: 3eb464c1c9c9a0d0b84efc9cf9ba0bc0fa52520fa65159ff6d3dc70d4853f6842431984000ae8a03dfc5cea20c872461
Legacy running event mnemonic: story-oblige-shoulder-toward
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1801234485
Root hash: 9588028227b887fc4be199f857e0da1025d0fe46fe20a34a0888108d80a7b8a63a465bd7f0ddd08ec80ea6cee43285d0
(root) ConsistencyTestingToolState / excuse-expect-warfare-above
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 summer-limit-spray-galaxy
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf -5141143355958766083 /3 hill-jaguar-client-wolf
    4 StringLeaf 56 /4 piece-witness-slogan-broom
node4 57.997s 2025-10-21 17:48:02.392 745 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+47+20.838533454Z_seq0_minr1_maxr501_orgn0.pces
node4 57.998s 2025-10-21 17:48:02.393 746 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 29
File: data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+47+20.838533454Z_seq0_minr1_maxr501_orgn0.pces
node4 57.998s 2025-10-21 17:48:02.393 747 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 58.000s 2025-10-21 17:48:02.395 748 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 58.001s 2025-10-21 17:48:02.396 749 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 56 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/56 {"round":56,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/56/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 1m 56.861s 2025-10-21 17:49:01.256 1868 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 152 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 1m 56.913s 2025-10-21 17:49:01.308 1822 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 152 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 1m 56.947s 2025-10-21 17:49:01.342 1846 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 152 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 1m 56.991s 2025-10-21 17:49:01.386 1838 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 152 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 1m 57.000s 2025-10-21 17:49:01.395 1848 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 152 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 1m 57.205s 2025-10-21 17:49:01.600 1841 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 152 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/152
node4 1m 57.205s 2025-10-21 17:49:01.600 1842 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/11 for round 152
node4 1m 57.298s 2025-10-21 17:49:01.693 1875 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/11 for round 152
node4 1m 57.300s 2025-10-21 17:49:01.695 1876 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 152
Timestamp: 2025-10-21T17:49:00.050003471Z
Next consensus number: 4083
Legacy running event hash: c7331aef7ce0be1b7d7c94a7f0218efdc5edc300beb4f2df591e6eece8864012bc62a12f7b88f07e8d63810af247bebb
Legacy running event mnemonic: boost-fuel-orchard-device
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 144280300
Root hash: eba40648bfbaa8818ff37e28cf30a9aecaef098c0a1d3171b97efad971463ef198dcbc4d9db387a0de6c088e01fbdbf9
(root) ConsistencyTestingToolState / merit-simple-angle-useful
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 shoot-lucky-someone-trip
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf -1514865893907368508 /3 flag-benefit-tip-define
    4 StringLeaf 152 /4 cement-fame-near-border
node4 1m 57.308s 2025-10-21 17:49:01.703 1877 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+47+20.838533454Z_seq0_minr1_maxr501_orgn0.pces
node4 1m 57.308s 2025-10-21 17:49:01.703 1878 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 125
File: data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+47+20.838533454Z_seq0_minr1_maxr501_orgn0.pces
node4 1m 57.309s 2025-10-21 17:49:01.704 1879 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 1m 57.312s 2025-10-21 17:49:01.707 1880 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 1m 57.312s 2025-10-21 17:49:01.707 1881 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 152 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/152 {"round":152,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/152/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 1m 57.358s 2025-10-21 17:49:01.753 1849 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 152 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/152
node2 1m 57.358s 2025-10-21 17:49:01.753 1850 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/11 for round 152
node0 1m 57.394s 2025-10-21 17:49:01.789 1881 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 152 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/152
node0 1m 57.394s 2025-10-21 17:49:01.789 1882 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/11 for round 152
node3 1m 57.422s 2025-10-21 17:49:01.817 1851 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 152 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/152
node3 1m 57.422s 2025-10-21 17:49:01.817 1852 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/11 for round 152
node2 1m 57.446s 2025-10-21 17:49:01.841 1883 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/11 for round 152
node2 1m 57.449s 2025-10-21 17:49:01.844 1884 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 152
Timestamp: 2025-10-21T17:49:00.050003471Z
Next consensus number: 4083
Legacy running event hash: c7331aef7ce0be1b7d7c94a7f0218efdc5edc300beb4f2df591e6eece8864012bc62a12f7b88f07e8d63810af247bebb
Legacy running event mnemonic: boost-fuel-orchard-device
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 144280300
Root hash: eba40648bfbaa8818ff37e28cf30a9aecaef098c0a1d3171b97efad971463ef198dcbc4d9db387a0de6c088e01fbdbf9
(root) ConsistencyTestingToolState / merit-simple-angle-useful
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 shoot-lucky-someone-trip
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf -1514865893907368508 /3 flag-benefit-tip-define
    4 StringLeaf 152 /4 cement-fame-near-border
node1 1m 57.454s 2025-10-21 17:49:01.849 1825 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 152 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/152
node1 1m 57.454s 2025-10-21 17:49:01.849 1826 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/11 for round 152
node2 1m 57.455s 2025-10-21 17:49:01.850 1885 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+47+21.032992755Z_seq0_minr1_maxr501_orgn0.pces
node2 1m 57.455s 2025-10-21 17:49:01.850 1886 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 125
File: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+47+21.032992755Z_seq0_minr1_maxr501_orgn0.pces
node2 1m 57.455s 2025-10-21 17:49:01.850 1887 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 1m 57.459s 2025-10-21 17:49:01.854 1888 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 1m 57.459s 2025-10-21 17:49:01.854 1889 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 152 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/152 {"round":152,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/152/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 1m 57.482s 2025-10-21 17:49:01.877 1923 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/11 for round 152
node0 1m 57.484s 2025-10-21 17:49:01.879 1924 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 152
Timestamp: 2025-10-21T17:49:00.050003471Z
Next consensus number: 4083
Legacy running event hash: c7331aef7ce0be1b7d7c94a7f0218efdc5edc300beb4f2df591e6eece8864012bc62a12f7b88f07e8d63810af247bebb
Legacy running event mnemonic: boost-fuel-orchard-device
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 144280300
Root hash: eba40648bfbaa8818ff37e28cf30a9aecaef098c0a1d3171b97efad971463ef198dcbc4d9db387a0de6c088e01fbdbf9
(root) ConsistencyTestingToolState / merit-simple-angle-useful
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 shoot-lucky-someone-trip
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf -1514865893907368508 /3 flag-benefit-tip-define
    4 StringLeaf 152 /4 cement-fame-near-border
node0 1m 57.494s 2025-10-21 17:49:01.889 1925 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+47+20.649029933Z_seq0_minr1_maxr501_orgn0.pces
node0 1m 57.494s 2025-10-21 17:49:01.889 1926 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 125
File: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+47+20.649029933Z_seq0_minr1_maxr501_orgn0.pces
node0 1m 57.494s 2025-10-21 17:49:01.889 1927 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 1m 57.498s 2025-10-21 17:49:01.893 1928 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 1m 57.498s 2025-10-21 17:49:01.893 1929 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 152 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/152 {"round":152,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/152/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 1m 57.518s 2025-10-21 17:49:01.913 1897 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/11 for round 152
node3 1m 57.520s 2025-10-21 17:49:01.915 1898 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 152
Timestamp: 2025-10-21T17:49:00.050003471Z
Next consensus number: 4083
Legacy running event hash: c7331aef7ce0be1b7d7c94a7f0218efdc5edc300beb4f2df591e6eece8864012bc62a12f7b88f07e8d63810af247bebb
Legacy running event mnemonic: boost-fuel-orchard-device
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 144280300
Root hash: eba40648bfbaa8818ff37e28cf30a9aecaef098c0a1d3171b97efad971463ef198dcbc4d9db387a0de6c088e01fbdbf9
(root) ConsistencyTestingToolState / merit-simple-angle-useful
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 shoot-lucky-someone-trip
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf -1514865893907368508 /3 flag-benefit-tip-define
    4 StringLeaf 152 /4 cement-fame-near-border
node3 1m 57.528s 2025-10-21 17:49:01.923 1899 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+47+20.831243397Z_seq0_minr1_maxr501_orgn0.pces
node3 1m 57.529s 2025-10-21 17:49:01.924 1900 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 125 File: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+47+20.831243397Z_seq0_minr1_maxr501_orgn0.pces
node3 1m 57.529s 2025-10-21 17:49:01.924 1901 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 1m 57.532s 2025-10-21 17:49:01.927 1902 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 1m 57.532s 2025-10-21 17:49:01.927 1903 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 152 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/152 {"round":152,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/152/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 1m 57.540s 2025-10-21 17:49:01.935 1867 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/11 for round 152
node1 1m 57.543s 2025-10-21 17:49:01.938 1868 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 152
Timestamp: 2025-10-21T17:49:00.050003471Z
Next consensus number: 4083
Legacy running event hash: c7331aef7ce0be1b7d7c94a7f0218efdc5edc300beb4f2df591e6eece8864012bc62a12f7b88f07e8d63810af247bebb
Legacy running event mnemonic: boost-fuel-orchard-device
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 144280300
Root hash: eba40648bfbaa8818ff37e28cf30a9aecaef098c0a1d3171b97efad971463ef198dcbc4d9db387a0de6c088e01fbdbf9
(root) ConsistencyTestingToolState                       /    merit-simple-angle-useful
    0  SingletonNode  PlatformStateService.PLATFORM_STATE  /0   shoot-lucky-someone-trip
    1  VirtualMap     RosterService.ROSTERS                /1   saddle-disorder-track-you
    2  SingletonNode  RosterService.ROSTER_STATE           /2   surround-giant-patient-random
    3  StringLeaf     -1514865893907368508                 /3   flag-benefit-tip-define
    4  StringLeaf     152                                  /4   cement-fame-near-border
node1 1m 57.550s 2025-10-21 17:49:01.945 1869 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+47+20.959386129Z_seq0_minr1_maxr501_orgn0.pces
node1 1m 57.550s 2025-10-21 17:49:01.945 1870 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 125 File: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+47+20.959386129Z_seq0_minr1_maxr501_orgn0.pces
node1 1m 57.550s 2025-10-21 17:49:01.945 1871 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 1m 57.553s 2025-10-21 17:49:01.948 1872 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 1m 57.554s 2025-10-21 17:49:01.949 1873 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 152 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/152 {"round":152,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/152/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 2m 56.865s 2025-10-21 17:50:01.260 3050 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 260 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 2m 56.921s 2025-10-21 17:50:01.316 3070 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 260 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 2m 56.950s 2025-10-21 17:50:01.345 3106 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 260 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 2m 56.997s 2025-10-21 17:50:01.392 3108 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 260 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 2m 57.035s 2025-10-21 17:50:01.430 3080 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 260 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 2m 57.266s 2025-10-21 17:50:01.661 3069 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 260 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/260
node1 2m 57.266s 2025-10-21 17:50:01.661 3070 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/16 for round 260
node2 2m 57.336s 2025-10-21 17:50:01.731 3111 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 260 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/260
node2 2m 57.337s 2025-10-21 17:50:01.732 3112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/16 for round 260
node1 2m 57.357s 2025-10-21 17:50:01.752 3103 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/16 for round 260
node1 2m 57.358s 2025-10-21 17:50:01.753 3104 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 260
Timestamp: 2025-10-21T17:50:00.499448Z
Next consensus number: 6602
Legacy running event hash: 1e676717710e2ed70e694326c9e684ef92213f84006bdda9fc51f1022a1909916417e70fdb02448b1183b0f2b05488fa
Legacy running event mnemonic: smooth-title-canyon-sustain
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1613638831
Root hash: fc28b562ac0c0fc2a3195330bb8d9eb3315a180e6e5c37293004f98a61805cccf25b44b9a9746cf0a71d03f5c2b07e9f
(root) ConsistencyTestingToolState                       /    kitchen-fatigue-leader-weapon
    0  SingletonNode  PlatformStateService.PLATFORM_STATE  /0   spray-segment-dose-infant
    1  VirtualMap     RosterService.ROSTERS                /1   saddle-disorder-track-you
    2  SingletonNode  RosterService.ROSTER_STATE           /2   surround-giant-patient-random
    3  StringLeaf     4579450462933924634                  /3   also-year-jelly-concert
    4  StringLeaf     260                                  /4   lawn-delay-virtual-fine
node1 2m 57.365s 2025-10-21 17:50:01.760 3105 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+47+20.959386129Z_seq0_minr1_maxr501_orgn0.pces
node1 2m 57.366s 2025-10-21 17:50:01.761 3106 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 232 File: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+47+20.959386129Z_seq0_minr1_maxr501_orgn0.pces
node1 2m 57.366s 2025-10-21 17:50:01.761 3107 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 2m 57.371s 2025-10-21 17:50:01.766 3108 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 2m 57.371s 2025-10-21 17:50:01.766 3109 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 260 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/260 {"round":260,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/260/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 2m 57.429s 2025-10-21 17:50:01.824 3145 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/16 for round 260
node2 2m 57.431s 2025-10-21 17:50:01.826 3146 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 260
Timestamp: 2025-10-21T17:50:00.499448Z
Next consensus number: 6602
Legacy running event hash: 1e676717710e2ed70e694326c9e684ef92213f84006bdda9fc51f1022a1909916417e70fdb02448b1183b0f2b05488fa
Legacy running event mnemonic: smooth-title-canyon-sustain
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1613638831
Root hash: fc28b562ac0c0fc2a3195330bb8d9eb3315a180e6e5c37293004f98a61805cccf25b44b9a9746cf0a71d03f5c2b07e9f
(root) ConsistencyTestingToolState                       /    kitchen-fatigue-leader-weapon
    0  SingletonNode  PlatformStateService.PLATFORM_STATE  /0   spray-segment-dose-infant
    1  VirtualMap     RosterService.ROSTERS                /1   saddle-disorder-track-you
    2  SingletonNode  RosterService.ROSTER_STATE           /2   surround-giant-patient-random
    3  StringLeaf     4579450462933924634                  /3   also-year-jelly-concert
    4  StringLeaf     260                                  /4   lawn-delay-virtual-fine
node2 2m 57.437s 2025-10-21 17:50:01.832 3147 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+47+21.032992755Z_seq0_minr1_maxr501_orgn0.pces
node2 2m 57.437s 2025-10-21 17:50:01.832 3148 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 232 File: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+47+21.032992755Z_seq0_minr1_maxr501_orgn0.pces
node2 2m 57.437s 2025-10-21 17:50:01.832 3149 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 2m 57.442s 2025-10-21 17:50:01.837 3150 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 2m 57.442s 2025-10-21 17:50:01.837 3151 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 260 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/260 {"round":260,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/260/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 2m 57.463s 2025-10-21 17:50:01.858 3083 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 260 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/260
node4 2m 57.464s 2025-10-21 17:50:01.859 3084 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/16 for round 260
node3 2m 57.475s 2025-10-21 17:50:01.870 3089 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 260 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/260
node3 2m 57.476s 2025-10-21 17:50:01.871 3090 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/16 for round 260
node0 2m 57.512s 2025-10-21 17:50:01.907 3125 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 260 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/260
node0 2m 57.513s 2025-10-21 17:50:01.908 3126 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/16 for round 260
node4 2m 57.559s 2025-10-21 17:50:01.954 3121 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/16 for round 260
node4 2m 57.562s 2025-10-21 17:50:01.957 3122 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 260
Timestamp: 2025-10-21T17:50:00.499448Z
Next consensus number: 6602
Legacy running event hash: 1e676717710e2ed70e694326c9e684ef92213f84006bdda9fc51f1022a1909916417e70fdb02448b1183b0f2b05488fa
Legacy running event mnemonic: smooth-title-canyon-sustain
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1613638831
Root hash: fc28b562ac0c0fc2a3195330bb8d9eb3315a180e6e5c37293004f98a61805cccf25b44b9a9746cf0a71d03f5c2b07e9f
(root) ConsistencyTestingToolState                       /    kitchen-fatigue-leader-weapon
    0  SingletonNode  PlatformStateService.PLATFORM_STATE  /0   spray-segment-dose-infant
    1  VirtualMap     RosterService.ROSTERS                /1   saddle-disorder-track-you
    2  SingletonNode  RosterService.ROSTER_STATE           /2   surround-giant-patient-random
    3  StringLeaf     4579450462933924634                  /3   also-year-jelly-concert
    4  StringLeaf     260                                  /4   lawn-delay-virtual-fine
node3 2m 57.563s 2025-10-21 17:50:01.958 3131 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/16 for round 260
node3 2m 57.565s 2025-10-21 17:50:01.960 3132 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 260
Timestamp: 2025-10-21T17:50:00.499448Z
Next consensus number: 6602
Legacy running event hash: 1e676717710e2ed70e694326c9e684ef92213f84006bdda9fc51f1022a1909916417e70fdb02448b1183b0f2b05488fa
Legacy running event mnemonic: smooth-title-canyon-sustain
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1613638831
Root hash: fc28b562ac0c0fc2a3195330bb8d9eb3315a180e6e5c37293004f98a61805cccf25b44b9a9746cf0a71d03f5c2b07e9f
(root) ConsistencyTestingToolState                       /    kitchen-fatigue-leader-weapon
    0  SingletonNode  PlatformStateService.PLATFORM_STATE  /0   spray-segment-dose-infant
    1  VirtualMap     RosterService.ROSTERS                /1   saddle-disorder-track-you
    2  SingletonNode  RosterService.ROSTER_STATE           /2   surround-giant-patient-random
    3  StringLeaf     4579450462933924634                  /3   also-year-jelly-concert
    4  StringLeaf     260                                  /4   lawn-delay-virtual-fine
node4 2m 57.571s 2025-10-21 17:50:01.966 3123 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+47+20.838533454Z_seq0_minr1_maxr501_orgn0.pces
node4 2m 57.571s 2025-10-21 17:50:01.966 3124 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 232 File: data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+47+20.838533454Z_seq0_minr1_maxr501_orgn0.pces
node3 2m 57.572s 2025-10-21 17:50:01.967 3133 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+47+20.831243397Z_seq0_minr1_maxr501_orgn0.pces
node3 2m 57.572s 2025-10-21 17:50:01.967 3134 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 232 File: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+47+20.831243397Z_seq0_minr1_maxr501_orgn0.pces
node3 2m 57.572s 2025-10-21 17:50:01.967 3135 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 2m 57.572s 2025-10-21 17:50:01.967 3125 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 2m 57.577s 2025-10-21 17:50:01.972 3136 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 2m 57.577s 2025-10-21 17:50:01.972 3126 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 2m 57.577s 2025-10-21 17:50:01.972 3127 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 260 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/260 {"round":260,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/260/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 2m 57.578s 2025-10-21 17:50:01.973 3137 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 260 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/260 {"round":260,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/260/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 2m 57.606s 2025-10-21 17:50:02.001 3171 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/16 for round 260
node0 2m 57.608s 2025-10-21 17:50:02.003 3172 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 260
Timestamp: 2025-10-21T17:50:00.499448Z
Next consensus number: 6602
Legacy running event hash: 1e676717710e2ed70e694326c9e684ef92213f84006bdda9fc51f1022a1909916417e70fdb02448b1183b0f2b05488fa
Legacy running event mnemonic: smooth-title-canyon-sustain
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1613638831
Root hash: fc28b562ac0c0fc2a3195330bb8d9eb3315a180e6e5c37293004f98a61805cccf25b44b9a9746cf0a71d03f5c2b07e9f
(root) ConsistencyTestingToolState                       /    kitchen-fatigue-leader-weapon
    0  SingletonNode  PlatformStateService.PLATFORM_STATE  /0   spray-segment-dose-infant
    1  VirtualMap     RosterService.ROSTERS                /1   saddle-disorder-track-you
    2  SingletonNode  RosterService.ROSTER_STATE           /2   surround-giant-patient-random
    3  StringLeaf     4579450462933924634                  /3   also-year-jelly-concert
    4  StringLeaf     260                                  /4   lawn-delay-virtual-fine
node0 2m 57.616s 2025-10-21 17:50:02.011 3173 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+47+20.649029933Z_seq0_minr1_maxr501_orgn0.pces
node0 2m 57.616s 2025-10-21 17:50:02.011 3174 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 232 File: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+47+20.649029933Z_seq0_minr1_maxr501_orgn0.pces
node0 2m 57.616s 2025-10-21 17:50:02.011 3175 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 2m 57.621s 2025-10-21 17:50:02.016 3176 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 2m 57.622s 2025-10-21 17:50:02.017 3177 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 260 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/260 {"round":260,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/260/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
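The "Finished writing state" entries above embed a machine-readable JSON payload followed by its payload class name in brackets. As a hypothetical helper (no such parser exists in the platform codebase; the regex and function name are assumptions for illustration), the payload can be pulled out of a raw log line like so:

```python
import json
import re

# A "Finished writing state" line ends with:
#   {...json payload...} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
# This regex captures the JSON object and the bracketed class name at end of line.
PAYLOAD_RE = re.compile(r"(\{.*\})\s+\[([\w.]+)\]\s*$")

def parse_state_saved_payload(line):
    """Return (payload dict, payload class name), or None if the line has no payload."""
    m = PAYLOAD_RE.search(line)
    if m is None:
        return None
    return json.loads(m.group(1)), m.group(2)

line = (
    'node0 2m 57.622s 2025-10-21 17:50:02.017 3177 INFO STATE_TO_DISK '
    '<<scheduler StateSnapshotManager>> SignedStateFileWriter: '
    'Finished writing state for round 260 to disk. '
    '{"round":260,"freezeState":false,"reason":"PERIODIC_SNAPSHOT",'
    '"directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/'
    'com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/260/"} '
    '[com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]'
)
payload, cls = parse_state_saved_payload(line)
print(payload["round"], payload["reason"], cls)
```

Filtering a full log on the payload class name this way is one route to reconciling which rounds each node actually persisted.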
node3 3m 13.314s 2025-10-21 17:50:17.709 3430 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 3 to 4>> NetworkUtils: Connection broken: 3 -> 4
java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312)
    at java.base/java.io.FilterInputStream.read(FilterInputStream.java:71)
    at org.hiero.base.io.streams.AugmentedDataInputStream.read(AugmentedDataInputStream.java:57)
    at com.swirlds.platform.network.communication.states.SentKeepalive.transition(SentKeepalive.java:44)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at java.base/java.lang.Thread.run(Thread.java:1583)
node1 3m 13.315s 2025-10-21 17:50:17.710 3408 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 1 to 4>> NetworkUtils: Connection broken: 1 -> 4
java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-21T17:50:17.708030421Z
    at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:246)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-21T17:50:17.708030421Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:145)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:279)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.sendAndReceiveEvents(ShadowgraphSynchronizer.java:217)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:184)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:105)
    at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:243)
    ... 6 more
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:150)
        ... 11 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$sendEventsTheyNeed$8(SyncUtils.java:228)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 1 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312)
    at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295)
    at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144)
    at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readEventsINeed$9(SyncUtils.java:272)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:143)
    ... 11 more
node2 3m 13.315s 2025-10-21 17:50:17.710 3462 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 2 to 4>> NetworkUtils: Connection broken: 2 -> 4
java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-21T17:50:17.708360603Z
    at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:246)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-21T17:50:17.708360603Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:145)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:279)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:137)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:105)
    at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:243)
    ... 6 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readLong(DataInputStream.java:407)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readLong(AugmentedDataInputStream.java:186)
    at com.swirlds.platform.gossip.shadowgraph.SyncUtils.deserializeEventWindow(SyncUtils.java:623)
    at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readTheirTipsAndEventWindow$3(SyncUtils.java:104)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:143)
    ... 10 more
node0 3m 13.319s 2025-10-21 17:50:17.714 3470 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 0 to 4>> NetworkUtils: Connection broken: 0 -> 4
java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-21T17:50:17.710658043Z
	at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:246)
	at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
	at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
	at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
	at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
	at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
	at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-21T17:50:17.710658043Z
	at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:145)
	at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:279)
	at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.sendAndReceiveEvents(ShadowgraphSynchronizer.java:217)
	at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:184)
	at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:105)
	at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:243)
	... 6 more
	Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
		at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
		at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
		at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:150)
		... 11 more
	Caused by: java.net.SocketException: Connection or outbound has closed
		at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
		at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
		at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
		at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
		at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
		at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
		at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
		at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$sendEventsTheyNeed$8(SyncUtils.java:228)
		at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
		at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
		at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
		... 1 more
Caused by: java.net.SocketException: Connection reset
	at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
	at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
	at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
	at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
	at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
	at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
	at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
	at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
	at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
	at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
	at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
	at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
	at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325)
	at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312)
	at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295)
	at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275)
	at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144)
	at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readEventsINeed$9(SyncUtils.java:272)
	at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:143)
	... 11 more
node1 3m 57.324s 2025-10-21 17:51:01.719 4143 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 354 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 3m 57.356s 2025-10-21 17:51:01.751 4217 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 354 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 3m 57.381s 2025-10-21 17:51:01.776 4269 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 354 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 3m 57.390s 2025-10-21 17:51:01.785 4209 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 354 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 3m 57.861s 2025-10-21 17:51:02.256 4156 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 354 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/354
node1 3m 57.862s 2025-10-21 17:51:02.257 4157 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/21 for round 354
node3 3m 57.944s 2025-10-21 17:51:02.339 4212 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 354 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/354
node3 3m 57.945s 2025-10-21 17:51:02.340 4213 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/21 for round 354
node1 3m 57.948s 2025-10-21 17:51:02.343 4198 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/21 for round 354
node1 3m 57.950s 2025-10-21 17:51:02.345 4199 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 354
Timestamp: 2025-10-21T17:51:00.359961Z
Next consensus number: 8436
Legacy running event hash: 89f041ebf478bb601f07bf39df07993a2fcfd2b5f2087787b9bedbbb32b05a6808546fd62624070134d213c0973167c4
Legacy running event mnemonic: search-dice-illegal-neglect
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -145638187
Root hash: 8ee5564cfb8ecccb125563c034ff6ee633665fa7d39d70a9088b4aa478438beb12f0a14c052df974aab22c2fa2bd51b3
(root) ConsistencyTestingToolState / casino-sister-mixture-grit
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 noodle-people-panel-bring
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf 7865463118761944725 /3 bid-swim-brain-crunch
    4 StringLeaf 354 /4 track-recycle-wasp-coach
node1 3m 57.957s 2025-10-21 17:51:02.352 4200 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+47+20.959386129Z_seq0_minr1_maxr501_orgn0.pces
node1 3m 57.957s 2025-10-21 17:51:02.352 4201 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 327 File: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+47+20.959386129Z_seq0_minr1_maxr501_orgn0.pces
node1 3m 57.958s 2025-10-21 17:51:02.353 4202 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 3m 57.964s 2025-10-21 17:51:02.359 4203 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 3m 57.964s 2025-10-21 17:51:02.359 4204 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 354 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/354 {"round":354,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/354/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 3m 58.008s 2025-10-21 17:51:02.403 4220 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 354 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/354
node2 3m 58.009s 2025-10-21 17:51:02.404 4221 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/21 for round 354
node3 3m 58.025s 2025-10-21 17:51:02.420 4246 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/21 for round 354
node3 3m 58.027s 2025-10-21 17:51:02.422 4247 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 354
Timestamp: 2025-10-21T17:51:00.359961Z
Next consensus number: 8436
Legacy running event hash: 89f041ebf478bb601f07bf39df07993a2fcfd2b5f2087787b9bedbbb32b05a6808546fd62624070134d213c0973167c4
Legacy running event mnemonic: search-dice-illegal-neglect
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -145638187
Root hash: 8ee5564cfb8ecccb125563c034ff6ee633665fa7d39d70a9088b4aa478438beb12f0a14c052df974aab22c2fa2bd51b3
(root) ConsistencyTestingToolState / casino-sister-mixture-grit
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 noodle-people-panel-bring
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf 7865463118761944725 /3 bid-swim-brain-crunch
    4 StringLeaf 354 /4 track-recycle-wasp-coach
node3 3m 58.034s 2025-10-21 17:51:02.429 4248 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+47+20.831243397Z_seq0_minr1_maxr501_orgn0.pces
node3 3m 58.034s 2025-10-21 17:51:02.429 4249 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 327 File: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+47+20.831243397Z_seq0_minr1_maxr501_orgn0.pces
node3 3m 58.035s 2025-10-21 17:51:02.430 4250 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 3m 58.041s 2025-10-21 17:51:02.436 4251 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 3m 58.041s 2025-10-21 17:51:02.436 4252 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 354 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/354 {"round":354,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/354/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 3m 58.054s 2025-10-21 17:51:02.449 4272 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 354 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/354
node0 3m 58.055s 2025-10-21 17:51:02.450 4273 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/21 for round 354
node2 3m 58.106s 2025-10-21 17:51:02.501 4258 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/21 for round 354
node2 3m 58.108s 2025-10-21 17:51:02.503 4259 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 354
Timestamp: 2025-10-21T17:51:00.359961Z
Next consensus number: 8436
Legacy running event hash: 89f041ebf478bb601f07bf39df07993a2fcfd2b5f2087787b9bedbbb32b05a6808546fd62624070134d213c0973167c4
Legacy running event mnemonic: search-dice-illegal-neglect
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -145638187
Root hash: 8ee5564cfb8ecccb125563c034ff6ee633665fa7d39d70a9088b4aa478438beb12f0a14c052df974aab22c2fa2bd51b3
(root) ConsistencyTestingToolState / casino-sister-mixture-grit
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 noodle-people-panel-bring
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf 7865463118761944725 /3 bid-swim-brain-crunch
    4 StringLeaf 354 /4 track-recycle-wasp-coach
node2 3m 58.115s 2025-10-21 17:51:02.510 4260 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+47+21.032992755Z_seq0_minr1_maxr501_orgn0.pces
node2 3m 58.115s 2025-10-21 17:51:02.510 4261 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 327 File: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+47+21.032992755Z_seq0_minr1_maxr501_orgn0.pces
node2 3m 58.115s 2025-10-21 17:51:02.510 4262 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 3m 58.122s 2025-10-21 17:51:02.517 4263 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 3m 58.122s 2025-10-21 17:51:02.517 4264 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 354 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/354 {"round":354,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/354/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 3m 58.142s 2025-10-21 17:51:02.537 4310 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/21 for round 354
node0 3m 58.144s 2025-10-21 17:51:02.539 4311 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 354
Timestamp: 2025-10-21T17:51:00.359961Z
Next consensus number: 8436
Legacy running event hash: 89f041ebf478bb601f07bf39df07993a2fcfd2b5f2087787b9bedbbb32b05a6808546fd62624070134d213c0973167c4
Legacy running event mnemonic: search-dice-illegal-neglect
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -145638187
Root hash: 8ee5564cfb8ecccb125563c034ff6ee633665fa7d39d70a9088b4aa478438beb12f0a14c052df974aab22c2fa2bd51b3
(root) ConsistencyTestingToolState / casino-sister-mixture-grit
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 noodle-people-panel-bring
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf 7865463118761944725 /3 bid-swim-brain-crunch
    4 StringLeaf 354 /4 track-recycle-wasp-coach
node0 3m 58.150s 2025-10-21 17:51:02.545 4312 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+47+20.649029933Z_seq0_minr1_maxr501_orgn0.pces
node0 3m 58.151s 2025-10-21 17:51:02.546 4313 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 327 File: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+47+20.649029933Z_seq0_minr1_maxr501_orgn0.pces
node0 3m 58.151s 2025-10-21 17:51:02.546 4314 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 3m 58.157s 2025-10-21 17:51:02.552 4315 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 3m 58.157s 2025-10-21 17:51:02.552 4316 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 354 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/354 {"round":354,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/354/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 4m 57.436s 2025-10-21 17:52:01.831 5308 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 445 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 4m 57.491s 2025-10-21 17:52:01.886 5366 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 445 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 4m 57.551s 2025-10-21 17:52:01.946 5304 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 445 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 4m 57.553s 2025-10-21 17:52:01.948 5266 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 445 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 4m 57.839s 2025-10-21 17:52:02.234 5307 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 445 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/445
node3 4m 57.840s 2025-10-21 17:52:02.235 5308 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/26 for round 445
node0 4m 57.916s 2025-10-21 17:52:02.311 5379 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 445 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/445
node0 4m 57.917s 2025-10-21 17:52:02.312 5380 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/26 for round 445
node3 4m 57.929s 2025-10-21 17:52:02.324 5345 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/26 for round 445
node3 4m 57.931s 2025-10-21 17:52:02.326 5346 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 445
Timestamp: 2025-10-21T17:52:00.539213Z
Next consensus number: 9982
Legacy running event hash: ae42051cee3b9d920a6e122d83743189c2b5de5a3e0bfd57c9c02d93b24e83b746a530d5f97aa57801db9db43a159c36
Legacy running event mnemonic: daughter-phone-december-evidence
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -218991225
Root hash: 6a5c0b2fa3748740eca9f8011f231dad19a18a3693bef96e9316a3e26af63f90f424fd30288344e9e2f1043087dc2f37
(root) ConsistencyTestingToolState / entire-grass-muscle-blossom
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 hero-move-advice-arrange
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf -1308234804660663101 /3 surge-sell-sunset-wasp
    4 StringLeaf 445 /4 enter-debate-dwarf-purse
node3 4m 57.939s 2025-10-21 17:52:02.334 5347 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+47+20.831243397Z_seq0_minr1_maxr501_orgn0.pces
node3 4m 57.939s 2025-10-21 17:52:02.334 5348 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 418 File: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+47+20.831243397Z_seq0_minr1_maxr501_orgn0.pces
node3 4m 57.939s 2025-10-21 17:52:02.334 5349 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 4m 57.946s 2025-10-21 17:52:02.341 5350 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 4m 57.947s 2025-10-21 17:52:02.342 5351 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 445 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/445 {"round":445,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/445/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 4m 57.948s 2025-10-21 17:52:02.343 5352 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1
node0 4m 58.044s 2025-10-21 17:52:02.439 5421 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/26 for round 445
node2 4m 58.044s 2025-10-21 17:52:02.439 5311 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 445 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/445
node2 4m 58.045s 2025-10-21 17:52:02.440 5312 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/26 for round 445
node0 4m 58.046s 2025-10-21 17:52:02.441 5422 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 445
Timestamp: 2025-10-21T17:52:00.539213Z
Next consensus number: 9982
Legacy running event hash: ae42051cee3b9d920a6e122d83743189c2b5de5a3e0bfd57c9c02d93b24e83b746a530d5f97aa57801db9db43a159c36
Legacy running event mnemonic: daughter-phone-december-evidence
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -218991225
Root hash: 6a5c0b2fa3748740eca9f8011f231dad19a18a3693bef96e9316a3e26af63f90f424fd30288344e9e2f1043087dc2f37
(root) ConsistencyTestingToolState / entire-grass-muscle-blossom
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 hero-move-advice-arrange
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf -1308234804660663101 /3 surge-sell-sunset-wasp
    4 StringLeaf 445 /4 enter-debate-dwarf-purse
node0 4m 58.053s 2025-10-21 17:52:02.448 5423 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+47+20.649029933Z_seq0_minr1_maxr501_orgn0.pces
node0 4m 58.053s 2025-10-21 17:52:02.448 5424 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 418 File: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+47+20.649029933Z_seq0_minr1_maxr501_orgn0.pces
node0 4m 58.053s 2025-10-21 17:52:02.448 5425 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 4m 58.061s 2025-10-21 17:52:02.456 5426 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 4m 58.061s 2025-10-21 17:52:02.456 5427 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 445 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/445 {"round":445,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/445/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 4m 58.063s 2025-10-21 17:52:02.458 5428 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1
node1 4m 58.115s 2025-10-21 17:52:02.510 5269 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 445 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/445
node1 4m 58.115s 2025-10-21 17:52:02.510 5270 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/26 for round 445
node2 4m 58.151s 2025-10-21 17:52:02.546 5357 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/26 for round 445
node2 4m 58.153s 2025-10-21 17:52:02.548 5358 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 445
Timestamp: 2025-10-21T17:52:00.539213Z
Next consensus number: 9982
Legacy running event hash: ae42051cee3b9d920a6e122d83743189c2b5de5a3e0bfd57c9c02d93b24e83b746a530d5f97aa57801db9db43a159c36
Legacy running event mnemonic: daughter-phone-december-evidence
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -218991225
Root hash: 6a5c0b2fa3748740eca9f8011f231dad19a18a3693bef96e9316a3e26af63f90f424fd30288344e9e2f1043087dc2f37
(root) ConsistencyTestingToolState / entire-grass-muscle-blossom
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 hero-move-advice-arrange
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf -1308234804660663101 /3 surge-sell-sunset-wasp
    4 StringLeaf 445 /4 enter-debate-dwarf-purse
node2 4m 58.159s 2025-10-21 17:52:02.554 5359 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+47+21.032992755Z_seq0_minr1_maxr501_orgn0.pces
node2 4m 58.159s 2025-10-21 17:52:02.554 5360 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 418 File: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+47+21.032992755Z_seq0_minr1_maxr501_orgn0.pces
node2 4m 58.159s 2025-10-21 17:52:02.554 5361 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 4m 58.167s 2025-10-21 17:52:02.562 5362 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 4m 58.167s 2025-10-21 17:52:02.562 5363 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 445 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/445 {"round":445,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/445/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 4m 58.168s 2025-10-21 17:52:02.563 5364 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1
node1 4m 58.199s 2025-10-21 17:52:02.594 5311 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/26 for round 445
node1 4m 58.201s 2025-10-21 17:52:02.596 5312 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 445
Timestamp: 2025-10-21T17:52:00.539213Z
Next consensus number: 9982
Legacy running event hash: ae42051cee3b9d920a6e122d83743189c2b5de5a3e0bfd57c9c02d93b24e83b746a530d5f97aa57801db9db43a159c36
Legacy running event mnemonic: daughter-phone-december-evidence
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -218991225
Root hash: 6a5c0b2fa3748740eca9f8011f231dad19a18a3693bef96e9316a3e26af63f90f424fd30288344e9e2f1043087dc2f37
(root) ConsistencyTestingToolState / entire-grass-muscle-blossom
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 hero-move-advice-arrange
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf -1308234804660663101 /3 surge-sell-sunset-wasp
    4 StringLeaf 445 /4 enter-debate-dwarf-purse
node1 4m 58.207s 2025-10-21 17:52:02.602 5313 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+47+20.959386129Z_seq0_minr1_maxr501_orgn0.pces
node1 4m 58.208s 2025-10-21 17:52:02.603 5314 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 418
File: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+47+20.959386129Z_seq0_minr1_maxr501_orgn0.pces
node1 4m 58.208s 2025-10-21 17:52:02.603 5315 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 4m 58.215s 2025-10-21 17:52:02.610 5316 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 4m 58.216s 2025-10-21 17:52:02.611 5317 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 445 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/445 {"round":445,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/445/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 4m 58.217s 2025-10-21 17:52:02.612 5318 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1
node4 5m 56.328s 2025-10-21 17:53:00.723 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node4 5m 56.449s 2025-10-21 17:53:00.844 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node4 5m 56.473s 2025-10-21 17:53:00.868 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 5m 56.631s 2025-10-21 17:53:01.026 4 INFO STARTUP <main> Browser: The following nodes [4] are set to run locally
node4 5m 56.639s 2025-10-21 17:53:01.034 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node4 5m 56.658s 2025-10-21 17:53:01.053 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node1 5m 57.003s 2025-10-21 17:53:01.398 6310 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 534 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 5m 57.021s 2025-10-21 17:53:01.416 6370 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 534 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 5m 57.166s 2025-10-21 17:53:01.561 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node4 5m 57.167s 2025-10-21 17:53:01.562 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node2 5m 57.189s 2025-10-21 17:53:01.584 6392 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 534 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 5m 57.204s 2025-10-21 17:53:01.599 6470 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 534 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 5m 57.268s 2025-10-21 17:53:01.663 6475 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 534 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/534
node0 5m 57.269s 2025-10-21 17:53:01.664 6476 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 534
node0 5m 57.352s 2025-10-21 17:53:01.747 6519 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 534
node0 5m 57.353s 2025-10-21 17:53:01.748 6520 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 534
Timestamp: 2025-10-21T17:53:00.414510Z
Next consensus number: 11490
Legacy running event hash: a7d27c8c7ec385c5f1c2ea2ffc50c732ea1cad4e97768a933aba6ca2ccefaf03fa48e7eea49acc6a916763e11d3d4b28
Legacy running event mnemonic: alarm-muscle-such-electric
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -38090079
Root hash: 2011e2c62f5c88ff8f96d8e258da508f38f8ae9704a45af390a3b03385537fa392b672a196fd53f2ee2de23818f7fd44
(root) ConsistencyTestingToolState / acquire-point-twist-then
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 then-purse-whip-snow
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf 9199473051394834431 /3 choose-trip-seven-gap
    4 StringLeaf 534 /4 kind-book-lady-group
node0 5m 57.359s 2025-10-21 17:53:01.754 6521 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+47+20.649029933Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+52+39.364097322Z_seq1_minr474_maxr5474_orgn0.pces
node0 5m 57.359s 2025-10-21 17:53:01.754 6522 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 507
File: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+52+39.364097322Z_seq1_minr474_maxr5474_orgn0.pces
node0 5m 57.360s 2025-10-21 17:53:01.755 6523 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 5m 57.360s 2025-10-21 17:53:01.755 6524 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 5m 57.361s 2025-10-21 17:53:01.756 6525 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 534 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/534 {"round":534,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/534/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 5m 57.362s 2025-10-21 17:53:01.757 6526 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/56
node2 5m 57.423s 2025-10-21 17:53:01.818 6397 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 534 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/534
node2 5m 57.423s 2025-10-21 17:53:01.818 6398 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 534
node2 5m 57.520s 2025-10-21 17:53:01.915 6441 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 534
node2 5m 57.522s 2025-10-21 17:53:01.917 6442 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 534
Timestamp: 2025-10-21T17:53:00.414510Z
Next consensus number: 11490
Legacy running event hash: a7d27c8c7ec385c5f1c2ea2ffc50c732ea1cad4e97768a933aba6ca2ccefaf03fa48e7eea49acc6a916763e11d3d4b28
Legacy running event mnemonic: alarm-muscle-such-electric
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -38090079
Root hash: 2011e2c62f5c88ff8f96d8e258da508f38f8ae9704a45af390a3b03385537fa392b672a196fd53f2ee2de23818f7fd44
(root) ConsistencyTestingToolState / acquire-point-twist-then
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 then-purse-whip-snow
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf 9199473051394834431 /3 choose-trip-seven-gap
    4 StringLeaf 534 /4 kind-book-lady-group
node2 5m 57.527s 2025-10-21 17:53:01.922 6443 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+47+21.032992755Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+52+39.428844367Z_seq1_minr474_maxr5474_orgn0.pces
node2 5m 57.527s 2025-10-21 17:53:01.922 6444 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 507
File: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+52+39.428844367Z_seq1_minr474_maxr5474_orgn0.pces
node2 5m 57.527s 2025-10-21 17:53:01.922 6445 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 5m 57.528s 2025-10-21 17:53:01.923 6446 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 5m 57.528s 2025-10-21 17:53:01.923 6447 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 534 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/534 {"round":534,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/534/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 5m 57.530s 2025-10-21 17:53:01.925 6448 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/56
node1 5m 57.540s 2025-10-21 17:53:01.935 6313 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 534 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/534
node1 5m 57.541s 2025-10-21 17:53:01.936 6314 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 534
node3 5m 57.573s 2025-10-21 17:53:01.968 6375 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 534 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/534
node3 5m 57.574s 2025-10-21 17:53:01.969 6376 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 534
node1 5m 57.632s 2025-10-21 17:53:02.027 6349 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 534
node1 5m 57.634s 2025-10-21 17:53:02.029 6350 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 534
Timestamp: 2025-10-21T17:53:00.414510Z
Next consensus number: 11490
Legacy running event hash: a7d27c8c7ec385c5f1c2ea2ffc50c732ea1cad4e97768a933aba6ca2ccefaf03fa48e7eea49acc6a916763e11d3d4b28
Legacy running event mnemonic: alarm-muscle-such-electric
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -38090079
Root hash: 2011e2c62f5c88ff8f96d8e258da508f38f8ae9704a45af390a3b03385537fa392b672a196fd53f2ee2de23818f7fd44
(root) ConsistencyTestingToolState / acquire-point-twist-then
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 then-purse-whip-snow
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf 9199473051394834431 /3 choose-trip-seven-gap
    4 StringLeaf 534 /4 kind-book-lady-group
node1 5m 57.640s 2025-10-21 17:53:02.035 6351 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+52+39.696996159Z_seq1_minr474_maxr5474_orgn0.pces
Last file: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+47+20.959386129Z_seq0_minr1_maxr501_orgn0.pces
node1 5m 57.640s 2025-10-21 17:53:02.035 6352 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 507
File: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+52+39.696996159Z_seq1_minr474_maxr5474_orgn0.pces
node1 5m 57.640s 2025-10-21 17:53:02.035 6353 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 5m 57.641s 2025-10-21 17:53:02.036 6354 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 5m 57.642s 2025-10-21 17:53:02.037 6355 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 534 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/534 {"round":534,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/534/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 5m 57.643s 2025-10-21 17:53:02.038 6356 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/56
node3 5m 57.653s 2025-10-21 17:53:02.048 6411 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 534
node3 5m 57.655s 2025-10-21 17:53:02.050 6412 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 534
Timestamp: 2025-10-21T17:53:00.414510Z
Next consensus number: 11490
Legacy running event hash: a7d27c8c7ec385c5f1c2ea2ffc50c732ea1cad4e97768a933aba6ca2ccefaf03fa48e7eea49acc6a916763e11d3d4b28
Legacy running event mnemonic: alarm-muscle-such-electric
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -38090079
Root hash: 2011e2c62f5c88ff8f96d8e258da508f38f8ae9704a45af390a3b03385537fa392b672a196fd53f2ee2de23818f7fd44
(root) ConsistencyTestingToolState / acquire-point-twist-then
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 then-purse-whip-snow
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf 9199473051394834431 /3 choose-trip-seven-gap
    4 StringLeaf 534 /4 kind-book-lady-group
node3 5m 57.661s 2025-10-21 17:53:02.056 6413 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+52+39.703555956Z_seq1_minr474_maxr5474_orgn0.pces
Last file: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+47+20.831243397Z_seq0_minr1_maxr501_orgn0.pces
node3 5m 57.662s 2025-10-21 17:53:02.057 6414 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 507
File: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+52+39.703555956Z_seq1_minr474_maxr5474_orgn0.pces
node3 5m 57.662s 2025-10-21 17:53:02.057 6415 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 5m 57.662s 2025-10-21 17:53:02.057 6416 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 5m 57.663s 2025-10-21 17:53:02.058 6417 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 534 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/534 {"round":534,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/534/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 5m 57.664s 2025-10-21 17:53:02.059 6420 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/56
node4 5m 58.373s 2025-10-21 17:53:02.768 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1205ms
node4 5m 58.375s 2025-10-21 17:53:02.770 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node4 5m 58.380s 2025-10-21 17:53:02.775 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 5m 58.439s 2025-10-21 17:53:02.834 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node4 5m 58.516s 2025-10-21 17:53:02.911 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node4 5m 58.517s 2025-10-21 17:53:02.912 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node4 6.010m 2025-10-21 17:53:05.016 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node4 6.012m 2025-10-21 17:53:05.112 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 6.012m 2025-10-21 17:53:05.120 21 INFO STARTUP <main> StartupStateUtils: The following saved states were found on disk:
- /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/260/SignedState.swh
- /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/152/SignedState.swh
- /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/56/SignedState.swh
- /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1/SignedState.swh
node4 6.012m 2025-10-21 17:53:05.121 22 INFO STARTUP <main> StartupStateUtils: Loading latest state from disk.
node4 6.012m 2025-10-21 17:53:05.121 23 INFO STARTUP <main> StartupStateUtils: Loading signed state from disk: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/260/SignedState.swh
node4 6.012m 2025-10-21 17:53:05.126 24 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 6.012m 2025-10-21 17:53:05.133 25 INFO STATE_TO_DISK <main> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp
node4 6.015m 2025-10-21 17:53:05.285 37 INFO STARTUP <main> StartupStateUtils: Loaded state's hash is the same as when it was saved.
node4 6.015m 2025-10-21 17:53:05.288 38 INFO STARTUP <main> StartupStateUtils: Platform has loaded a saved state {"round":260,"consensusTimestamp":"2025-10-21T17:50:00.499448Z"} [com.swirlds.logging.legacy.payload.SavedStateLoadedPayload]
node4 6.015m 2025-10-21 17:53:05.291 41 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 6.015m 2025-10-21 17:53:05.293 44 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node4 6.015m 2025-10-21 17:53:05.296 45 INFO STARTUP <main> AddressBookInitializer: Using the loaded state's address book and weight values.
node4 6.015m 2025-10-21 17:53:05.303 46 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 6.015m 2025-10-21 17:53:05.305 47 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 6m 1.947s 2025-10-21 17:53:06.342 48 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26309852]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=185700, randomLong=8371595450418098385, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=27900, randomLong=-2191079299800899496, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1761160, data=35, exception=null]
OS Health Check Report - Complete (took 1023 ms)
node4 6m 1.984s 2025-10-21 17:53:06.379 49 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node4 6m 2.089s 2025-10-21 17:53:06.484 50 INFO STARTUP <main> PcesUtilities: Span compaction completed for data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+47+20.838533454Z_seq0_minr1_maxr501_orgn0.pces, new upper bound is 287
node4 6m 2.092s 2025-10-21 17:53:06.487 51 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node4 6m 2.099s 2025-10-21 17:53:06.494 52 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node4 6m 2.187s 2025-10-21 17:53:06.582 53 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Iixteg==", "port": 30124 }, { "ipAddressV4": "CoAACg==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IhvR0w==", "port": 30125 }, { "ipAddressV4": "CoAACQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Igmj1Q==", "port": 30126 }, { "ipAddressV4": "CoAAXQ==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IhwBrA==", "port": 30127 }, { "ipAddressV4": "CoAAAg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+H5iw==", "port": 30128 }, { "ipAddressV4": "CoAABg==", "port": 30128 }] }] }
node4 6m 2.209s 2025-10-21 17:53:06.604 54 INFO STARTUP <main> ConsistencyTestingToolState: State initialized with state long 4579450462933924634.
node4 6m 2.210s 2025-10-21 17:53:06.605 55 INFO STARTUP <main> ConsistencyTestingToolState: State initialized with 260 rounds handled.
node4 6m 2.211s 2025-10-21 17:53:06.606 56 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 6m 2.211s 2025-10-21 17:53:06.606 57 INFO STARTUP <main> TransactionHandlingHistory: Log file found. Parsing previous history
node4 6m 3.085s 2025-10-21 17:53:07.480 58 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 260
Timestamp: 2025-10-21T17:50:00.499448Z
Next consensus number: 6602
Legacy running event hash: 1e676717710e2ed70e694326c9e684ef92213f84006bdda9fc51f1022a1909916417e70fdb02448b1183b0f2b05488fa
Legacy running event mnemonic: smooth-title-canyon-sustain
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1613638831
Root hash: fc28b562ac0c0fc2a3195330bb8d9eb3315a180e6e5c37293004f98a61805cccf25b44b9a9746cf0a71d03f5c2b07e9f
(root) ConsistencyTestingToolState / kitchen-fatigue-leader-weapon
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 spray-segment-dose-infant
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf 4579450462933924634 /3 also-year-jelly-concert
    4 StringLeaf 260 /4 lawn-delay-virtual-fine
node4 6m 3.341s 2025-10-21 17:53:07.736 60 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 1e676717710e2ed70e694326c9e684ef92213f84006bdda9fc51f1022a1909916417e70fdb02448b1183b0f2b05488fa
node4 6m 3.355s 2025-10-21 17:53:07.750 61 INFO STARTUP <platformForkJoinThread-3> Shadowgraph: Shadowgraph starting from expiration threshold 232
node4 6m 3.364s 2025-10-21 17:53:07.759 63 INFO STARTUP <<start-node-4>> ConsistencyTestingToolMain: init called in Main for node 4.
node4 6m 3.365s 2025-10-21 17:53:07.760 64 INFO STARTUP <<start-node-4>> SwirldsPlatform: Starting platform 4
node4 6m 3.367s 2025-10-21 17:53:07.762 65 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node4 6m 3.372s 2025-10-21 17:53:07.767 66 INFO STARTUP <<start-node-4>> CycleFinder: No cyclical back pressure detected in wiring model.
node4 6m 3.375s 2025-10-21 17:53:07.770 67 INFO STARTUP <<start-node-4>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node4 6m 3.377s 2025-10-21 17:53:07.772 68 INFO STARTUP <<start-node-4>> InputWireChecks: All input wires have been bound.
node4 6m 3.382s 2025-10-21 17:53:07.777 69 INFO STARTUP <<start-node-4>> SwirldsPlatform: replaying preconsensus event stream starting at 232
node4 6m 3.390s 2025-10-21 17:53:07.785 70 INFO PLATFORM_STATUS <platformForkJoinThread-5> DefaultStatusStateMachine: Platform spent 215.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node4 6m 3.588s 2025-10-21 17:53:07.983 71 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:4 H:3cb743342dbd BR:257), num remaining: 4
node4 6m 3.589s 2025-10-21 17:53:07.984 72 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:1 H:a5a1a9a919ff BR:257), num remaining: 3
node4 6m 3.590s 2025-10-21 17:53:07.985 73 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:3 H:368d42e38a26 BR:257), num remaining: 2
node4 6m 3.590s 2025-10-21 17:53:07.985 74 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:0 H:d2fbafaeab78 BR:258), num remaining: 1
node4 6m 3.591s 2025-10-21 17:53:07.986 75 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:2 H:5352119c72e3 BR:257), num remaining: 0
node4 6m 3.735s 2025-10-21 17:53:08.130 144 INFO STARTUP <<start-node-4>> PcesReplayer: Replayed 1,316 preconsensus events with max birth round 287. These events contained 3,357 transactions. 26 rounds reached consensus spanning 14.8 seconds of consensus time. The latest round to reach consensus is round 286. Replay took 350.0 milliseconds.
node4 6m 3.737s 2025-10-21 17:53:08.132 147 INFO STARTUP <<app: appMain 4>> ConsistencyTestingToolMain: run called in Main.
node4 6m 3.739s 2025-10-21 17:53:08.134 148 INFO PLATFORM_STATUS <platformForkJoinThread-1> DefaultStatusStateMachine: Platform spent 345.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node4 6m 4.614s 2025-10-21 17:53:09.009 303 INFO PLATFORM_STATUS <platformForkJoinThread-4> DefaultStatusStateMachine: Platform spent 873.0 ms in OBSERVING. Now in BEHIND
node4 6m 4.615s 2025-10-21 17:53:09.010 304 INFO RECONNECT <platformForkJoinThread-3> ReconnectController: Starting ReconnectController
node4 6m 4.616s 2025-10-21 17:53:09.011 305 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectPlatformHelperImpl: Preparing for reconnect, stopping gossip
node4 6m 4.617s 2025-10-21 17:53:09.012 306 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectPlatformHelperImpl: Preparing for reconnect, start clearing queues
node4 6m 4.619s 2025-10-21 17:53:09.014 307 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectPlatformHelperImpl: Queues have been cleared
node4 6m 4.622s 2025-10-21 17:53:09.017 308 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectController: waiting for reconnect connection
node4 6m 4.622s 2025-10-21 17:53:09.017 309 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectController: acquired reconnect connection
node2 6m 4.852s 2025-10-21 17:53:09.247 6615 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectTeacher: Starting reconnect in the role of the sender {"receiving":false,"nodeId":2,"otherNodeId":4,"round":548} [com.swirlds.logging.legacy.payload.ReconnectStartPayload]
node2 6m 4.853s 2025-10-21 17:53:09.248 6616 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectTeacher: The following state will be sent to the learner:
Round: 548
Timestamp: 2025-10-21T17:53:07.884791167Z
Next consensus number: 11672
Legacy running event hash: 891e735c52ffaed1c7b931734e3c181d67d09d1db1e28a5f65120bbb8a1cce768c7b33c7974ad79798c23ec2046af1fb
Legacy running event mnemonic: sword-high-garment-chalk
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 221190079
Root hash: af8dd031c4485ff3649e459a869891b8d98ea700aacfe7dc3b635be6dd25439f2e2066d22920c2cc60e8f94106585233
(root) ConsistencyTestingToolState / attack-wheat-jacket-wrist
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 claw-group-fuel-actual
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf 3994182150700436948 /3 idea-noodle-vital-pledge
    4 StringLeaf 548 /4 sleep-day-gentle-spare
node2 6m 4.853s 2025-10-21 17:53:09.248 6617 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectTeacher: Sending signatures from nodes 0, 2, 3 (signing weight = 37500000000/50000000000) for state hash af8dd031c4485ff3649e459a869891b8d98ea700aacfe7dc3b635be6dd25439f2e2066d22920c2cc60e8f94106585233
node2 6m 4.853s 2025-10-21 17:53:09.248 6618 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectTeacher: Starting synchronization in the role of the sender.
node2 6m 4.858s 2025-10-21 17:53:09.253 6619 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> TeachingSynchronizer: sending tree rooted at com.swirlds.demo.consistency.ConsistencyTestingToolState with route []
node2 6m 4.867s 2025-10-21 17:53:09.262 6620 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@4dbdc33a start run()
node4 6m 4.919s 2025-10-21 17:53:09.314 310 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectSyncHelper: Starting reconnect in role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":2,"round":286} [com.swirlds.logging.legacy.payload.ReconnectStartPayload]
node4 6m 4.921s 2025-10-21 17:53:09.316 311 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectLearner: Receiving signed state signatures
node4 6m 4.924s 2025-10-21 17:53:09.319 312 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectLearner: Received signatures from nodes 0, 2, 3
node4 6m 4.929s 2025-10-21 17:53:09.324 313 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: learner calls receiveTree()
node4 6m 4.929s 2025-10-21 17:53:09.324 314 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: synchronizing tree
node4 6m 4.930s 2025-10-21 17:53:09.325 315 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: receiving tree rooted at com.swirlds.demo.consistency.ConsistencyTestingToolState with route []
node4 6m 4.940s 2025-10-21 17:53:09.335 316 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@25011785 start run()
node4 6m 4.943s 2025-10-21 17:53:09.338 317 INFO STARTUP <<work group learning-synchronizer: async-input-stream #0>> ConsistencyTestingToolState: New State Constructed.
node4 6m 5.155s 2025-10-21 17:53:09.550 339 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread finished the learning loop for the current subtree
node4 6m 5.156s 2025-10-21 17:53:09.551 340 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread closed input, output, and view for the current subtree
node4 6m 5.156s 2025-10-21 17:53:09.551 341 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@25011785 finish run()
node4 6m 5.158s 2025-10-21 17:53:09.553 342 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: received tree rooted at com.swirlds.demo.consistency.ConsistencyTestingToolState with route []
node4 6m 5.158s 2025-10-21 17:53:09.553 343 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: receiving tree rooted at com.swirlds.virtualmap.VirtualMap with route [1]
node4 6m 5.162s 2025-10-21 17:53:09.557 344 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@3da6ac63 start run()
node2 6m 5.226s 2025-10-21 17:53:09.621 6642 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@4dbdc33a finish run()
node2 6m 5.226s 2025-10-21 17:53:09.621 6643 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> TeachingSynchronizer: finished sending tree
node2 6m 5.227s 2025-10-21 17:53:09.622 6644 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> TeachingSynchronizer: sending tree rooted at com.swirlds.virtualmap.VirtualMap with route [1]
node2 6m 5.228s 2025-10-21 17:53:09.623 6645 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@50ec7742 start run()
node4 6m 5.406s 2025-10-21 17:53:09.801 345 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> ReconnectNodeRemover: setPathInformation(): firstLeafPath: 1 -> 1, lastLeafPath: 1 -> 1
node4 6m 5.407s 2025-10-21 17:53:09.802 346 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> ReconnectNodeRemover: setPathInformation(): done
node4 6m 5.410s 2025-10-21 17:53:09.805 347 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread finished the learning loop for the current subtree
node4 6m 5.411s 2025-10-21 17:53:09.806 348 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: call nodeRemover.allNodesReceived()
node4 6m 5.412s 2025-10-21 17:53:09.807 349 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> ReconnectNodeRemover: allNodesReceived()
node4 6m 5.412s 2025-10-21 17:53:09.807 350 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> ReconnectNodeRemover: allNodesReceived(): done
node4 6m 5.413s 2025-10-21 17:53:09.808 351 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: call root.endLearnerReconnect()
node4 6m 5.413s 2025-10-21 17:53:09.808 352 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call reconnectIterator.close()
node4 6m 5.413s 2025-10-21 17:53:09.808 353 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call setHashPrivate()
node2 6m 5.480s 2025-10-21 17:53:09.875 6649 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@50ec7742 finish run()
node2 6m 5.480s 2025-10-21 17:53:09.875 6650 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> TeachingSynchronizer: finished sending tree
node2 6m 5.484s 2025-10-21 17:53:09.879 6653 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectTeacher: Finished synchronization in the role of the sender.
node4 6m 5.588s 2025-10-21 17:53:09.983 363 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call postInit()
node4 6m 5.589s 2025-10-21 17:53:09.984 365 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: endLearnerReconnect() complete
node4 6m 5.589s 2025-10-21 17:53:09.984 366 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: close() complete
node4 6m 5.589s 2025-10-21 17:53:09.984 367 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread closed input, output, and view for the current subtree
node4 6m 5.589s 2025-10-21 17:53:09.984 368 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@3da6ac63 finish run()
node4 6m 5.590s 2025-10-21 17:53:09.985 369 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: received tree rooted at com.swirlds.virtualmap.VirtualMap with route [1]
node4 6m 5.590s 2025-10-21 17:53:09.985 370 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: synchronization complete
node4 6m 5.591s 2025-10-21 17:53:09.986 371 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: learner calls initialize()
node4 6m 5.591s 2025-10-21 17:53:09.986 372 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: initializing tree
node4 6m 5.591s 2025-10-21 17:53:09.986 373 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: initialization complete
node4 6m 5.592s 2025-10-21 17:53:09.987 374 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: learner calls hash()
node4 6m 5.592s 2025-10-21 17:53:09.987 375 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: hashing tree
node4 6m 5.593s 2025-10-21 17:53:09.988 376 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: hashing complete
node4 6m 5.593s 2025-10-21 17:53:09.988 377 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: learner calls logStatistics()
node4 6m 5.597s 2025-10-21 17:53:09.992 378 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: Finished synchronization {"timeInSeconds":0.66,"hashTimeInSeconds":0.0,"initializationTimeInSeconds":0.0,"totalNodes":12,"leafNodes":7,"redundantLeafNodes":4,"internalNodes":5,"redundantInternalNodes":2} [com.swirlds.logging.legacy.payload.SynchronizationCompletePayload]
node4 6m 5.597s 2025-10-21 17:53:09.992 379 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: ReconnectMapMetrics: transfersFromTeacher=12; transfersFromLearner=10; internalHashes=0; internalCleanHashes=0; internalData=0; internalCleanData=0; leafHashes=3; leafCleanHashes=3; leafData=7; leafCleanData=4
node4 6m 5.597s 2025-10-21 17:53:09.992 380 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: learner is done synchronizing
node4 6m 5.601s 2025-10-21 17:53:09.996 381 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectLearner: Reconnect data usage report {"dataMegabytes":0.006054878234863281} [com.swirlds.logging.legacy.payload.ReconnectDataUsagePayload]
node4 6m 5.606s 2025-10-21 17:53:10.001 382 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectSyncHelper: Finished reconnect in the role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":2,"round":548,"success":false} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload]
node4 6m 5.607s 2025-10-21 17:53:10.002 383 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectSyncHelper: Information for state received during reconnect:
Round: 548
Timestamp: 2025-10-21T17:53:07.884791167Z
Next consensus number: 11672
Legacy running event hash: 891e735c52ffaed1c7b931734e3c181d67d09d1db1e28a5f65120bbb8a1cce768c7b33c7974ad79798c23ec2046af1fb
Legacy running event mnemonic: sword-high-garment-chalk
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 221190079
Root hash: af8dd031c4485ff3649e459a869891b8d98ea700aacfe7dc3b635be6dd25439f2e2066d22920c2cc60e8f94106585233
(root) ConsistencyTestingToolState / attack-wheat-jacket-wrist
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 claw-group-fuel-actual
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf 3994182150700436948 /3 idea-noodle-vital-pledge
    4 StringLeaf 548 /4 sleep-day-gentle-spare
node4 6m 5.609s 2025-10-21 17:53:10.004 385 DEBUG RECONNECT <<reconnect: reconnect-controller>> ReconnectStateLoader: `loadReconnectState` : reloading state
node4 6m 5.609s 2025-10-21 17:53:10.004 386 INFO STARTUP <<reconnect: reconnect-controller>> ConsistencyTestingToolState: State initialized with state long 3994182150700436948.
node4 6m 5.610s 2025-10-21 17:53:10.005 387 INFO STARTUP <<reconnect: reconnect-controller>> ConsistencyTestingToolState: State initialized with 548 rounds handled.
node4 6m 5.610s 2025-10-21 17:53:10.005 388 INFO STARTUP <<reconnect: reconnect-controller>> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 6m 5.610s 2025-10-21 17:53:10.005 389 INFO STARTUP <<reconnect: reconnect-controller>> TransactionHandlingHistory: Log file found. Parsing previous history
node4 6m 5.634s 2025-10-21 17:53:10.029 394 INFO STATE_TO_DISK <<reconnect: reconnect-controller>> DefaultSavedStateController: Signed state from round 548 created, will eventually be written to disk, for reason: RECONNECT
node4 6m 5.635s 2025-10-21 17:53:10.030 395 INFO PLATFORM_STATUS <platformForkJoinThread-6> DefaultStatusStateMachine: Platform spent 1.0 s in BEHIND. Now in RECONNECT_COMPLETE
node4 6m 5.636s 2025-10-21 17:53:10.031 396 INFO STARTUP <platformForkJoinThread-1> Shadowgraph: Shadowgraph starting from expiration threshold 521
node4 6m 5.639s 2025-10-21 17:53:10.034 399 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 548 state to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/548
node4 6m 5.641s 2025-10-21 17:53:10.036 400 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 548
node4 6m 5.648s 2025-10-21 17:53:10.043 403 INFO EVENT_STREAM <<reconnect: reconnect-controller>> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 891e735c52ffaed1c7b931734e3c181d67d09d1db1e28a5f65120bbb8a1cce768c7b33c7974ad79798c23ec2046af1fb
node4 6m 5.651s 2025-10-21 17:53:10.046 408 INFO STARTUP <platformForkJoinThread-3> PcesFileManager: Due to recent operations on this node, the local preconsensus event stream will have a discontinuity. The last file with the old origin round is 2025-10-21T17+47+20.838533454Z_seq0_minr1_maxr287_orgn0.pces. All future files will have an origin round of 548.
node2 6m 5.675s 2025-10-21 17:53:10.070 6654 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectTeacher: Finished reconnect in the role of the sender. {"receiving":false,"nodeId":2,"otherNodeId":4,"round":548,"success":false} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload]
node4 6m 5.810s 2025-10-21 17:53:10.205 439 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 548
node4 6m 5.813s 2025-10-21 17:53:10.208 440 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 548
Timestamp: 2025-10-21T17:53:07.884791167Z
Next consensus number: 11672
Legacy running event hash: 891e735c52ffaed1c7b931734e3c181d67d09d1db1e28a5f65120bbb8a1cce768c7b33c7974ad79798c23ec2046af1fb
Legacy running event mnemonic: sword-high-garment-chalk
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 221190079
Root hash: af8dd031c4485ff3649e459a869891b8d98ea700aacfe7dc3b635be6dd25439f2e2066d22920c2cc60e8f94106585233
(root) ConsistencyTestingToolState / attack-wheat-jacket-wrist
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 claw-group-fuel-actual
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf 3994182150700436948 /3 idea-noodle-vital-pledge
    4 StringLeaf 548 /4 sleep-day-gentle-spare
node4 6m 5.861s 2025-10-21 17:53:10.256 441 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+47+20.838533454Z_seq0_minr1_maxr287_orgn0.pces
node4 6m 5.862s 2025-10-21 17:53:10.257 442 WARN STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: No preconsensus event files meeting specified criteria found to copy. Lower bound: 521
node4 6m 5.871s 2025-10-21 17:53:10.266 443 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 548 to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/548 {"round":548,"freezeState":false,"reason":"RECONNECT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/548/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 6m 5.875s 2025-10-21 17:53:10.270 444 INFO PLATFORM_STATUS <platformForkJoinThread-3> DefaultStatusStateMachine: Platform spent 238.0 ms in RECONNECT_COMPLETE. Now in CHECKING
node4 6m 6.378s 2025-10-21 17:53:10.773 445 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ]
node4 6m 6.381s 2025-10-21 17:53:10.776 446 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node4 6m 6.575s 2025-10-21 17:53:10.970 447 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:0 H:e5d2cb180007 BR:546), num remaining: 3
node4 6m 6.579s 2025-10-21 17:53:10.974 448 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:3 H:75645fab861f BR:547), num remaining: 2
node4 6m 6.580s 2025-10-21 17:53:10.975 449 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:2 H:b9071b06096f BR:546), num remaining: 1
node4 6m 6.580s 2025-10-21 17:53:10.975 450 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:1 H:af382a974b5a BR:547), num remaining: 0
node4 6m 9.152s 2025-10-21 17:53:13.547 538 INFO PLATFORM_STATUS <platformForkJoinThread-5> DefaultStatusStateMachine: Platform spent 3.3 s in CHECKING. Now in ACTIVE
node1 6m 57.273s 2025-10-21 17:54:01.668 7484 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 637 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 6m 57.434s 2025-10-21 17:54:01.829 1416 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 637 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 6m 57.450s 2025-10-21 17:54:01.845 7666 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 637 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 6m 57.480s 2025-10-21 17:54:01.875 7572 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 637 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 6m 57.614s 2025-10-21 17:54:02.009 7609 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 637 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 6m 57.735s 2025-10-21 17:54:02.130 7612 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 637 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/637
node2 6m 57.736s 2025-10-21 17:54:02.131 7613 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 637
node0 6m 57.806s 2025-10-21 17:54:02.201 7679 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 637 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/637
node0 6m 57.807s 2025-10-21 17:54:02.202 7680 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 637
node2 6m 57.822s 2025-10-21 17:54:02.217 7646 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 637
node2 6m 57.824s 2025-10-21 17:54:02.219 7647 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 637
Timestamp: 2025-10-21T17:54:00.403195383Z
Next consensus number: 13811
Legacy running event hash: fb9f0f1ad5ba3048a0c8e1ce2cc2a9792c54d7b4149f23398e3004ca1bb7e2e6b69a25756277867d502d07f47970f391
Legacy running event mnemonic: method-today-emotion-animal
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1534147287
Root hash: 604344b09ec94e44d996692a5db3095fcaec8a569658090ff69e2cc683a484acb499712fbfdba6e5b3f5ab1d2cea0f6b
(root) ConsistencyTestingToolState / anchor-noodle-cheap-silk
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 cube-citizen-april-ability
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf -2069734304459148833 /3 black-food-faculty-work
    4 StringLeaf 637 /4 around-donkey-claim-skin
node2 6m 57.830s 2025-10-21 17:54:02.225 7648 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+47+21.032992755Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+52+39.428844367Z_seq1_minr474_maxr5474_orgn0.pces
node2 6m 57.830s 2025-10-21 17:54:02.225 7649 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 608
File: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+52+39.428844367Z_seq1_minr474_maxr5474_orgn0.pces
node2 6m 57.831s 2025-10-21 17:54:02.226 7650 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 6m 57.833s 2025-10-21 17:54:02.228 7651 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 6m 57.833s 2025-10-21 17:54:02.228 7652 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 637 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/637 {"round":637,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/637/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 6m 57.835s 2025-10-21 17:54:02.230 7653 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/152
node4 6m 57.836s 2025-10-21 17:54:02.231 1419 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 637 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/637
node4 6m 57.837s 2025-10-21 17:54:02.232 1420 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/6 for round 637
node0 6m 57.890s 2025-10-21 17:54:02.285 7713 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 637
node0 6m 57.892s 2025-10-21 17:54:02.287 7714 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 637
Timestamp: 2025-10-21T17:54:00.403195383Z
Next consensus number: 13811
Legacy running event hash: fb9f0f1ad5ba3048a0c8e1ce2cc2a9792c54d7b4149f23398e3004ca1bb7e2e6b69a25756277867d502d07f47970f391
Legacy running event mnemonic: method-today-emotion-animal
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1534147287
Root hash: 604344b09ec94e44d996692a5db3095fcaec8a569658090ff69e2cc683a484acb499712fbfdba6e5b3f5ab1d2cea0f6b
(root) ConsistencyTestingToolState / anchor-noodle-cheap-silk
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 cube-citizen-april-ability
    1 VirtualMap RosterService.ROSTERS /1 saddle-disorder-track-you
    2 SingletonNode RosterService.ROSTER_STATE /2 surround-giant-patient-random
    3 StringLeaf -2069734304459148833 /3 black-food-faculty-work
    4 StringLeaf 637 /4 around-donkey-claim-skin
node0 6m 57.898s 2025-10-21 17:54:02.293 7715 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+47+20.649029933Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+52+39.364097322Z_seq1_minr474_maxr5474_orgn0.pces
node0 6m 57.901s 2025-10-21 17:54:02.296 7716 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 608
File: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+52+39.364097322Z_seq1_minr474_maxr5474_orgn0.pces
node0 6m 57.901s 2025-10-21 17:54:02.296 7717 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 6m 57.903s 2025-10-21 17:54:02.298 7718 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 6m 57.904s 2025-10-21 17:54:02.299 7719 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 637 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/637 {"round":637,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/637/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 6m 57.905s 2025-10-21 17:54:02.300 7720 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/152
node4 6m 57.954s 2025-10-21 17:54:02.349 1456 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/6 for round 637
node4 6m 57.957s 2025-10-21 17:54:02.352 1457 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 637
Timestamp: 2025-10-21T17:54:00.403195383Z
Next consensus number: 13811
Legacy running event hash: fb9f0f1ad5ba3048a0c8e1ce2cc2a9792c54d7b4149f23398e3004ca1bb7e2e6b69a25756277867d502d07f47970f391
Legacy running event mnemonic: method-today-emotion-animal
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1534147287
Root hash: 604344b09ec94e44d996692a5db3095fcaec8a569658090ff69e2cc683a484acb499712fbfdba6e5b3f5ab1d2cea0f6b
    (root) ConsistencyTestingToolState                      /    anchor-noodle-cheap-silk
      0 SingletonNode PlatformStateService.PLATFORM_STATE   /0   cube-citizen-april-ability
      1 VirtualMap RosterService.ROSTERS                    /1   saddle-disorder-track-you
      2 SingletonNode RosterService.ROSTER_STATE            /2   surround-giant-patient-random
      3 StringLeaf -2069734304459148833                     /3   black-food-faculty-work
      4 StringLeaf 637                                      /4   around-donkey-claim-skin
node4 6m 57.966s 2025-10-21 17:54:02.361 1458 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+53+10.406338739Z_seq1_minr521_maxr1021_orgn548.pces
Last file: data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+47+20.838533454Z_seq0_minr1_maxr287_orgn0.pces
node4 6m 57.967s 2025-10-21 17:54:02.362 1459 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 608
File: data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+53+10.406338739Z_seq1_minr521_maxr1021_orgn548.pces
node4 6m 57.967s 2025-10-21 17:54:02.362 1460 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 6m 57.969s 2025-10-21 17:54:02.364 1461 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 6m 57.970s 2025-10-21 17:54:02.365 1462 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 637 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/637 {"round":637,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/637/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 6m 57.971s 2025-10-21 17:54:02.366 1463 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1
node3 6m 58.030s 2025-10-21 17:54:02.425 7591 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 637 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/637
node3 6m 58.030s 2025-10-21 17:54:02.425 7592 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 637
node1 6m 58.083s 2025-10-21 17:54:02.478 7509 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 637 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/637
node1 6m 58.084s 2025-10-21 17:54:02.479 7510 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 637
node3 6m 58.122s 2025-10-21 17:54:02.517 7629 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 637
node3 6m 58.124s 2025-10-21 17:54:02.519 7630 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 637
Timestamp: 2025-10-21T17:54:00.403195383Z
Next consensus number: 13811
Legacy running event hash: fb9f0f1ad5ba3048a0c8e1ce2cc2a9792c54d7b4149f23398e3004ca1bb7e2e6b69a25756277867d502d07f47970f391
Legacy running event mnemonic: method-today-emotion-animal
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1534147287
Root hash: 604344b09ec94e44d996692a5db3095fcaec8a569658090ff69e2cc683a484acb499712fbfdba6e5b3f5ab1d2cea0f6b
    (root) ConsistencyTestingToolState                      /    anchor-noodle-cheap-silk
      0 SingletonNode PlatformStateService.PLATFORM_STATE   /0   cube-citizen-april-ability
      1 VirtualMap RosterService.ROSTERS                    /1   saddle-disorder-track-you
      2 SingletonNode RosterService.ROSTER_STATE            /2   surround-giant-patient-random
      3 StringLeaf -2069734304459148833                     /3   black-food-faculty-work
      4 StringLeaf 637                                      /4   around-donkey-claim-skin
node3 6m 58.133s 2025-10-21 17:54:02.528 7631 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+52+39.703555956Z_seq1_minr474_maxr5474_orgn0.pces
Last file: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+47+20.831243397Z_seq0_minr1_maxr501_orgn0.pces
node3 6m 58.136s 2025-10-21 17:54:02.531 7632 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 608
File: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+52+39.703555956Z_seq1_minr474_maxr5474_orgn0.pces
node3 6m 58.136s 2025-10-21 17:54:02.531 7633 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 6m 58.139s 2025-10-21 17:54:02.534 7634 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 6m 58.140s 2025-10-21 17:54:02.535 7635 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 637 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/637 {"round":637,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/637/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 6m 58.142s 2025-10-21 17:54:02.537 7636 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/152
node1 6m 58.173s 2025-10-21 17:54:02.568 7551 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 637
node1 6m 58.175s 2025-10-21 17:54:02.570 7552 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 637
Timestamp: 2025-10-21T17:54:00.403195383Z
Next consensus number: 13811
Legacy running event hash: fb9f0f1ad5ba3048a0c8e1ce2cc2a9792c54d7b4149f23398e3004ca1bb7e2e6b69a25756277867d502d07f47970f391
Legacy running event mnemonic: method-today-emotion-animal
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1534147287
Root hash: 604344b09ec94e44d996692a5db3095fcaec8a569658090ff69e2cc683a484acb499712fbfdba6e5b3f5ab1d2cea0f6b
    (root) ConsistencyTestingToolState                      /    anchor-noodle-cheap-silk
      0 SingletonNode PlatformStateService.PLATFORM_STATE   /0   cube-citizen-april-ability
      1 VirtualMap RosterService.ROSTERS                    /1   saddle-disorder-track-you
      2 SingletonNode RosterService.ROSTER_STATE            /2   surround-giant-patient-random
      3 StringLeaf -2069734304459148833                     /3   black-food-faculty-work
      4 StringLeaf 637                                      /4   around-donkey-claim-skin
node1 6m 58.181s 2025-10-21 17:54:02.576 7553 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+52+39.696996159Z_seq1_minr474_maxr5474_orgn0.pces
Last file: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+47+20.959386129Z_seq0_minr1_maxr501_orgn0.pces
node1 6m 58.184s 2025-10-21 17:54:02.579 7554 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 608
File: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+52+39.696996159Z_seq1_minr474_maxr5474_orgn0.pces
node1 6m 58.184s 2025-10-21 17:54:02.579 7555 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 6m 58.186s 2025-10-21 17:54:02.581 7556 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 6m 58.187s 2025-10-21 17:54:02.582 7557 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 637 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/637 {"round":637,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/637/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 6m 58.188s 2025-10-21 17:54:02.583 7558 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/152
node4 7m 56.976s 2025-10-21 17:55:01.371 2499 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 732 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 7m 57.043s 2025-10-21 17:55:01.438 8642 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 732 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 7m 57.120s 2025-10-21 17:55:01.515 8818 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 732 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 7m 57.151s 2025-10-21 17:55:01.546 8686 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 732 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 7m 57.175s 2025-10-21 17:55:01.570 8725 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 732 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 7m 57.411s 2025-10-21 17:55:01.806 8821 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 732 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/732
node0 7m 57.411s 2025-10-21 17:55:01.806 8822 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/41 for round 732
node0 7m 57.489s 2025-10-21 17:55:01.884 8855 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/41 for round 732
node0 7m 57.491s 2025-10-21 17:55:01.886 8856 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 732
Timestamp: 2025-10-21T17:55:00.164778Z
Next consensus number: 16326
Legacy running event hash: 42c6d14d65e4e61bf77f5e94014b02a092ef893cdbbe13a2da16fbff6e39f09c7e3a73658750128955c8725cc965cd6a
Legacy running event mnemonic: flee-stool-route-device
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 635996513
Root hash: 5013b02783ef1bd54bfc6f55f0bd8da232645876c863abea3759f4252e8aceb4da910d3d842677bd74a76433ad5c4c04
    (root) ConsistencyTestingToolState                      /    act-will-lion-kingdom
      0 SingletonNode PlatformStateService.PLATFORM_STATE   /0   solar-credit-spawn-clap
      1 VirtualMap RosterService.ROSTERS                    /1   saddle-disorder-track-you
      2 SingletonNode RosterService.ROSTER_STATE            /2   surround-giant-patient-random
      3 StringLeaf 3322135834174446534                      /3   hold-motor-claim-crazy
      4 StringLeaf 732                                      /4   basket-album-round-grab
node0 7m 57.496s 2025-10-21 17:55:01.891 8857 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+47+20.649029933Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+52+39.364097322Z_seq1_minr474_maxr5474_orgn0.pces
node0 7m 57.496s 2025-10-21 17:55:01.891 8858 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 705
File: data/saved/preconsensus-events/0/2025/10/21/2025-10-21T17+52+39.364097322Z_seq1_minr474_maxr5474_orgn0.pces
node0 7m 57.497s 2025-10-21 17:55:01.892 8859 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 7m 57.500s 2025-10-21 17:55:01.895 8689 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 732 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/732
node0 7m 57.501s 2025-10-21 17:55:01.896 8860 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 7m 57.501s 2025-10-21 17:55:01.896 8861 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 732 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/732 {"round":732,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/732/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 7m 57.501s 2025-10-21 17:55:01.896 8690 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/41 for round 732
node0 7m 57.502s 2025-10-21 17:55:01.897 8862 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/260
node4 7m 57.523s 2025-10-21 17:55:01.918 2518 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 732 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/732
node4 7m 57.524s 2025-10-21 17:55:01.919 2519 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/11 for round 732
node3 7m 57.587s 2025-10-21 17:55:01.982 8731 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/41 for round 732
node3 7m 57.589s 2025-10-21 17:55:01.984 8732 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 732
Timestamp: 2025-10-21T17:55:00.164778Z
Next consensus number: 16326
Legacy running event hash: 42c6d14d65e4e61bf77f5e94014b02a092ef893cdbbe13a2da16fbff6e39f09c7e3a73658750128955c8725cc965cd6a
Legacy running event mnemonic: flee-stool-route-device
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 635996513
Root hash: 5013b02783ef1bd54bfc6f55f0bd8da232645876c863abea3759f4252e8aceb4da910d3d842677bd74a76433ad5c4c04
    (root) ConsistencyTestingToolState                      /    act-will-lion-kingdom
      0 SingletonNode PlatformStateService.PLATFORM_STATE   /0   solar-credit-spawn-clap
      1 VirtualMap RosterService.ROSTERS                    /1   saddle-disorder-track-you
      2 SingletonNode RosterService.ROSTER_STATE            /2   surround-giant-patient-random
      3 StringLeaf 3322135834174446534                      /3   hold-motor-claim-crazy
      4 StringLeaf 732                                      /4   basket-album-round-grab
node3 7m 57.595s 2025-10-21 17:55:01.990 8733 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+52+39.703555956Z_seq1_minr474_maxr5474_orgn0.pces
Last file: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+47+20.831243397Z_seq0_minr1_maxr501_orgn0.pces
node3 7m 57.595s 2025-10-21 17:55:01.990 8734 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 705
File: data/saved/preconsensus-events/3/2025/10/21/2025-10-21T17+52+39.703555956Z_seq1_minr474_maxr5474_orgn0.pces
node3 7m 57.595s 2025-10-21 17:55:01.990 8735 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 7m 57.600s 2025-10-21 17:55:01.995 8736 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 7m 57.600s 2025-10-21 17:55:01.995 8737 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 732 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/732 {"round":732,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/732/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 7m 57.601s 2025-10-21 17:55:01.996 8738 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/260
node4 7m 57.635s 2025-10-21 17:55:02.030 2559 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/11 for round 732
node4 7m 57.637s 2025-10-21 17:55:02.032 2560 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 732
Timestamp: 2025-10-21T17:55:00.164778Z
Next consensus number: 16326
Legacy running event hash: 42c6d14d65e4e61bf77f5e94014b02a092ef893cdbbe13a2da16fbff6e39f09c7e3a73658750128955c8725cc965cd6a
Legacy running event mnemonic: flee-stool-route-device
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 635996513
Root hash: 5013b02783ef1bd54bfc6f55f0bd8da232645876c863abea3759f4252e8aceb4da910d3d842677bd74a76433ad5c4c04
    (root) ConsistencyTestingToolState                      /    act-will-lion-kingdom
      0 SingletonNode PlatformStateService.PLATFORM_STATE   /0   solar-credit-spawn-clap
      1 VirtualMap RosterService.ROSTERS                    /1   saddle-disorder-track-you
      2 SingletonNode RosterService.ROSTER_STATE            /2   surround-giant-patient-random
      3 StringLeaf 3322135834174446534                      /3   hold-motor-claim-crazy
      4 StringLeaf 732                                      /4   basket-album-round-grab
node4 7m 57.644s 2025-10-21 17:55:02.039 2561 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+53+10.406338739Z_seq1_minr521_maxr1021_orgn548.pces
Last file: data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+47+20.838533454Z_seq0_minr1_maxr287_orgn0.pces
node4 7m 57.645s 2025-10-21 17:55:02.040 2562 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 705
File: data/saved/preconsensus-events/4/2025/10/21/2025-10-21T17+53+10.406338739Z_seq1_minr521_maxr1021_orgn548.pces
node4 7m 57.645s 2025-10-21 17:55:02.040 2563 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 7m 57.647s 2025-10-21 17:55:02.042 8728 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 732 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/732
node2 7m 57.647s 2025-10-21 17:55:02.042 8729 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/41 for round 732
node4 7m 57.649s 2025-10-21 17:55:02.044 2564 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 7m 57.649s 2025-10-21 17:55:02.044 2565 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 732 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/732 {"round":732,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/732/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 7m 57.651s 2025-10-21 17:55:02.046 2566 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/56
node1 7m 57.657s 2025-10-21 17:55:02.052 8645 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 732 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/732
node1 7m 57.658s 2025-10-21 17:55:02.053 8646 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/41 for round 732
node2 7m 57.730s 2025-10-21 17:55:02.125 8774 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/41 for round 732
node2 7m 57.732s 2025-10-21 17:55:02.127 8775 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 732
Timestamp: 2025-10-21T17:55:00.164778Z
Next consensus number: 16326
Legacy running event hash: 42c6d14d65e4e61bf77f5e94014b02a092ef893cdbbe13a2da16fbff6e39f09c7e3a73658750128955c8725cc965cd6a
Legacy running event mnemonic: flee-stool-route-device
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 635996513
Root hash: 5013b02783ef1bd54bfc6f55f0bd8da232645876c863abea3759f4252e8aceb4da910d3d842677bd74a76433ad5c4c04
    (root) ConsistencyTestingToolState                      /    act-will-lion-kingdom
      0 SingletonNode PlatformStateService.PLATFORM_STATE   /0   solar-credit-spawn-clap
      1 VirtualMap RosterService.ROSTERS                    /1   saddle-disorder-track-you
      2 SingletonNode RosterService.ROSTER_STATE            /2   surround-giant-patient-random
      3 StringLeaf 3322135834174446534                      /3   hold-motor-claim-crazy
      4 StringLeaf 732                                      /4   basket-album-round-grab
node2 7m 57.738s 2025-10-21 17:55:02.133 8776 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+47+21.032992755Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+52+39.428844367Z_seq1_minr474_maxr5474_orgn0.pces
node2 7m 57.738s 2025-10-21 17:55:02.133 8777 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 705
File: data/saved/preconsensus-events/2/2025/10/21/2025-10-21T17+52+39.428844367Z_seq1_minr474_maxr5474_orgn0.pces
node1 7m 57.739s 2025-10-21 17:55:02.134 8687 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/41 for round 732
node2 7m 57.739s 2025-10-21 17:55:02.134 8778 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 7m 57.740s 2025-10-21 17:55:02.135 8688 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 732
Timestamp: 2025-10-21T17:55:00.164778Z
Next consensus number: 16326
Legacy running event hash: 42c6d14d65e4e61bf77f5e94014b02a092ef893cdbbe13a2da16fbff6e39f09c7e3a73658750128955c8725cc965cd6a
Legacy running event mnemonic: flee-stool-route-device
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 635996513
Root hash: 5013b02783ef1bd54bfc6f55f0bd8da232645876c863abea3759f4252e8aceb4da910d3d842677bd74a76433ad5c4c04
    (root) ConsistencyTestingToolState                      /    act-will-lion-kingdom
      0 SingletonNode PlatformStateService.PLATFORM_STATE   /0   solar-credit-spawn-clap
      1 VirtualMap RosterService.ROSTERS                    /1   saddle-disorder-track-you
      2 SingletonNode RosterService.ROSTER_STATE            /2   surround-giant-patient-random
      3 StringLeaf 3322135834174446534                      /3   hold-motor-claim-crazy
      4 StringLeaf 732                                      /4   basket-album-round-grab
node2 7m 57.743s 2025-10-21 17:55:02.138 8779 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 7m 57.743s 2025-10-21 17:55:02.138 8780 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 732 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/732 {"round":732,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/732/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 7m 57.744s 2025-10-21 17:55:02.139 8781 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/260
node1 7m 57.746s 2025-10-21 17:55:02.141 8689 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+52+39.696996159Z_seq1_minr474_maxr5474_orgn0.pces
Last file: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+47+20.959386129Z_seq0_minr1_maxr501_orgn0.pces
node1 7m 57.746s 2025-10-21 17:55:02.141 8690 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 705
File: data/saved/preconsensus-events/1/2025/10/21/2025-10-21T17+52+39.696996159Z_seq1_minr474_maxr5474_orgn0.pces
node1 7m 57.746s 2025-10-21 17:55:02.141 8691 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 7m 57.750s 2025-10-21 17:55:02.145 8692 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 7m 57.751s 2025-10-21 17:55:02.146 8693 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 732 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/732 {"round":732,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/732/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 7m 57.752s 2025-10-21 17:55:02.147 8694 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/260
node2 7m 58.093s 2025-10-21 17:55:02.488 8782 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith0 2 to 0>> NetworkUtils: Connection broken: 2 <- 0
java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-21T17:55:02.488167239Z
    at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:246)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-21T17:55:02.488167239Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:145)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:279)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.sendAndReceiveEvents(ShadowgraphSynchronizer.java:217)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:184)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:105)
    at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:243)
    ... 6 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312)
    at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295)
    at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144)
    at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readEventsINeed$9(SyncUtils.java:272)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:143)
    ... 11 more
node4 7m 58.100s 2025-10-21 17:55:02.495 2570 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith0 4 to 0>> NetworkUtils: Connection broken: 4 <- 0
java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-21T17:55:02.491698769Z
    at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:246)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-21T17:55:02.491698769Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:145)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:279)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.sendAndReceiveEvents(ShadowgraphSynchronizer.java:217)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:184)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:105)
    at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:243)
    ... 6 more
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:150)
        ... 11 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$sendEventsTheyNeed$8(SyncUtils.java:228)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 1 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312)
    at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295)
    at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144)
    at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readEventsINeed$9(SyncUtils.java:272)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:143)
    ... 11 more
node4 7m 58.213s 2025-10-21 17:55:02.608 2571 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith1 4 to 1>> NetworkUtils: Connection broken: 4 <- 1
java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-21T17:55:02.607988942Z
    at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:246)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-21T17:55:02.607988942Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:145)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:279)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.sendAndReceiveEvents(ShadowgraphSynchronizer.java:217)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:184)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:105)
    at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:243)
    ... 6 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312)
    at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295)
    at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144)
    at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readEventsINeed$9(SyncUtils.java:272)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:143)
    ... 11 more
node4 7m 58.251s 2025-10-21 17:55:02.646 2572 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith3 4 to 3>> NetworkUtils: Connection broken: 4 <- 3
java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312)
    at java.base/java.io.FilterInputStream.read(FilterInputStream.java:71)
    at org.hiero.base.io.streams.AugmentedDataInputStream.read(AugmentedDataInputStream.java:57)
    at com.swirlds.platform.network.communication.states.WaitForAcceptReject.transition(WaitForAcceptReject.java:48)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at java.base/java.lang.Thread.run(Thread.java:1583)
node4 7m 58.478s 2025-10-21 17:55:02.873 2573 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith2 4 to 2>> NetworkUtils: Connection broken: 4 <- 2
java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-21T17:55:02.872861628Z
    at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:246)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-21T17:55:02.872861628Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:145)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:279)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.sendAndReceiveEvents(ShadowgraphSynchronizer.java:217)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:184)
    at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:105)
    at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:243)
    ... 6 more
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:150)
        ... 11 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$sendEventsTheyNeed$8(SyncUtils.java:228)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 1 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312)
    at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295)
    at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144)
    at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readEventsINeed$9(SyncUtils.java:272)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallel(CachedPoolParallelExecutor.java:143)
    ... 11 more