node0 0.000ns 2025-10-10 18:29:08.931 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node0 86.000ms 2025-10-10 18:29:09.017 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node0 101.000ms 2025-10-10 18:29:09.032 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 170.000ms 2025-10-10 18:29:09.101 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node4 194.000ms 2025-10-10 18:29:09.125 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node0 209.000ms 2025-10-10 18:29:09.140 4 INFO STARTUP <main> Browser: The following nodes [0] are set to run locally
node0 215.000ms 2025-10-10 18:29:09.146 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node0 227.000ms 2025-10-10 18:29:09.158 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node2 260.000ms 2025-10-10 18:29:09.191 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node2 276.000ms 2025-10-10 18:29:09.207 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 281.000ms 2025-10-10 18:29:09.212 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node4 296.000ms 2025-10-10 18:29:09.227 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 387.000ms 2025-10-10 18:29:09.318 4 INFO STARTUP <main> Browser: The following nodes [2] are set to run locally
node2 393.000ms 2025-10-10 18:29:09.324 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node4 403.000ms 2025-10-10 18:29:09.334 4 INFO STARTUP <main> Browser: The following nodes [4] are set to run locally
node2 406.000ms 2025-10-10 18:29:09.337 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 409.000ms 2025-10-10 18:29:09.340 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node4 422.000ms 2025-10-10 18:29:09.353 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node1 537.000ms 2025-10-10 18:29:09.468 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node1 624.000ms 2025-10-10 18:29:09.555 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node0 637.000ms 2025-10-10 18:29:09.568 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node0 638.000ms 2025-10-10 18:29:09.569 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node1 639.000ms 2025-10-10 18:29:09.570 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node1 752.000ms 2025-10-10 18:29:09.683 4 INFO STARTUP <main> Browser: The following nodes [1] are set to run locally
node1 758.000ms 2025-10-10 18:29:09.689 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node1 771.000ms 2025-10-10 18:29:09.702 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node2 820.000ms 2025-10-10 18:29:09.751 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node2 822.000ms 2025-10-10 18:29:09.753 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node4 831.000ms 2025-10-10 18:29:09.762 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node4 832.000ms 2025-10-10 18:29:09.763 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node1 1.198s 2025-10-10 18:29:10.129 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node1 1.198s 2025-10-10 18:29:10.129 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node0 1.525s 2025-10-10 18:29:10.456 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 886ms
node0 1.534s 2025-10-10 18:29:10.465 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node0 1.537s 2025-10-10 18:29:10.468 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node0 1.589s 2025-10-10 18:29:10.520 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node0 1.649s 2025-10-10 18:29:10.580 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node0 1.650s 2025-10-10 18:29:10.581 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node4 1.676s 2025-10-10 18:29:10.607 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 844ms
node4 1.686s 2025-10-10 18:29:10.617 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node4 1.689s 2025-10-10 18:29:10.620 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 1.719s 2025-10-10 18:29:10.650 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 897ms
node4 1.728s 2025-10-10 18:29:10.659 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node2 1.730s 2025-10-10 18:29:10.661 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node2 1.733s 2025-10-10 18:29:10.664 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 1.782s 2025-10-10 18:29:10.713 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node4 1.790s 2025-10-10 18:29:10.721 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node4 1.790s 2025-10-10 18:29:10.721 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node2 1.851s 2025-10-10 18:29:10.782 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node2 1.852s 2025-10-10 18:29:10.783 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node1 2.081s 2025-10-10 18:29:11.012 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 882ms
node1 2.092s 2025-10-10 18:29:11.023 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node1 2.095s 2025-10-10 18:29:11.026 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node1 2.138s 2025-10-10 18:29:11.069 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node1 2.199s 2025-10-10 18:29:11.130 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node1 2.200s 2025-10-10 18:29:11.131 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node3 2.590s 2025-10-10 18:29:11.521 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node3 2.694s 2025-10-10 18:29:11.625 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node3 2.710s 2025-10-10 18:29:11.641 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node3 2.824s 2025-10-10 18:29:11.755 4 INFO STARTUP <main> Browser: The following nodes [3] are set to run locally
node3 2.830s 2025-10-10 18:29:11.761 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node3 2.843s 2025-10-10 18:29:11.774 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node3 3.296s 2025-10-10 18:29:12.227 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node3 3.297s 2025-10-10 18:29:12.228 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node0 3.649s 2025-10-10 18:29:12.580 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node0 3.726s 2025-10-10 18:29:12.657 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 3.728s 2025-10-10 18:29:12.659 21 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node0 3.729s 2025-10-10 18:29:12.660 22 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 3.818s 2025-10-10 18:29:12.749 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node4 3.900s 2025-10-10 18:29:12.831 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 3.901s 2025-10-10 18:29:12.832 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node4 3.903s 2025-10-10 18:29:12.834 21 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node4 3.903s 2025-10-10 18:29:12.834 22 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node2 3.989s 2025-10-10 18:29:12.920 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 3.992s 2025-10-10 18:29:12.923 21 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node2 3.992s 2025-10-10 18:29:12.923 22 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node1 4.196s 2025-10-10 18:29:13.127 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node1 4.278s 2025-10-10 18:29:13.209 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 4.280s 2025-10-10 18:29:13.211 21 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node1 4.281s 2025-10-10 18:29:13.212 22 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node3 4.386s 2025-10-10 18:29:13.317 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1089ms
node3 4.395s 2025-10-10 18:29:13.326 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node3 4.398s 2025-10-10 18:29:13.329 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node3 4.444s 2025-10-10 18:29:13.375 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node0 4.467s 2025-10-10 18:29:13.398 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 4.478s 2025-10-10 18:29:13.409 32 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node0 4.484s 2025-10-10 18:29:13.415 33 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node0 4.495s 2025-10-10 18:29:13.426 34 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 4.497s 2025-10-10 18:29:13.428 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 4.511s 2025-10-10 18:29:13.442 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node3 4.512s 2025-10-10 18:29:13.443 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node4 4.681s 2025-10-10 18:29:13.612 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 4.692s 2025-10-10 18:29:13.623 32 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node4 4.699s 2025-10-10 18:29:13.630 33 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node4 4.711s 2025-10-10 18:29:13.642 34 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 4.713s 2025-10-10 18:29:13.644 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 4.752s 2025-10-10 18:29:13.683 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 4.762s 2025-10-10 18:29:13.693 32 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node2 4.768s 2025-10-10 18:29:13.699 33 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node2 4.777s 2025-10-10 18:29:13.708 34 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 4.779s 2025-10-10 18:29:13.710 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 5.092s 2025-10-10 18:29:14.023 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 5.103s 2025-10-10 18:29:14.034 32 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node1 5.109s 2025-10-10 18:29:14.040 33 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node1 5.119s 2025-10-10 18:29:14.050 34 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 5.121s 2025-10-10 18:29:14.052 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 5.598s 2025-10-10 18:29:14.529 36 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26285965]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=182400, randomLong=3878303513686414492, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=7820, randomLong=2497838120778275759, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1298420, data=35, exception=null]
OS Health Check Report - Complete (took 1021 ms)
node0 5.628s 2025-10-10 18:29:14.559 37 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node0 5.635s 2025-10-10 18:29:14.566 38 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node0 5.638s 2025-10-10 18:29:14.569 39 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node0 5.716s 2025-10-10 18:29:14.647 40 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IoSf6w==", "port": 30124 }, { "ipAddressV4": "CoAASw==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "I9/mVQ==", "port": 30125 }, { "ipAddressV4": "CoAAVg==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "aMWZ6w==", "port": 30126 }, { "ipAddressV4": "CoAAUg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "Iht2xw==", "port": 30127 }, { "ipAddressV4": "CoAAUQ==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+97Mw==", "port": 30128 }, { "ipAddressV4": "CoAATg==", "port": 30128 }] }] }
node0 5.737s 2025-10-10 18:29:14.668 41 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/0/ConsistencyTestLog.csv
node0 5.737s 2025-10-10 18:29:14.668 42 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node0 5.751s 2025-10-10 18:29:14.682 43 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 535d3b99eee323dcaceb29f04de1a6ed68f0edef216778d67645aee352a01ea6cb8664459ab35c7424693fc099bfb44c
(root) ConsistencyTestingToolState / hill-industry-casual-knee
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate
  1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
  2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
node4 5.828s 2025-10-10 18:29:14.759 36 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26181237]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=273810, randomLong=8536605258108059953, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=10080, randomLong=6861809727760294384, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1104129, data=35, exception=null]
OS Health Check Report - Complete (took 1021 ms)
node4 5.858s 2025-10-10 18:29:14.789 37 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node4 5.865s 2025-10-10 18:29:14.796 38 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node4 5.868s 2025-10-10 18:29:14.799 39 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node2 5.896s 2025-10-10 18:29:14.827 36 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26245840]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=174520, randomLong=8179473296413779728, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=11810, randomLong=-5341231325127259475, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1038200, data=35, exception=null]
OS Health Check Report - Complete (took 1021 ms)
node2 5.925s 2025-10-10 18:29:14.856 37 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node2 5.932s 2025-10-10 18:29:14.863 38 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node2 5.935s 2025-10-10 18:29:14.866 39 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node4 5.944s 2025-10-10 18:29:14.875 40 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IoSf6w==", "port": 30124 }, { "ipAddressV4": "CoAASw==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "I9/mVQ==", "port": 30125 }, { "ipAddressV4": "CoAAVg==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "aMWZ6w==", "port": 30126 }, { "ipAddressV4": "CoAAUg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "Iht2xw==", "port": 30127 }, { "ipAddressV4": "CoAAUQ==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+97Mw==", "port": 30128 }, { "ipAddressV4": "CoAATg==", "port": 30128 }] }] }
node4 5.965s 2025-10-10 18:29:14.896 41 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 5.965s 2025-10-10 18:29:14.896 42 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node0 5.966s 2025-10-10 18:29:14.897 45 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node0 5.970s 2025-10-10 18:29:14.901 46 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node0 5.976s 2025-10-10 18:29:14.907 47 INFO STARTUP <<start-node-0>> ConsistencyTestingToolMain: init called in Main for node 0.
node0 5.976s 2025-10-10 18:29:14.907 48 INFO STARTUP <<start-node-0>> SwirldsPlatform: Starting platform 0
node0 5.977s 2025-10-10 18:29:14.908 49 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node4 5.979s 2025-10-10 18:29:14.910 43 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 535d3b99eee323dcaceb29f04de1a6ed68f0edef216778d67645aee352a01ea6cb8664459ab35c7424693fc099bfb44c
(root) ConsistencyTestingToolState / hill-industry-casual-knee
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate
  1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
  2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
node0 5.981s 2025-10-10 18:29:14.912 50 INFO STARTUP <<start-node-0>> CycleFinder: No cyclical back pressure detected in wiring model.
node0 5.982s 2025-10-10 18:29:14.913 51 INFO STARTUP <<start-node-0>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node0 5.983s 2025-10-10 18:29:14.914 52 INFO STARTUP <<start-node-0>> InputWireChecks: All input wires have been bound.
node0 5.984s 2025-10-10 18:29:14.915 53 WARN STARTUP <<start-node-0>> PcesFileTracker: No preconsensus event files available
node0 5.984s 2025-10-10 18:29:14.915 54 INFO STARTUP <<start-node-0>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node0 5.986s 2025-10-10 18:29:14.917 55 INFO STARTUP <<start-node-0>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node0 5.987s 2025-10-10 18:29:14.918 56 INFO STARTUP <<app: appMain 0>> ConsistencyTestingToolMain: run called in Main.
node0 5.989s 2025-10-10 18:29:14.920 57 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 184.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node0 5.993s 2025-10-10 18:29:14.924 58 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 3.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node2 6.014s 2025-10-10 18:29:14.945 40 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IoSf6w==", "port": 30124 }, { "ipAddressV4": "CoAASw==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "I9/mVQ==", "port": 30125 }, { "ipAddressV4": "CoAAVg==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "aMWZ6w==", "port": 30126 }, { "ipAddressV4": "CoAAUg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "Iht2xw==", "port": 30127 }, { "ipAddressV4": "CoAAUQ==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+97Mw==", "port": 30128 }, { "ipAddressV4": "CoAATg==", "port": 30128 }] }] }
node2 6.035s 2025-10-10 18:29:14.966 41 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/2/ConsistencyTestLog.csv
node2 6.036s 2025-10-10 18:29:14.967 42 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node2 6.049s 2025-10-10 18:29:14.980 43 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 535d3b99eee323dcaceb29f04de1a6ed68f0edef216778d67645aee352a01ea6cb8664459ab35c7424693fc099bfb44c
(root) ConsistencyTestingToolState / hill-industry-casual-knee
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate
  1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
  2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
node4 6.185s 2025-10-10 18:29:15.116 45 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node4 6.189s 2025-10-10 18:29:15.120 46 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node4 6.193s 2025-10-10 18:29:15.124 47 INFO STARTUP <<start-node-4>> ConsistencyTestingToolMain: init called in Main for node 4.
node4 6.194s 2025-10-10 18:29:15.125 48 INFO STARTUP <<start-node-4>> SwirldsPlatform: Starting platform 4
node4 6.195s 2025-10-10 18:29:15.126 49 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node4 6.198s 2025-10-10 18:29:15.129 50 INFO STARTUP <<start-node-4>> CycleFinder: No cyclical back pressure detected in wiring model.
node4 6.199s 2025-10-10 18:29:15.130 51 INFO STARTUP <<start-node-4>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node4 6.199s 2025-10-10 18:29:15.130 52 INFO STARTUP <<start-node-4>> InputWireChecks: All input wires have been bound.
node4 6.201s 2025-10-10 18:29:15.132 53 WARN STARTUP <<start-node-4>> PcesFileTracker: No preconsensus event files available
node4 6.201s 2025-10-10 18:29:15.132 54 INFO STARTUP <<start-node-4>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node4 6.203s 2025-10-10 18:29:15.134 55 INFO STARTUP <<start-node-4>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node4 6.205s 2025-10-10 18:29:15.136 56 INFO STARTUP <<app: appMain 4>> ConsistencyTestingToolMain: run called in Main.
node4 6.207s 2025-10-10 18:29:15.138 57 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 173.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node4 6.211s 2025-10-10 18:29:15.142 58 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 3.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node1 6.234s 2025-10-10 18:29:15.165 36 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26386140]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=176900, randomLong=-7342482488948621944, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=24850, randomLong=7733666014525187690, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1360570, data=35, exception=null]
OS Health Check Report - Complete (took 1024 ms)
node2 6.255s 2025-10-10 18:29:15.186 45 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node2 6.259s 2025-10-10 18:29:15.190 46 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node2 6.264s 2025-10-10 18:29:15.195 47 INFO STARTUP <<start-node-2>> ConsistencyTestingToolMain: init called in Main for node 2.
node2 6.264s 2025-10-10 18:29:15.195 48 INFO STARTUP <<start-node-2>> SwirldsPlatform: Starting platform 2
node1 6.265s 2025-10-10 18:29:15.196 37 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node2 6.265s 2025-10-10 18:29:15.196 49 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node2 6.268s 2025-10-10 18:29:15.199 50 INFO STARTUP <<start-node-2>> CycleFinder: No cyclical back pressure detected in wiring model.
node2 6.269s 2025-10-10 18:29:15.200 51 INFO STARTUP <<start-node-2>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node2 6.269s 2025-10-10 18:29:15.200 52 INFO STARTUP <<start-node-2>> InputWireChecks: All input wires have been bound.
node2 6.271s 2025-10-10 18:29:15.202 53 WARN STARTUP <<start-node-2>> PcesFileTracker: No preconsensus event files available
node2 6.271s 2025-10-10 18:29:15.202 54 INFO STARTUP <<start-node-2>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node1 6.273s 2025-10-10 18:29:15.204 38 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node2 6.273s 2025-10-10 18:29:15.204 55 INFO STARTUP <<start-node-2>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node2 6.274s 2025-10-10 18:29:15.205 56 INFO STARTUP <<app: appMain 2>> ConsistencyTestingToolMain: run called in Main.
node1 6.276s 2025-10-10 18:29:15.207 39 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node2 6.276s 2025-10-10 18:29:15.207 57 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 172.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node2 6.281s 2025-10-10 18:29:15.212 58 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node1 6.361s 2025-10-10 18:29:15.292 40 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IoSf6w==", "port": 30124 }, { "ipAddressV4": "CoAASw==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "I9/mVQ==", "port": 30125 }, { "ipAddressV4": "CoAAVg==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "aMWZ6w==", "port": 30126 }, { "ipAddressV4": "CoAAUg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "Iht2xw==", "port": 30127 }, { "ipAddressV4": "CoAAUQ==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+97Mw==", "port": 30128 }, { "ipAddressV4": "CoAATg==", "port": 30128 }] }] }
node1 6.381s 2025-10-10 18:29:15.312 41 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/1/ConsistencyTestLog.csv
node1 6.382s 2025-10-10 18:29:15.313 42 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node1 6.396s 2025-10-10 18:29:15.327 43 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 535d3b99eee323dcaceb29f04de1a6ed68f0edef216778d67645aee352a01ea6cb8664459ab35c7424693fc099bfb44c
(root) ConsistencyTestingToolState / hill-industry-casual-knee
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
node3 6.609s 2025-10-10 18:29:15.540 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node1 6.623s 2025-10-10 18:29:15.554 45 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node1 6.628s 2025-10-10 18:29:15.559 46 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node1 6.633s 2025-10-10 18:29:15.564 47 INFO STARTUP <<start-node-1>> ConsistencyTestingToolMain: init called in Main for node 1.
node1 6.633s 2025-10-10 18:29:15.564 48 INFO STARTUP <<start-node-1>> SwirldsPlatform: Starting platform 1
node1 6.635s 2025-10-10 18:29:15.566 49 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node1 6.638s 2025-10-10 18:29:15.569 50 INFO STARTUP <<start-node-1>> CycleFinder: No cyclical back pressure detected in wiring model.
node1 6.639s 2025-10-10 18:29:15.570 51 INFO STARTUP <<start-node-1>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node1 6.639s 2025-10-10 18:29:15.570 52 INFO STARTUP <<start-node-1>> InputWireChecks: All input wires have been bound.
node1 6.641s 2025-10-10 18:29:15.572 53 WARN STARTUP <<start-node-1>> PcesFileTracker: No preconsensus event files available
node1 6.641s 2025-10-10 18:29:15.572 54 INFO STARTUP <<start-node-1>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node1 6.643s 2025-10-10 18:29:15.574 55 INFO STARTUP <<start-node-1>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node1 6.644s 2025-10-10 18:29:15.575 56 INFO STARTUP <<app: appMain 1>> ConsistencyTestingToolMain: run called in Main.
node1 6.646s 2025-10-10 18:29:15.577 57 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 194.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node1 6.650s 2025-10-10 18:29:15.581 58 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 3.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node3 6.709s 2025-10-10 18:29:15.640 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 6.712s 2025-10-10 18:29:15.643 21 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node3 6.713s 2025-10-10 18:29:15.644 22 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node3 7.585s 2025-10-10 18:29:16.516 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 7.598s 2025-10-10 18:29:16.529 32 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node3 7.605s 2025-10-10 18:29:16.536 33 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node3 7.618s 2025-10-10 18:29:16.549 34 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 7.621s 2025-10-10 18:29:16.552 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 8.746s 2025-10-10 18:29:17.677 36 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26138930]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=180850, randomLong=-3905423158648200540, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=66410, randomLong=-4445870495668090447, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1245020, data=35, exception=null]
OS Health Check Report - Complete (took 1023 ms)
node3 8.778s 2025-10-10 18:29:17.709 37 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node3 8.787s 2025-10-10 18:29:17.718 38 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node3 8.790s 2025-10-10 18:29:17.721 39 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node3 8.882s 2025-10-10 18:29:17.813 40 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IoSf6w==", "port": 30124 }, { "ipAddressV4": "CoAASw==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "I9/mVQ==", "port": 30125 }, { "ipAddressV4": "CoAAVg==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "aMWZ6w==", "port": 30126 }, { "ipAddressV4": "CoAAUg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "Iht2xw==", "port": 30127 }, { "ipAddressV4": "CoAAUQ==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+97Mw==", "port": 30128 }, { "ipAddressV4": "CoAATg==", "port": 30128 }] }] }
node3 8.908s 2025-10-10 18:29:17.839 41 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/3/ConsistencyTestLog.csv
node3 8.909s 2025-10-10 18:29:17.840 42 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node3 8.927s 2025-10-10 18:29:17.858 43 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 535d3b99eee323dcaceb29f04de1a6ed68f0edef216778d67645aee352a01ea6cb8664459ab35c7424693fc099bfb44c
(root) ConsistencyTestingToolState / hill-industry-casual-knee
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
node0 8.990s 2025-10-10 18:29:17.921 59 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting0.csv' ]
node0 8.993s 2025-10-10 18:29:17.924 60 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node3 9.146s 2025-10-10 18:29:18.077 45 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node3 9.152s 2025-10-10 18:29:18.083 46 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node3 9.159s 2025-10-10 18:29:18.090 47 INFO STARTUP <<start-node-3>> ConsistencyTestingToolMain: init called in Main for node 3.
node3 9.159s 2025-10-10 18:29:18.090 48 INFO STARTUP <<start-node-3>> SwirldsPlatform: Starting platform 3
node3 9.161s 2025-10-10 18:29:18.092 49 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node3 9.165s 2025-10-10 18:29:18.096 50 INFO STARTUP <<start-node-3>> CycleFinder: No cyclical back pressure detected in wiring model.
node3 9.166s 2025-10-10 18:29:18.097 51 INFO STARTUP <<start-node-3>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node3 9.167s 2025-10-10 18:29:18.098 52 INFO STARTUP <<start-node-3>> InputWireChecks: All input wires have been bound.
node3 9.169s 2025-10-10 18:29:18.100 53 WARN STARTUP <<start-node-3>> PcesFileTracker: No preconsensus event files available
node3 9.169s 2025-10-10 18:29:18.100 54 INFO STARTUP <<start-node-3>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node3 9.172s 2025-10-10 18:29:18.103 55 INFO STARTUP <<start-node-3>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node3 9.173s 2025-10-10 18:29:18.104 56 INFO STARTUP <<app: appMain 3>> ConsistencyTestingToolMain: run called in Main.
node3 9.175s 2025-10-10 18:29:18.106 57 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 189.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node3 9.182s 2025-10-10 18:29:18.113 58 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 6.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node4 9.206s 2025-10-10 18:29:18.137 59 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ]
node4 9.209s 2025-10-10 18:29:18.140 60 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node2 9.276s 2025-10-10 18:29:18.207 59 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting2.csv' ]
node2 9.279s 2025-10-10 18:29:18.210 60 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node1 9.647s 2025-10-10 18:29:18.578 59 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting1.csv' ]
node1 9.650s 2025-10-10 18:29:18.581 60 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node3 12.172s 2025-10-10 18:29:21.103 59 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting3.csv' ]
node3 12.175s 2025-10-10 18:29:21.106 60 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node0 16.084s 2025-10-10 18:29:25.015 61 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node4 16.302s 2025-10-10 18:29:25.233 61 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node2 16.372s 2025-10-10 18:29:25.303 61 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node1 16.741s 2025-10-10 18:29:25.672 61 INFO PLATFORM_STATUS <platformForkJoinThread-5> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node0 17.657s 2025-10-10 18:29:26.588 62 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node1 17.695s 2025-10-10 18:29:26.626 62 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node4 17.733s 2025-10-10 18:29:26.664 62 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node2 17.779s 2025-10-10 18:29:26.710 62 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node3 17.852s 2025-10-10 18:29:26.783 61 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node0 18.094s 2025-10-10 18:29:27.025 63 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 2.0 s in CHECKING. Now in ACTIVE
node0 18.110s 2025-10-10 18:29:27.041 65 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node1 18.132s 2025-10-10 18:29:27.063 63 INFO PLATFORM_STATUS <platformForkJoinThread-5> StatusStateMachine: Platform spent 1.4 s in CHECKING. Now in ACTIVE
node1 18.149s 2025-10-10 18:29:27.080 65 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node4 18.192s 2025-10-10 18:29:27.123 63 INFO PLATFORM_STATUS <platformForkJoinThread-7> StatusStateMachine: Platform spent 1.9 s in CHECKING. Now in ACTIVE
node4 18.207s 2025-10-10 18:29:27.138 65 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node2 18.211s 2025-10-10 18:29:27.142 63 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 1.8 s in CHECKING. Now in ACTIVE
node2 18.225s 2025-10-10 18:29:27.156 65 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node3 18.294s 2025-10-10 18:29:27.225 63 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node0 18.477s 2025-10-10 18:29:27.408 80 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2
node0 18.479s 2025-10-10 18:29:27.410 81 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node1 18.491s 2025-10-10 18:29:27.422 80 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2
node1 18.493s 2025-10-10 18:29:27.424 81 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node3 18.545s 2025-10-10 18:29:27.476 78 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2
node3 18.547s 2025-10-10 18:29:27.478 79 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node2 18.550s 2025-10-10 18:29:27.481 80 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2
node2 18.552s 2025-10-10 18:29:27.483 81 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node4 18.595s 2025-10-10 18:29:27.526 80 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2
node4 18.597s 2025-10-10 18:29:27.528 82 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node0 18.730s 2025-10-10 18:29:27.661 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node0 18.733s 2025-10-10 18:29:27.664 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2
Timestamp: 2025-10-10T18:29:26.241536Z
Next consensus number: 19
Legacy running event hash: 3339ddbbd71ac65ef371549d7751f2dcc03aec723e6971fbf8bdbef5bdccf8a984fe01a200a6c7e7cab3ce81ad09b421
Legacy running event mnemonic: grunt-emerge-broom-crowd
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -799502542
Root hash: 69440eda84b9dc50619a4436bbbcf47bce620b3e8eedad707b94da71bd66a97e55917c3187d4275fefad88502725cdf4
(root) ConsistencyTestingToolState / chuckle-swap-happy-invite
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 net-aisle-midnight-between
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf 8898299034380133366 /3 nephew-deny-real-blanket
    4 StringLeaf 1 /4 wreck-whale-old-bottom
node1 18.741s 2025-10-10 18:29:27.672 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node1 18.745s 2025-10-10 18:29:27.676 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2
Timestamp: 2025-10-10T18:29:26.241536Z
Next consensus number: 19
Legacy running event hash: 3339ddbbd71ac65ef371549d7751f2dcc03aec723e6971fbf8bdbef5bdccf8a984fe01a200a6c7e7cab3ce81ad09b421
Legacy running event mnemonic: grunt-emerge-broom-crowd
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -799502542
Root hash: 69440eda84b9dc50619a4436bbbcf47bce620b3e8eedad707b94da71bd66a97e55917c3187d4275fefad88502725cdf4
(root) ConsistencyTestingToolState / chuckle-swap-happy-invite
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 net-aisle-midnight-between
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf 8898299034380133366 /3 nephew-deny-real-blanket
    4 StringLeaf 1 /4 wreck-whale-old-bottom
node0 18.768s 2025-10-10 18:29:27.699 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+29+25.050603118Z_seq0_minr1_maxr501_orgn0.pces
node0 18.769s 2025-10-10 18:29:27.700 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1
File: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+29+25.050603118Z_seq0_minr1_maxr501_orgn0.pces
node0 18.769s 2025-10-10 18:29:27.700 117 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 18.770s 2025-10-10 18:29:27.701 118 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 18.775s 2025-10-10 18:29:27.706 119 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 18.784s 2025-10-10 18:29:27.715 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+29+25.298498826Z_seq0_minr1_maxr501_orgn0.pces
node1 18.785s 2025-10-10 18:29:27.716 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1
File: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+29+25.298498826Z_seq0_minr1_maxr501_orgn0.pces
node1 18.785s 2025-10-10 18:29:27.716 117 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 18.786s 2025-10-10 18:29:27.717 118 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 18.791s 2025-10-10 18:29:27.722 119 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 18.804s 2025-10-10 18:29:27.735 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node2 18.807s 2025-10-10 18:29:27.738 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2
Timestamp: 2025-10-10T18:29:26.241536Z
Next consensus number: 19
Legacy running event hash: 3339ddbbd71ac65ef371549d7751f2dcc03aec723e6971fbf8bdbef5bdccf8a984fe01a200a6c7e7cab3ce81ad09b421
Legacy running event mnemonic: grunt-emerge-broom-crowd
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -799502542
Root hash: 69440eda84b9dc50619a4436bbbcf47bce620b3e8eedad707b94da71bd66a97e55917c3187d4275fefad88502725cdf4
(root) ConsistencyTestingToolState / chuckle-swap-happy-invite
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 net-aisle-midnight-between
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf 8898299034380133366 /3 nephew-deny-real-blanket
    4 StringLeaf 1 /4 wreck-whale-old-bottom
node2 18.844s 2025-10-10 18:29:27.775 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+29+25.332917798Z_seq0_minr1_maxr501_orgn0.pces
node2 18.845s 2025-10-10 18:29:27.776 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1
File: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+29+25.332917798Z_seq0_minr1_maxr501_orgn0.pces
node2 18.845s 2025-10-10 18:29:27.776 117 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 18.846s 2025-10-10 18:29:27.777 118 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 18.850s 2025-10-10 18:29:27.781 119 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 18.869s 2025-10-10 18:29:27.800 118 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node4 18.872s 2025-10-10 18:29:27.803 119 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2
Timestamp: 2025-10-10T18:29:26.241536Z
Next consensus number: 19
Legacy running event hash: 3339ddbbd71ac65ef371549d7751f2dcc03aec723e6971fbf8bdbef5bdccf8a984fe01a200a6c7e7cab3ce81ad09b421
Legacy running event mnemonic: grunt-emerge-broom-crowd
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -799502542
Root hash: 69440eda84b9dc50619a4436bbbcf47bce620b3e8eedad707b94da71bd66a97e55917c3187d4275cdf4 (root) ConsistencyTestingToolState / chuckle-swap-happy-invite
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 net-aisle-midnight-between
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf 8898299034380133366 /3 nephew-deny-real-blanket
    4 StringLeaf 1 /4 wreck-whale-old-bottom
node3 18.890s 2025-10-10 18:29:27.821 111 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node3 18.897s 2025-10-10 18:29:27.828 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2
Timestamp: 2025-10-10T18:29:26.241536Z
Next consensus number: 19
Legacy running event hash: 3339ddbbd71ac65ef371549d7751f2dcc03aec723e6971fbf8bdbef5bdccf8a984fe01a200a6c7e7cab3ce81ad09b421
Legacy running event mnemonic: grunt-emerge-broom-crowd
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -799502542
Root hash: 69440eda84b9dc50619a4436bbbcf47bce620b3e8eedad707b94da71bd66a97e55917c3187d4275fefad88502725cdf4
(root) ConsistencyTestingToolState / chuckle-swap-happy-invite
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 net-aisle-midnight-between
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf 8898299034380133366 /3 nephew-deny-real-blanket
    4 StringLeaf 1 /4 wreck-whale-old-bottom
node4 18.907s 2025-10-10 18:29:27.838 120 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+29+25.265589509Z_seq0_minr1_maxr501_orgn0.pces
node4 18.907s 2025-10-10 18:29:27.838 121 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1
File: data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+29+25.265589509Z_seq0_minr1_maxr501_orgn0.pces
node4 18.907s 2025-10-10 18:29:27.838 122 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 18.908s 2025-10-10 18:29:27.839 123 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 18.912s 2025-10-10 18:29:27.843 124 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 18.946s 2025-10-10 18:29:27.877 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+29+25.336866187Z_seq0_minr1_maxr501_orgn0.pces
node3 18.947s 2025-10-10 18:29:27.878 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1
File: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+29+25.336866187Z_seq0_minr1_maxr501_orgn0.pces
node3 18.947s 2025-10-10 18:29:27.878 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 18.949s 2025-10-10 18:29:27.880 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 18.954s 2025-10-10 18:29:27.885 117 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 19.269s 2025-10-10 18:29:28.200 130 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node3 20.807s 2025-10-10 18:29:29.738 171 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 1.5 s in CHECKING. Now in ACTIVE
node2 52.679s 2025-10-10 18:30:01.610 894 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 75 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 52.681s 2025-10-10 18:30:01.612 878 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 75 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 52.693s 2025-10-10 18:30:01.624 904 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 75 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 52.747s 2025-10-10 18:30:01.678 876 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 75 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 52.790s 2025-10-10 18:30:01.721 894 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 75 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 52.863s 2025-10-10 18:30:01.794 882 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 75 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/75
node3 52.864s 2025-10-10 18:30:01.795 883 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 75
node4 52.868s 2025-10-10 18:30:01.799 900 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 75 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/75
node4 52.869s 2025-10-10 18:30:01.800 901 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 75
node2 52.878s 2025-10-10 18:30:01.809 900 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 75 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/75
node2 52.879s 2025-10-10 18:30:01.810 901 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 75
node4 52.952s 2025-10-10 18:30:01.883 939 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 75
node4 52.954s 2025-10-10 18:30:01.885 940 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 75
Timestamp: 2025-10-10T18:30:00.052133Z
Next consensus number: 2664
Legacy running event hash: dc4f8da0a1ce93523bc9d9d75a69e1f228b615d06ef93acf75248be01c36ffb85c656ea3578a07d226643ee952794f1c
Legacy running event mnemonic: involve-equal-shock-noise
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -490213461
Root hash: 9f98d12bbed3c53b526952094cad1f71198e3414f69f2abe1ad1230fbdc54ee8a75ad6821c4ad37e4fe90709dace557c
(root) ConsistencyTestingToolState / artist-survey-certain-flavor
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 swift-shrimp-giraffe-luggage
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -1195486452057392285 /3 okay-tired-silent-wire
    4 StringLeaf 74 /4 seven-ozone-record-obscure
node2 52.960s 2025-10-10 18:30:01.891 939 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 75
node3 52.961s 2025-10-10 18:30:01.892 921 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 75
node2 52.963s 2025-10-10 18:30:01.894 940 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 75
Timestamp: 2025-10-10T18:30:00.052133Z
Next consensus number: 2664
Legacy running event hash: dc4f8da0a1ce93523bc9d9d75a69e1f228b615d06ef93acf75248be01c36ffb85c656ea3578a07d226643ee952794f1c
Legacy running event mnemonic: involve-equal-shock-noise
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -490213461
Root hash: 9f98d12bbed3c53b526952094cad1f71198e3414f69f2abe1ad1230fbdc54ee8a75ad6821c4ad37e4fe90709dace557c
(root) ConsistencyTestingToolState / artist-survey-certain-flavor
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 swift-shrimp-giraffe-luggage
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -1195486452057392285 /3 okay-tired-silent-wire
    4 StringLeaf 74 /4 seven-ozone-record-obscure
node4 52.963s 2025-10-10 18:30:01.894 941 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+29+25.265589509Z_seq0_minr1_maxr501_orgn0.pces
node4 52.963s 2025-10-10 18:30:01.894 942 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 48
File: data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+29+25.265589509Z_seq0_minr1_maxr501_orgn0.pces
node4 52.963s 2025-10-10 18:30:01.894 943 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 52.964s 2025-10-10 18:30:01.895 922 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 75
Timestamp: 2025-10-10T18:30:00.052133Z
Next consensus number: 2664
Legacy running event hash: dc4f8da0a1ce93523bc9d9d75a69e1f228b615d06ef93acf75248be01c36ffb85c656ea3578a07d226643ee952794f1c
Legacy running event mnemonic: involve-equal-shock-noise
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -490213461
Root hash: 9f98d12bbed3c53b526952094cad1f71198e3414f69f2abe1ad1230fbdc54ee8a75ad6821c4ad37e4fe90709dace557c
(root) ConsistencyTestingToolState / artist-survey-certain-flavor
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 swift-shrimp-giraffe-luggage
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -1195486452057392285 /3 okay-tired-silent-wire
    4 StringLeaf 74 /4 seven-ozone-record-obscure
node4 52.966s 2025-10-10 18:30:01.897 944 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 52.966s 2025-10-10 18:30:01.897 945 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 75 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/75 {"round":75,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/75/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 52.972s 2025-10-10 18:30:01.903 941 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+29+25.332917798Z_seq0_minr1_maxr501_orgn0.pces
node2 52.972s 2025-10-10 18:30:01.903 942 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 48
File: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+29+25.332917798Z_seq0_minr1_maxr501_orgn0.pces
node2 52.972s 2025-10-10 18:30:01.903 943 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 52.974s 2025-10-10 18:30:01.905 944 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 52.974s 2025-10-10 18:30:01.905 937 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+29+25.336866187Z_seq0_minr1_maxr501_orgn0.pces
node3 52.974s 2025-10-10 18:30:01.905 938 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 48
File: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+29+25.336866187Z_seq0_minr1_maxr501_orgn0.pces
node3 52.974s 2025-10-10 18:30:01.905 939 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 52.975s 2025-10-10 18:30:01.906 945 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 75 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/75 {"round":75,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/75/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 52.976s 2025-10-10 18:30:01.907 940 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 52.977s 2025-10-10 18:30:01.908 941 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 75 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/75 {"round":75,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/75/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 53.055s 2025-10-10 18:30:01.986 884 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 75 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/75
node0 53.056s 2025-10-10 18:30:01.987 910 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 75 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/75
node1 53.056s 2025-10-10 18:30:01.987 885 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 75
node0 53.057s 2025-10-10 18:30:01.988 911 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 75
node0 53.145s 2025-10-10 18:30:02.076 949 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 75
node0 53.149s 2025-10-10 18:30:02.080 950 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 75
Timestamp: 2025-10-10T18:30:00.052133Z
Next consensus number: 2664
Legacy running event hash: dc4f8da0a1ce93523bc9d9d75a69e1f228b615d06ef93acf75248be01c36ffb85c656ea3578a07d226643ee952794f1c
Legacy running event mnemonic: involve-equal-shock-noise
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -490213461
Root hash: 9f98d12bbed3c53b526952094cad1f71198e3414f69f2abe1ad1230fbdc54ee8a75ad6821c4ad37e4fe90709dace557c
(root) ConsistencyTestingToolState / artist-survey-certain-flavor
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 swift-shrimp-giraffe-luggage
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -1195486452057392285 /3 okay-tired-silent-wire
    4 StringLeaf 74 /4 seven-ozone-record-obscure
node1 53.159s 2025-10-10 18:30:02.090 923 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 75
node0 53.160s 2025-10-10 18:30:02.091 951 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+29+25.050603118Z_seq0_minr1_maxr501_orgn0.pces
node0 53.161s 2025-10-10 18:30:02.092 952 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 48
File: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+29+25.050603118Z_seq0_minr1_maxr501_orgn0.pces
node0 53.161s 2025-10-10 18:30:02.092 953 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 53.162s 2025-10-10 18:30:02.093 924 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 75
Timestamp: 2025-10-10T18:30:00.052133Z
Next consensus number: 2664
Legacy running event hash: dc4f8da0a1ce93523bc9d9d75a69e1f228b615d06ef93acf75248be01c36ffb85c656ea3578a07d226643ee952794f1c
Legacy running event mnemonic: involve-equal-shock-noise
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -490213461
Root hash: 9f98d12bbed3c53b526952094cad1f71198e3414f69f2abe1ad1230fbdc54ee8a75ad6821c4ad37e4fe90709dace557c
(root) ConsistencyTestingToolState / artist-survey-certain-flavor
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 swift-shrimp-giraffe-luggage
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -1195486452057392285 /3 okay-tired-silent-wire
    4 StringLeaf 74 /4 seven-ozone-record-obscure
node0 53.163s 2025-10-10 18:30:02.094 954 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 53.164s 2025-10-10 18:30:02.095 955 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 75 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/75 {"round":75,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/75/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 53.173s 2025-10-10 18:30:02.104 925 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+29+25.298498826Z_seq0_minr1_maxr501_orgn0.pces
node1 53.174s 2025-10-10 18:30:02.105 926 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 48
File: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+29+25.298498826Z_seq0_minr1_maxr501_orgn0.pces
node1 53.174s 2025-10-10 18:30:02.105 927 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 53.176s 2025-10-10 18:30:02.107 928 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 53.177s 2025-10-10 18:30:02.108 929 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 75 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/75 {"round":75,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/75/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 1m 52.611s 2025-10-10 18:31:01.542 2335 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 200 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 1m 52.637s 2025-10-10 18:31:01.568 2307 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 200 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 1m 52.667s 2025-10-10 18:31:01.598 2311 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 200 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 1m 52.722s 2025-10-10 18:31:01.653 2317 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 200 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 1m 52.812s 2025-10-10 18:31:01.743 2301 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 200 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 1m 52.843s 2025-10-10 18:31:01.774 2307 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 200 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/200
node3 1m 52.844s 2025-10-10 18:31:01.775 2308 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 200
node2 1m 52.856s 2025-10-10 18:31:01.787 2317 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 200 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/200
node2 1m 52.857s 2025-10-10 18:31:01.788 2318 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 200
node4 1m 52.886s 2025-10-10 18:31:01.817 2323 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 200 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/200
node4 1m 52.887s 2025-10-10 18:31:01.818 2324 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 200
node0 1m 52.915s 2025-10-10 18:31:01.846 2341 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 200 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/200
node0 1m 52.915s 2025-10-10 18:31:01.846 2342 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 200
node1 1m 52.927s 2025-10-10 18:31:01.858 2313 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 200 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/200
node1 1m 52.928s 2025-10-10 18:31:01.859 2314 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 200
node3 1m 52.946s 2025-10-10 18:31:01.877 2344 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 200
node2 1m 52.948s 2025-10-10 18:31:01.879 2362 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 200
node3 1m 52.948s 2025-10-10 18:31:01.879 2345 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 200
Timestamp: 2025-10-10T18:31:00.131466Z
Next consensus number: 7497
Legacy running event hash: d1b9fe7d259ef6d4609b6dbb0d9736d222c270e418806d3c3f864ddbc01050a7b093ae243516eaf8f6762c1cff3fcb9a
Legacy running event mnemonic: similar-corn-record-cereal
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -2017413710
Root hash: 04ba6a02420e20d8883c6f53f3b30c63bd65268800b45f82ab90b6aa012dc9b04e85c0f02f8e9cad523f958aec969005
(root) ConsistencyTestingToolState / nuclear-doctor-dove-brand
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 vicious-lunar-razor-surface
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf 2309093047419852702 /3 typical-comfort-tube-poverty
    4 StringLeaf 199 /4 parrot-twice-sugar-bleak
node2 1m 52.950s 2025-10-10 18:31:01.881 2363 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 200
Timestamp: 2025-10-10T18:31:00.131466Z
Next consensus number: 7497
Legacy running event hash: d1b9fe7d259ef6d4609b6dbb0d9736d222c270e418806d3c3f864ddbc01050a7b093ae243516eaf8f6762c1cff3fcb9a
Legacy running event mnemonic: similar-corn-record-cereal
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -2017413710
Root hash: 04ba6a02420e20d8883c6f53f3b30c63bd65268800b45f82ab90b6aa012dc9b04e85c0f02f8e9cad523f958aec969005
(root) ConsistencyTestingToolState / nuclear-doctor-dove-brand
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 vicious-lunar-razor-surface
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf 2309093047419852702 /3 typical-comfort-tube-poverty
    4 StringLeaf 199 /4 parrot-twice-sugar-bleak
node3 1m 52.956s 2025-10-10 18:31:01.887 2346 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+29+25.336866187Z_seq0_minr1_maxr501_orgn0.pces
node3 1m 52.956s 2025-10-10 18:31:01.887 2347 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 173 File: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+29+25.336866187Z_seq0_minr1_maxr501_orgn0.pces
node3 1m 52.957s 2025-10-10 18:31:01.888 2348 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 1m 52.959s 2025-10-10 18:31:01.890 2364 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+29+25.332917798Z_seq0_minr1_maxr501_orgn0.pces
node2 1m 52.960s 2025-10-10 18:31:01.891 2365 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 173 File: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+29+25.332917798Z_seq0_minr1_maxr501_orgn0.pces
node2 1m 52.960s 2025-10-10 18:31:01.891 2366 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 1m 52.962s 2025-10-10 18:31:01.893 2349 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 1m 52.963s 2025-10-10 18:31:01.894 2350 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 200 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/200 {"round":200,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/200/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 1m 52.965s 2025-10-10 18:31:01.896 2367 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 1m 52.966s 2025-10-10 18:31:01.897 2368 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 200 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/200 {"round":200,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/200/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 1m 52.987s 2025-10-10 18:31:01.918 2360 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 200
node4 1m 52.989s 2025-10-10 18:31:01.920 2361 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 200
Timestamp: 2025-10-10T18:31:00.131466Z
Next consensus number: 7497
Legacy running event hash: d1b9fe7d259ef6d4609b6dbb0d9736d222c270e418806d3c3f864ddbc01050a7b093ae243516eaf8f6762c1cff3fcb9a
Legacy running event mnemonic: similar-corn-record-cereal
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -2017413710
Root hash: 04ba6a02420e20d8883c6f53f3b30c63bd65268800b45f82ab90b6aa012dc9b04e85c0f02f8e9cad523f958aec969005
(root) ConsistencyTestingToolState / nuclear-doctor-dove-brand
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 vicious-lunar-razor-surface
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf 2309093047419852702 /3 typical-comfort-tube-poverty
    4 StringLeaf 199 /4 parrot-twice-sugar-bleak
node4 1m 52.997s 2025-10-10 18:31:01.928 2362 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+29+25.265589509Z_seq0_minr1_maxr501_orgn0.pces
node4 1m 52.997s 2025-10-10 18:31:01.928 2363 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 173 File: data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+29+25.265589509Z_seq0_minr1_maxr501_orgn0.pces
node4 1m 52.997s 2025-10-10 18:31:01.928 2364 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 1m 53.001s 2025-10-10 18:31:01.932 2386 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 200
node0 1m 53.003s 2025-10-10 18:31:01.934 2387 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 200
Timestamp: 2025-10-10T18:31:00.131466Z
Next consensus number: 7497
Legacy running event hash: d1b9fe7d259ef6d4609b6dbb0d9736d222c270e418806d3c3f864ddbc01050a7b093ae243516eaf8f6762c1cff3fcb9a
Legacy running event mnemonic: similar-corn-record-cereal
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -2017413710
Root hash: 04ba6a02420e20d8883c6f53f3b30c63bd65268800b45f82ab90b6aa012dc9b04e85c0f02f8e9cad523f958aec969005
(root) ConsistencyTestingToolState / nuclear-doctor-dove-brand
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 vicious-lunar-razor-surface
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf 2309093047419852702 /3 typical-comfort-tube-poverty
    4 StringLeaf 199 /4 parrot-twice-sugar-bleak
node4 1m 53.003s 2025-10-10 18:31:01.934 2373 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 1m 53.003s 2025-10-10 18:31:01.934 2374 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 200 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/200 {"round":200,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/200/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 1m 53.014s 2025-10-10 18:31:01.945 2388 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+29+25.050603118Z_seq0_minr1_maxr501_orgn0.pces
node0 1m 53.015s 2025-10-10 18:31:01.946 2389 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 173 File: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+29+25.050603118Z_seq0_minr1_maxr501_orgn0.pces
node0 1m 53.015s 2025-10-10 18:31:01.946 2390 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 1m 53.019s 2025-10-10 18:31:01.950 2350 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 200
node0 1m 53.021s 2025-10-10 18:31:01.952 2391 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 1m 53.021s 2025-10-10 18:31:01.952 2392 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 200 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/200 {"round":200,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/200/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 1m 53.021s 2025-10-10 18:31:01.952 2351 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 200
Timestamp: 2025-10-10T18:31:00.131466Z
Next consensus number: 7497
Legacy running event hash: d1b9fe7d259ef6d4609b6dbb0d9736d222c270e418806d3c3f864ddbc01050a7b093ae243516eaf8f6762c1cff3fcb9a
Legacy running event mnemonic: similar-corn-record-cereal
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -2017413710
Root hash: 04ba6a02420e20d8883c6f53f3b30c63bd65268800b45f82ab90b6aa012dc9b04e85c0f02f8e9cad523f958aec969005
(root) ConsistencyTestingToolState / nuclear-doctor-dove-brand
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 vicious-lunar-razor-surface
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf 2309093047419852702 /3 typical-comfort-tube-poverty
    4 StringLeaf 199 /4 parrot-twice-sugar-bleak
node1 1m 53.029s 2025-10-10 18:31:01.960 2352 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+29+25.298498826Z_seq0_minr1_maxr501_orgn0.pces
node1 1m 53.030s 2025-10-10 18:31:01.961 2353 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 173 File: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+29+25.298498826Z_seq0_minr1_maxr501_orgn0.pces
node1 1m 53.030s 2025-10-10 18:31:01.961 2354 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 1m 53.035s 2025-10-10 18:31:01.966 2355 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 1m 53.036s 2025-10-10 18:31:01.967 2356 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 200 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/200 {"round":200,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/200/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 2m 52.327s 2025-10-10 18:32:01.258 3768 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 329 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 2m 52.347s 2025-10-10 18:32:01.278 3738 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 329 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 2m 52.375s 2025-10-10 18:32:01.306 3754 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 329 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 2m 52.377s 2025-10-10 18:32:01.308 3756 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 329 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 2m 52.409s 2025-10-10 18:32:01.340 3778 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 329 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 2m 52.554s 2025-10-10 18:32:01.485 3759 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 329 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/329
node2 2m 52.554s 2025-10-10 18:32:01.485 3760 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 329
node3 2m 52.659s 2025-10-10 18:32:01.590 3757 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 329 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/329
node3 2m 52.660s 2025-10-10 18:32:01.591 3758 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 329
node1 2m 52.662s 2025-10-10 18:32:01.593 3741 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 329 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/329
node1 2m 52.663s 2025-10-10 18:32:01.594 3742 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 329
node4 2m 52.688s 2025-10-10 18:32:01.619 3771 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 329 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/329
node4 2m 52.689s 2025-10-10 18:32:01.620 3772 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 329
node2 2m 52.702s 2025-10-10 18:32:01.633 3799 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 329
node2 2m 52.704s 2025-10-10 18:32:01.635 3800 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 329
Timestamp: 2025-10-10T18:32:00.314128929Z
Next consensus number: 12251
Legacy running event hash: dbec02f377f6f738cacc5be758c675781abe512fc88e6e048a14eb9a2c51614b81e997a030f0880ece468bb852876629
Legacy running event mnemonic: little-fury-yard-boil
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1686091843
Root hash: 73888379f5f72b8993e8157d4ec32d5750860fd4f5f90f7c723ef6151f8975a36f929ea4164895fdb2d7b8720b1a4c47
(root) ConsistencyTestingToolState / illegal-movie-donkey-illness
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 hard-return-concert-fold
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -8099790215196787145 /3 retire-toddler-magnet-mass
    4 StringLeaf 328 /4 leaf-viable-dream-inch
node2 2m 52.712s 2025-10-10 18:32:01.643 3801 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+29+25.332917798Z_seq0_minr1_maxr501_orgn0.pces
node2 2m 52.712s 2025-10-10 18:32:01.643 3802 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 301 File: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+29+25.332917798Z_seq0_minr1_maxr501_orgn0.pces
node2 2m 52.712s 2025-10-10 18:32:01.643 3803 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 2m 52.721s 2025-10-10 18:32:01.652 3804 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 2m 52.721s 2025-10-10 18:32:01.652 3805 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 329 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/329 {"round":329,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/329/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 2m 52.728s 2025-10-10 18:32:01.659 3781 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 329 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/329
node0 2m 52.729s 2025-10-10 18:32:01.660 3782 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 329
node1 2m 52.750s 2025-10-10 18:32:01.681 3773 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 329
node1 2m 52.752s 2025-10-10 18:32:01.683 3774 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 329
Timestamp: 2025-10-10T18:32:00.314128929Z
Next consensus number: 12251
Legacy running event hash: dbec02f377f6f738cacc5be758c675781abe512fc88e6e048a14eb9a2c51614b81e997a030f0880ece468bb852876629
Legacy running event mnemonic: little-fury-yard-boil
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1686091843
Root hash: 73888379f5f72b8993e8157d4ec32d5750860fd4f5f90f7c723ef6151f8975a36f929ea4164895fdb2d7b8720b1a4c47
(root) ConsistencyTestingToolState / illegal-movie-donkey-illness
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 hard-return-concert-fold
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -8099790215196787145 /3 retire-toddler-magnet-mass
    4 StringLeaf 328 /4 leaf-viable-dream-inch
node1 2m 52.758s 2025-10-10 18:32:01.689 3775 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+29+25.298498826Z_seq0_minr1_maxr501_orgn0.pces
node1 2m 52.759s 2025-10-10 18:32:01.690 3776 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 301 File: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+29+25.298498826Z_seq0_minr1_maxr501_orgn0.pces
node1 2m 52.759s 2025-10-10 18:32:01.690 3777 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 2m 52.767s 2025-10-10 18:32:01.698 3797 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 329
node1 2m 52.768s 2025-10-10 18:32:01.699 3781 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 2m 52.768s 2025-10-10 18:32:01.699 3782 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 329 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/329 {"round":329,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/329/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 2m 52.770s 2025-10-10 18:32:01.701 3798 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 329
Timestamp: 2025-10-10T18:32:00.314128929Z
Next consensus number: 12251
Legacy running event hash: dbec02f377f6f738cacc5be758c675781abe512fc88e6e048a14eb9a2c51614b81e997a030f0880ece468bb852876629
Legacy running event mnemonic: little-fury-yard-boil
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1686091843
Root hash: 73888379f5f72b8993e8157d4ec32d5750860fd4f5f90f7c723ef6151f8975a36f929ea4164895fdb2d7b8720b1a4c47
(root) ConsistencyTestingToolState / illegal-movie-donkey-illness
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 hard-return-concert-fold
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -8099790215196787145 /3 retire-toddler-magnet-mass
    4 StringLeaf 328 /4 leaf-viable-dream-inch
node3 2m 52.778s 2025-10-10 18:32:01.709 3799 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+29+25.336866187Z_seq0_minr1_maxr501_orgn0.pces
node3 2m 52.779s 2025-10-10 18:32:01.710 3800 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 301 File: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+29+25.336866187Z_seq0_minr1_maxr501_orgn0.pces
node3 2m 52.779s 2025-10-10 18:32:01.710 3801 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 2m 52.780s 2025-10-10 18:32:01.711 3803 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 329
node4 2m 52.783s 2025-10-10 18:32:01.714 3804 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 329
Timestamp: 2025-10-10T18:32:00.314128929Z
Next consensus number: 12251
Legacy running event hash: dbec02f377f6f738cacc5be758c675781abe512fc88e6e048a14eb9a2c51614b81e997a030f0880ece468bb852876629
Legacy running event mnemonic: little-fury-yard-boil
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1686091843
Root hash: 73888379f5f72b8993e8157d4ec32d5750860fd4f5f90f7c723ef6151f8975a36f929ea4164895fdb2d7b8720b1a4c47
(root) ConsistencyTestingToolState / illegal-movie-donkey-illness
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 hard-return-concert-fold
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -8099790215196787145 /3 retire-toddler-magnet-mass
    4 StringLeaf 328 /4 leaf-viable-dream-inch
node3 2m 52.789s 2025-10-10 18:32:01.720 3802 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 2m 52.790s 2025-10-10 18:32:01.721 3803 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 329 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/329 {"round":329,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/329/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 2m 52.791s 2025-10-10 18:32:01.722 3805 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+29+25.265589509Z_seq0_minr1_maxr501_orgn0.pces
node4 2m 52.791s 2025-10-10 18:32:01.722 3806 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 301 File: data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+29+25.265589509Z_seq0_minr1_maxr501_orgn0.pces
node4 2m 52.791s 2025-10-10 18:32:01.722 3807 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 2m 52.799s 2025-10-10 18:32:01.730 3808 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 2m 52.800s 2025-10-10 18:32:01.731 3809 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 329 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/329 {"round":329,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/329/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 2m 52.819s 2025-10-10 18:32:01.750 3821 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 329
node0 2m 52.821s 2025-10-10 18:32:01.752 3822 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 329
Timestamp: 2025-10-10T18:32:00.314128929Z
Next consensus number: 12251
Legacy running event hash: dbec02f377f6f738cacc5be758c675781abe512fc88e6e048a14eb9a2c51614b81e997a030f0880ece468bb852876629
Legacy running event mnemonic: little-fury-yard-boil
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1686091843
Root hash: 73888379f5f72b8993e8157d4ec32d5750860fd4f5f90f7c723ef6151f8975a36f929ea4164895fdb2d7b8720b1a4c47
(root) ConsistencyTestingToolState / illegal-movie-donkey-illness
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 hard-return-concert-fold
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -8099790215196787145 /3 retire-toddler-magnet-mass
    4 StringLeaf 328 /4 leaf-viable-dream-inch
node0 2m 52.827s 2025-10-10 18:32:01.758 3823 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+29+25.050603118Z_seq0_minr1_maxr501_orgn0.pces
node0 2m 52.828s 2025-10-10 18:32:01.759 3824 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 301 File: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+29+25.050603118Z_seq0_minr1_maxr501_orgn0.pces
node0 2m 52.828s 2025-10-10 18:32:01.759 3825 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 2m 52.836s 2025-10-10 18:32:01.767 3826 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 2m 52.837s 2025-10-10 18:32:01.768 3827 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 329 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/329 {"round":329,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/329/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 3m 51.898s 2025-10-10 18:33:00.829 5249 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 464 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 3m 51.947s 2025-10-10 18:33:00.878 5287 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 464 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 3m 51.956s 2025-10-10 18:33:00.887 5305 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 464 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 3m 51.970s 2025-10-10 18:33:00.901 5291 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 464 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 3m 52.097s 2025-10-10 18:33:01.028 5290 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 464 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/464
node2 3m 52.098s 2025-10-10 18:33:01.029 5291 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 464
node0 3m 52.099s 2025-10-10 18:33:01.030 5294 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 464 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/464
node0 3m 52.099s 2025-10-10 18:33:01.030 5295 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 464
node1 3m 52.133s 2025-10-10 18:33:01.064 5254 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 464 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/464
node1 3m 52.134s 2025-10-10 18:33:01.065 5255 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 464
node3 3m 52.168s 2025-10-10 18:33:01.099 5308 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 464 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/464
node3 3m 52.169s 2025-10-10 18:33:01.100 5309 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 464
node2 3m 52.188s 2025-10-10 18:33:01.119 5324 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 464
node0 3m 52.189s 2025-10-10 18:33:01.120 5342 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 464
node2 3m 52.189s 2025-10-10 18:33:01.120 5325 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 464
Timestamp: 2025-10-10T18:33:00.001247Z
Next consensus number: 16034
Legacy running event hash: abcd784c428b93e37999649ad1f53cad8200a9526d0cc0d8c99a17417561bcbec704c20dcf2184db5210d7f63d902c8f
Legacy running event mnemonic: major-another-fabric-lonely
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1981440566
Root hash: e2b7a62d7c0ff693d017210e64278e317e727e9e10d35c883281234339c0100878979c01b8e85abf8651a8bd3f78a683
(root) ConsistencyTestingToolState / powder-denial-limit-color
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 dove-office-voice-metal
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -7127992840690510539 /3 toward-grant-empty-grain
    4 StringLeaf 463 /4 super-vanish-swim-crouch
node0 3m 52.191s 2025-10-10 18:33:01.122 5343 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 464
Timestamp: 2025-10-10T18:33:00.001247Z
Next consensus number: 16034
Legacy running event hash: abcd784c428b93e37999649ad1f53cad8200a9526d0cc0d8c99a17417561bcbec704c20dcf2184db5210d7f63d902c8f
Legacy running event mnemonic: major-another-fabric-lonely
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1981440566
Root hash: e2b7a62d7c0ff693d017210e64278e317e727e9e10d35c883281234339c0100878979c01b8e85abf8651a8bd3f78a683
(root) ConsistencyTestingToolState / powder-denial-limit-color
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 dove-office-voice-metal
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -7127992840690510539 /3 toward-grant-empty-grain
    4 StringLeaf 463 /4 super-vanish-swim-crouch
node0 3m 52.197s 2025-10-10 18:33:01.128 5344 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+29+25.050603118Z_seq0_minr1_maxr501_orgn0.pces
node0 3m 52.198s 2025-10-10 18:33:01.129 5345 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 437 File: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+29+25.050603118Z_seq0_minr1_maxr501_orgn0.pces
node0 3m 52.198s 2025-10-10 18:33:01.129 5346 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 3m 52.198s 2025-10-10 18:33:01.129 5326 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+29+25.332917798Z_seq0_minr1_maxr501_orgn0.pces
node2 3m 52.198s 2025-10-10 18:33:01.129 5327 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 437 File: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+29+25.332917798Z_seq0_minr1_maxr501_orgn0.pces
node2 3m 52.198s 2025-10-10 18:33:01.129 5328 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 3m 52.209s 2025-10-10 18:33:01.140 5347 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 3m 52.209s 2025-10-10 18:33:01.140 5348 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 464 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/464 {"round":464,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/464/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 3m 52.209s 2025-10-10 18:33:01.140 5329 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 3m 52.210s 2025-10-10 18:33:01.141 5330 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 464 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/464 {"round":464,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/464/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 3m 52.269s 2025-10-10 18:33:01.200 5342 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 464
node3 3m 52.271s 2025-10-10 18:33:01.202 5343 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 464
Timestamp: 2025-10-10T18:33:00.001247Z
Next consensus number: 16034
Legacy running event hash: abcd784c428b93e37999649ad1f53cad8200a9526d0cc0d8c99a17417561bcbec704c20dcf2184db5210d7f63d902c8f
Legacy running event mnemonic: major-another-fabric-lonely
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1981440566
Root hash: e2b7a62d7c0ff693d017210e64278e317e727e9e10d35c883281234339c0100878979c01b8e85abf8651a8bd3f78a683
(root) ConsistencyTestingToolState / powder-denial-limit-color
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 dove-office-voice-metal
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -7127992840690510539 /3 toward-grant-empty-grain
    4 StringLeaf 463 /4 super-vanish-swim-crouch
node1 3m 52.278s 2025-10-10 18:33:01.209 5302 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 464
node1 3m 52.280s 2025-10-10 18:33:01.211 5303 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 464
Timestamp: 2025-10-10T18:33:00.001247Z
Next consensus number: 16034
Legacy running event hash: abcd784c428b93e37999649ad1f53cad8200a9526d0cc0d8c99a17417561bcbec704c20dcf2184db5210d7f63d902c8f
Legacy running event mnemonic: major-another-fabric-lonely
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1981440566
Root hash: e2b7a62d7c0ff693d017210e64278e317e727e9e10d35c883281234339c0100878979c01b8e85abf8651a8bd3f78a683
(root) ConsistencyTestingToolState / powder-denial-limit-color
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 dove-office-voice-metal
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -7127992840690510539 /3 toward-grant-empty-grain
    4 StringLeaf 463 /4 super-vanish-swim-crouch
node3 3m 52.281s 2025-10-10 18:33:01.212 5344 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+29+25.336866187Z_seq0_minr1_maxr501_orgn0.pces
node3 3m 52.281s 2025-10-10 18:33:01.212 5345 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 437 File: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+29+25.336866187Z_seq0_minr1_maxr501_orgn0.pces
node3 3m 52.282s 2025-10-10 18:33:01.213 5346 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 3m 52.289s 2025-10-10 18:33:01.220 5304 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+29+25.298498826Z_seq0_minr1_maxr501_orgn0.pces
node1 3m 52.289s 2025-10-10 18:33:01.220 5305 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 437 File: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+29+25.298498826Z_seq0_minr1_maxr501_orgn0.pces
node1 3m 52.289s 2025-10-10 18:33:01.220 5306 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 3m 52.293s 2025-10-10 18:33:01.224 5347 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 3m 52.294s 2025-10-10 18:33:01.225 5348 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 464 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/464 {"round":464,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/464/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 3m 52.301s 2025-10-10 18:33:01.232 5307 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 3m 52.301s 2025-10-10 18:33:01.232 5308 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 464 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/464 {"round":464,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/464/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 4m 52.166s 2025-10-10 18:34:01.097 6884 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 603 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 4m 52.166s 2025-10-10 18:34:01.097 6904 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 603 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 4m 52.252s 2025-10-10 18:34:01.183 6878 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 603 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 4m 52.317s 2025-10-10 18:34:01.248 6868 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 603 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 4m 52.411s 2025-10-10 18:34:01.342 6871 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 603 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/603
node2 4m 52.411s 2025-10-10 18:34:01.342 6907 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 603 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/603
node0 4m 52.412s 2025-10-10 18:34:01.343 6872 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 603
node2 4m 52.412s 2025-10-10 18:34:01.343 6908 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 603
node1 4m 52.468s 2025-10-10 18:34:01.399 6887 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 603 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/603
node1 4m 52.469s 2025-10-10 18:34:01.400 6888 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 603
node3 4m 52.482s 2025-10-10 18:34:01.413 6881 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 603 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/603
node3 4m 52.483s 2025-10-10 18:34:01.414 6882 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 603
node0 4m 52.497s 2025-10-10 18:34:01.428 6903 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 603
node2 4m 52.498s 2025-10-10 18:34:01.429 6947 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 603
node0 4m 52.499s 2025-10-10 18:34:01.430 6904 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 603
Timestamp: 2025-10-10T18:34:00.319099Z
Next consensus number: 19305
Legacy running event hash: 8a6ef57002a63e1ad196320f5fc117b52eb6b4afed6027376f3a1732d98bb3c5f0eb9ad083ccd43accff5f855c5550b0
Legacy running event mnemonic: rather-faith-feel-select
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1897502935
Root hash: 762b6b33bb5259c0348bdb6d8736de74a222a8cb11ea2d6503e8750d72458bf1ad85b605b7c47f49535027f2d337efe5
(root) ConsistencyTestingToolState / shell-grid-noise-day
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 quote-excess-pumpkin-scan
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -4179402605894717636 /3 ticket-weasel-creek-boss
    4 StringLeaf 602 /4 effort-fiscal-clip-security
node2 4m 52.499s 2025-10-10 18:34:01.430 6948 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 603
Timestamp: 2025-10-10T18:34:00.319099Z
Next consensus number: 19305
Legacy running event hash: 8a6ef57002a63e1ad196320f5fc117b52eb6b4afed6027376f3a1732d98bb3c5f0eb9ad083ccd43accff5f855c5550b0
Legacy running event mnemonic: rather-faith-feel-select
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1897502935
Root hash: 762b6b33bb5259c0348bdb6d8736de74a222a8cb11ea2d6503e8750d72458bf1ad85b605b7c47f49535027f2d337efe5
(root) ConsistencyTestingToolState / shell-grid-noise-day
    0 SingletonNode PlatformStateService.PLATFORM_STATE /0 quote-excess-pumpkin-scan
    1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
    2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
    3 StringLeaf -4179402605894717636 /3 ticket-weasel-creek-boss
    4 StringLeaf 602 /4 effort-fiscal-clip-security
node0 4m 52.505s 2025-10-10 18:34:01.436 6905 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+29+25.050603118Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+33+17.050224094Z_seq1_minr473_maxr5473_orgn0.pces
node0 4m 52.505s 2025-10-10 18:34:01.436 6906 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 576 File: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+33+17.050224094Z_seq1_minr473_maxr5473_orgn0.pces
node0 4m 52.505s 2025-10-10 18:34:01.436 6907 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 4m 52.507s 2025-10-10 18:34:01.438 6908 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 4m 52.507s 2025-10-10 18:34:01.438 6949 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+33+17.026077598Z_seq1_minr473_maxr5473_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+29+25.332917798Z_seq0_minr1_maxr501_orgn0.pces
node2 4m 52.507s 2025-10-10 18:34:01.438 6950 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 576 File: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+33+17.026077598Z_seq1_minr473_maxr5473_orgn0.pces
node2 4m 52.507s 2025-10-10 18:34:01.438 6951 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 4m 52.508s 2025-10-10 18:34:01.439 6909 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 603 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/603 {"round":603,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/603/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 4m 52.509s 2025-10-10 18:34:01.440 6952 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 4m 52.509s 2025-10-10 18:34:01.440 6953 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 603 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/603 {"round":603,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/603/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 4m 52.510s 2025-10-10 18:34:01.441 6910 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2
node2 4m 52.511s 2025-10-10 18:34:01.442 6954 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2
node1 4m 52.566s 2025-10-10 18:34:01.497 6919 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 603
node1 4m 52.568s 2025-10-10 18:34:01.499 6920 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 603
Timestamp: 2025-10-10T18:34:00.319099Z
Next consensus number: 19305
Legacy running event hash: 8a6ef57002a63e1ad196320f5fc117b52eb6b4afed6027376f3a1732d98bb3c5f0eb9ad083ccd43accff5f855c5550b0
Legacy running event mnemonic: rather-faith-feel-select
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1897502935
Root hash: 762b6b33bb5259c0348bdb6d8736de74a222a8cb11ea2d6503e8750d72458bf1ad85b605b7c47f49535027f2d337efe5
(root) ConsistencyTestingToolState / shell-grid-noise-day
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 quote-excess-pumpkin-scan
  1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
  2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
  3 StringLeaf -4179402605894717636 /3 ticket-weasel-creek-boss
  4 StringLeaf 602 /4 effort-fiscal-clip-security
node1 4m 52.578s 2025-10-10 18:34:01.509 6921 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+29+25.298498826Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+33+16.935675264Z_seq1_minr474_maxr5474_orgn0.pces
node1 4m 52.579s 2025-10-10 18:34:01.510 6922 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 576 File: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+33+16.935675264Z_seq1_minr474_maxr5474_orgn0.pces
node1 4m 52.579s 2025-10-10 18:34:01.510 6923 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 4m 52.581s 2025-10-10 18:34:01.512 6924 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 4m 52.581s 2025-10-10 18:34:01.512 6925 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 603 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/603 {"round":603,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/603/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 4m 52.583s 2025-10-10 18:34:01.514 6926 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2
node3 4m 52.587s 2025-10-10 18:34:01.518 6921 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 603
node3 4m 52.590s 2025-10-10 18:34:01.521 6922 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 603
Timestamp: 2025-10-10T18:34:00.319099Z
Next consensus number: 19305
Legacy running event hash: 8a6ef57002a63e1ad196320f5fc117b52eb6b4afed6027376f3a1732d98bb3c5f0eb9ad083ccd43accff5f855c5550b0
Legacy running event mnemonic: rather-faith-feel-select
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1897502935
Root hash: 762b6b33bb5259c0348bdb6d8736de74a222a8cb11ea2d6503e8750d72458bf1ad85b605b7c47f49535027f2d337efe5
(root) ConsistencyTestingToolState / shell-grid-noise-day
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 quote-excess-pumpkin-scan
  1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
  2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
  3 StringLeaf -4179402605894717636 /3 ticket-weasel-creek-boss
  4 StringLeaf 602 /4 effort-fiscal-clip-security
node3 4m 52.598s 2025-10-10 18:34:01.529 6923 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+33+17.037369231Z_seq1_minr473_maxr5473_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+29+25.336866187Z_seq0_minr1_maxr501_orgn0.pces
node3 4m 52.598s 2025-10-10 18:34:01.529 6924 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 576 File: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+33+17.037369231Z_seq1_minr473_maxr5473_orgn0.pces
node3 4m 52.598s 2025-10-10 18:34:01.529 6925 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 4m 52.600s 2025-10-10 18:34:01.531 6926 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 4m 52.601s 2025-10-10 18:34:01.532 6927 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 603 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/603 {"round":603,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/603/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 4m 52.603s 2025-10-10 18:34:01.534 6928 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2
node4 5m 50.179s 2025-10-10 18:34:59.110 1 INFO STARTUP <main> StaticPlatformBuilder:
////////////////////// // Node is Starting // //////////////////////
node4 5m 50.266s 2025-10-10 18:34:59.197 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node4 5m 50.281s 2025-10-10 18:34:59.212 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 5m 50.390s 2025-10-10 18:34:59.321 4 INFO STARTUP <main> Browser: The following nodes [4] are set to run locally
node4 5m 50.395s 2025-10-10 18:34:59.326 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node4 5m 50.408s 2025-10-10 18:34:59.339 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 5m 50.826s 2025-10-10 18:34:59.757 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node4 5m 50.827s 2025-10-10 18:34:59.758 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node4 5m 51.715s 2025-10-10 18:35:00.646 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 887ms
node4 5m 51.728s 2025-10-10 18:35:00.659 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node4 5m 51.731s 2025-10-10 18:35:00.662 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 5m 51.771s 2025-10-10 18:35:00.702 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node4 5m 51.828s 2025-10-10 18:35:00.759 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node4 5m 51.829s 2025-10-10 18:35:00.760 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node2 5m 52.106s 2025-10-10 18:35:01.037 8499 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 741 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 5m 52.138s 2025-10-10 18:35:01.069 8457 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 741 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 5m 52.150s 2025-10-10 18:35:01.081 8457 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 741 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 5m 52.165s 2025-10-10 18:35:01.096 8439 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 741 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 5m 52.302s 2025-10-10 18:35:01.233 8460 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 741 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/741
node1 5m 52.303s 2025-10-10 18:35:01.234 8461 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 741
node3 5m 52.361s 2025-10-10 18:35:01.292 8442 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 741 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/741
node3 5m 52.362s 2025-10-10 18:35:01.293 8443 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 741
node2 5m 52.363s 2025-10-10 18:35:01.294 8502 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 741 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/741
node2 5m 52.364s 2025-10-10 18:35:01.295 8503 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 741
node1 5m 52.404s 2025-10-10 18:35:01.335 8492 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 741
node1 5m 52.406s 2025-10-10 18:35:01.337 8493 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 741
Timestamp: 2025-10-10T18:35:00.121503175Z
Next consensus number: 22603
Legacy running event hash: 4738a802d3128d35d6555b5402986a83061ec1875f6bf0e24c7e5a1115b7e286dac009c27121641d3c986df269184ae6
Legacy running event mnemonic: train-zone-oven-floor
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1214152440
Root hash: 9c3f75cf8518835b525003002c3b6eb9bd96aafc67691f1db1121741c72229c7ab2766360cfe7997dcae9b8c31d2fcfc
(root) ConsistencyTestingToolState / join-rigid-fringe-net
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 double-chair-tongue-ordinary
  1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
  2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
  3 StringLeaf 8011541529760961293 /3 mother-home-shrug-consider
  4 StringLeaf 740 /4 sea-bamboo-return-divorce
node1 5m 52.414s 2025-10-10 18:35:01.345 8494 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+29+25.298498826Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+33+16.935675264Z_seq1_minr474_maxr5474_orgn0.pces
node1 5m 52.414s 2025-10-10 18:35:01.345 8495 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 714 File: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+33+16.935675264Z_seq1_minr474_maxr5474_orgn0.pces
node1 5m 52.414s 2025-10-10 18:35:01.345 8496 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 5m 52.419s 2025-10-10 18:35:01.350 8497 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 5m 52.419s 2025-10-10 18:35:01.350 8498 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 741 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/741 {"round":741,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/741/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 5m 52.421s 2025-10-10 18:35:01.352 8499 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/75
node2 5m 52.449s 2025-10-10 18:35:01.380 8534 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 741
node2 5m 52.451s 2025-10-10 18:35:01.382 8535 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 741
Timestamp: 2025-10-10T18:35:00.121503175Z
Next consensus number: 22603
Legacy running event hash: 4738a802d3128d35d6555b5402986a83061ec1875f6bf0e24c7e5a1115b7e286dac009c27121641d3c986df269184ae6
Legacy running event mnemonic: train-zone-oven-floor
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1214152440
Root hash: 9c3f75cf8518835b525003002c3b6eb9bd96aafc67691f1db1121741c72229c7ab2766360cfe7997dcae9b8c31d2fcfc
(root) ConsistencyTestingToolState / join-rigid-fringe-net
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 double-chair-tongue-ordinary
  1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
  2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
  3 StringLeaf 8011541529760961293 /3 mother-home-shrug-consider
  4 StringLeaf 740 /4 sea-bamboo-return-divorce
node2 5m 52.457s 2025-10-10 18:35:01.388 8536 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+33+17.026077598Z_seq1_minr473_maxr5473_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+29+25.332917798Z_seq0_minr1_maxr501_orgn0.pces
node2 5m 52.458s 2025-10-10 18:35:01.389 8537 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 714 File: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+33+17.026077598Z_seq1_minr473_maxr5473_orgn0.pces
node2 5m 52.458s 2025-10-10 18:35:01.389 8538 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 5m 52.462s 2025-10-10 18:35:01.393 8539 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 5m 52.462s 2025-10-10 18:35:01.393 8540 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 741 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/741 {"round":741,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/741/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 5m 52.464s 2025-10-10 18:35:01.395 8541 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/75
node3 5m 52.468s 2025-10-10 18:35:01.399 8482 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 741
node3 5m 52.471s 2025-10-10 18:35:01.402 8483 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 741
Timestamp: 2025-10-10T18:35:00.121503175Z
Next consensus number: 22603
Legacy running event hash: 4738a802d3128d35d6555b5402986a83061ec1875f6bf0e24c7e5a1115b7e286dac009c27121641d3c986df269184ae6
Legacy running event mnemonic: train-zone-oven-floor
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1214152440
Root hash: 9c3f75cf8518835b525003002c3b6eb9bd96aafc67691f1db1121741c72229c7ab2766360cfe7997dcae9b8c31d2fcfc
(root) ConsistencyTestingToolState / join-rigid-fringe-net
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 double-chair-tongue-ordinary
  1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
  2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
  3 StringLeaf 8011541529760961293 /3 mother-home-shrug-consider
  4 StringLeaf 740 /4 sea-bamboo-return-divorce
node3 5m 52.479s 2025-10-10 18:35:01.410 8484 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+33+17.037369231Z_seq1_minr473_maxr5473_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+29+25.336866187Z_seq0_minr1_maxr501_orgn0.pces
node3 5m 52.480s 2025-10-10 18:35:01.411 8485 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 714 File: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+33+17.037369231Z_seq1_minr473_maxr5473_orgn0.pces
node3 5m 52.480s 2025-10-10 18:35:01.411 8486 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 5m 52.485s 2025-10-10 18:35:01.416 8487 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 5m 52.486s 2025-10-10 18:35:01.417 8488 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 741 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/741 {"round":741,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/741/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 5m 52.488s 2025-10-10 18:35:01.419 8489 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/75
node0 5m 52.513s 2025-10-10 18:35:01.444 8470 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 741 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/741
node0 5m 52.514s 2025-10-10 18:35:01.445 8471 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 741
node0 5m 52.592s 2025-10-10 18:35:01.523 8518 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 741
node0 5m 52.594s 2025-10-10 18:35:01.525 8519 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 741
Timestamp: 2025-10-10T18:35:00.121503175Z
Next consensus number: 22603
Legacy running event hash: 4738a802d3128d35d6555b5402986a83061ec1875f6bf0e24c7e5a1115b7e286dac009c27121641d3c986df269184ae6
Legacy running event mnemonic: train-zone-oven-floor
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1214152440
Root hash: 9c3f75cf8518835b525003002c3b6eb9bd96aafc67691f1db1121741c72229c7ab2766360cfe7997dcae9b8c31d2fcfc
(root) ConsistencyTestingToolState / join-rigid-fringe-net
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 double-chair-tongue-ordinary
  1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
  2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
  3 StringLeaf 8011541529760961293 /3 mother-home-shrug-consider
  4 StringLeaf 740 /4 sea-bamboo-return-divorce
node0 5m 52.599s 2025-10-10 18:35:01.530 8520 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+29+25.050603118Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+33+17.050224094Z_seq1_minr473_maxr5473_orgn0.pces
node0 5m 52.600s 2025-10-10 18:35:01.531 8521 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 714 File: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+33+17.050224094Z_seq1_minr473_maxr5473_orgn0.pces
node0 5m 52.600s 2025-10-10 18:35:01.531 8522 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 5m 52.604s 2025-10-10 18:35:01.535 8523 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 5m 52.604s 2025-10-10 18:35:01.535 8524 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 741 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/741 {"round":741,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/741/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 5m 52.606s 2025-10-10 18:35:01.537 8525 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/75
node4 5m 53.825s 2025-10-10 18:35:02.756 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node4 5m 53.918s 2025-10-10 18:35:02.849 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 53.925s 2025-10-10 18:35:02.856 21 INFO STARTUP <main> StartupStateUtils: The following saved states were found on disk:
- /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/329/SignedState.swh
- /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/200/SignedState.swh
- /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/75/SignedState.swh
- /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2/SignedState.swh
node4 5m 53.925s 2025-10-10 18:35:02.856 22 INFO STARTUP <main> StartupStateUtils: Loading latest state from disk.
node4 5m 53.926s 2025-10-10 18:35:02.857 23 INFO STARTUP <main> StartupStateUtils: Loading signed state from disk: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/329/SignedState.swh
node4 5m 53.930s 2025-10-10 18:35:02.861 24 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 5m 53.934s 2025-10-10 18:35:02.865 25 INFO STATE_TO_DISK <main> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp
node4 5m 54.065s 2025-10-10 18:35:02.996 36 INFO STARTUP <main> StartupStateUtils: Loaded state's hash is the same as when it was saved.
node4 5m 54.068s 2025-10-10 18:35:02.999 37 INFO STARTUP <main> StartupStateUtils: Platform has loaded a saved state {"round":329,"consensusTimestamp":"2025-10-10T18:32:00.314128929Z"} [com.swirlds.logging.legacy.payload.SavedStateLoadedPayload]
node4 5m 54.070s 2025-10-10 18:35:03.001 40 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 54.077s 2025-10-10 18:35:03.008 43 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node4 5m 54.079s 2025-10-10 18:35:03.010 44 INFO STARTUP <main> AddressBookInitializer: Using the loaded state's address book and weight values.
node4 5m 54.086s 2025-10-10 18:35:03.017 45 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 54.087s 2025-10-10 18:35:03.018 46 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 55.117s 2025-10-10 18:35:04.048 47 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26394810]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=208190, randomLong=-9139655936804868236, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=7089, randomLong=6808295989286899138, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1053119, data=35, exception=null]
OS Health Check Report - Complete (took 1018 ms)
node4 5m 55.144s 2025-10-10 18:35:04.075 48 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node4 5m 55.268s 2025-10-10 18:35:04.199 49 INFO STARTUP <main> PcesUtilities: Span compaction completed for data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+29+25.265589509Z_seq0_minr1_maxr501_orgn0.pces, new upper bound is 370
node4 5m 55.271s 2025-10-10 18:35:04.202 50 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node4 5m 55.274s 2025-10-10 18:35:04.205 51 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node4 5m 55.347s 2025-10-10 18:35:04.278 52 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IoSf6w==", "port": 30124 }, { "ipAddressV4": "CoAASw==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "I9/mVQ==", "port": 30125 }, { "ipAddressV4": "CoAAVg==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "aMWZ6w==", "port": 30126 }, { "ipAddressV4": "CoAAUg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "Iht2xw==", "port": 30127 }, { "ipAddressV4": "CoAAUQ==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+97Mw==", "port": 30128 }, { "ipAddressV4": "CoAATg==", "port": 30128 }] }] }
node4 5m 55.367s 2025-10-10 18:35:04.298 53 INFO STARTUP <main> ConsistencyTestingToolState: State initialized with state long -8099790215196787145.
node4 5m 55.368s 2025-10-10 18:35:04.299 54 INFO STARTUP <main> ConsistencyTestingToolState: State initialized with 328 rounds handled.
node4 5m 55.368s 2025-10-10 18:35:04.299 55 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 5m 55.368s 2025-10-10 18:35:04.299 56 INFO STARTUP <main> TransactionHandlingHistory: Log file found. Parsing previous history
node4 5m 56.103s 2025-10-10 18:35:05.034 57 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 329
Timestamp: 2025-10-10T18:32:00.314128929Z
Next consensus number: 12251
Legacy running event hash: dbec02f377f6f738cacc5be758c675781abe512fc88e6e048a14eb9a2c51614b81e997a030f0880ece468bb852876629
Legacy running event mnemonic: little-fury-yard-boil
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1686091843
Root hash: 73888379f5f72b8993e8157d4ec32d5750860fd4f5f90f7c723ef6151f8975a36f929ea4164895fdb2d7b8720b1a4c47
(root) ConsistencyTestingToolState / illegal-movie-donkey-illness
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 hard-return-concert-fold
  1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
  2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
  3 StringLeaf -8099790215196787145 /3 retire-toddler-magnet-mass
  4 StringLeaf 328 /4 leaf-viable-dream-inch
node4 5m 56.335s 2025-10-10 18:35:05.266 59 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: dbec02f377f6f738cacc5be758c675781abe512fc88e6e048a14eb9a2c51614b81e997a030f0880ece468bb852876629
node4 5m 56.347s 2025-10-10 18:35:05.278 60 INFO STARTUP <platformForkJoinThread-4> Shadowgraph: Shadowgraph starting from expiration threshold 301
node4 5m 56.356s 2025-10-10 18:35:05.287 62 INFO STARTUP <<start-node-4>> ConsistencyTestingToolMain: init called in Main for node 4.
node4 5m 56.357s 2025-10-10 18:35:05.288 63 INFO STARTUP <<start-node-4>> SwirldsPlatform: Starting platform 4
node4 5m 56.358s 2025-10-10 18:35:05.289 64 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node4 5m 56.362s 2025-10-10 18:35:05.293 65 INFO STARTUP <<start-node-4>> CycleFinder: No cyclical back pressure detected in wiring model.
node4 5m 56.363s 2025-10-10 18:35:05.294 66 INFO STARTUP <<start-node-4>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node4 5m 56.364s 2025-10-10 18:35:05.295 67 INFO STARTUP <<start-node-4>> InputWireChecks: All input wires have been bound.
node4 5m 56.366s 2025-10-10 18:35:05.297 68 INFO STARTUP <<start-node-4>> SwirldsPlatform: replaying preconsensus event stream starting at 301
node4 5m 56.374s 2025-10-10 18:35:05.305 69 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 187.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node4 5m 56.680s 2025-10-10 18:35:05.611 70 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:4 H:aefe9925df38 BR:327), num remaining: 4
node4 5m 56.681s 2025-10-10 18:35:05.612 71 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:0 H:305a9b2cd8a1 BR:327), num remaining: 3
node4 5m 56.682s 2025-10-10 18:35:05.613 72 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:2 H:e877e0175e26 BR:327), num remaining: 2
node4 5m 56.683s 2025-10-10 18:35:05.614 73 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:1 H:9c43804b0f7f BR:327), num remaining: 1
node4 5m 56.683s 2025-10-10 18:35:05.614 74 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:3 H:4554227a2391 BR:327), num remaining: 0
node4 5m 57.001s 2025-10-10 18:35:05.932 411 INFO STARTUP <<start-node-4>> PcesReplayer: Replayed 2,566 preconsensus events with max birth round 370. These events contained 3,582 transactions. 40 rounds reached consensus spanning 18.5 seconds of consensus time. The latest round to reach consensus is round 369. Replay took 634.0 milliseconds.
node4 5m 57.005s 2025-10-10 18:35:05.936 412 INFO STARTUP <<app: appMain 4>> ConsistencyTestingToolMain: run called in Main.
node4 5m 57.005s 2025-10-10 18:35:05.936 413 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 628.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node4 5m 57.931s 2025-10-10 18:35:06.862 442 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=369,ancientThreshold=342,expiredThreshold=301] remote ev=EventWindow[latestConsensusRound=754,ancientThreshold=727,expiredThreshold=653]
node0 5m 58.002s 2025-10-10 18:35:06.933 8669 INFO RECONNECT <<platform-core: SyncProtocolWith4 0 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=754,ancientThreshold=727,expiredThreshold=653] remote ev=EventWindow[latestConsensusRound=369,ancientThreshold=342,expiredThreshold=301]
node1 5m 58.002s 2025-10-10 18:35:06.933 8643 INFO RECONNECT <<platform-core: SyncProtocolWith4 1 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=754,ancientThreshold=727,expiredThreshold=653] remote ev=EventWindow[latestConsensusRound=369,ancientThreshold=342,expiredThreshold=301]
node2 5m 58.002s 2025-10-10 18:35:06.933 8685 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=754,ancientThreshold=727,expiredThreshold=653] remote ev=EventWindow[latestConsensusRound=369,ancientThreshold=342,expiredThreshold=301]
node3 5m 58.002s 2025-10-10 18:35:06.933 8625 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=754,ancientThreshold=727,expiredThreshold=653] remote ev=EventWindow[latestConsensusRound=369,ancientThreshold=342,expiredThreshold=301]
node4 5m 58.071s 2025-10-10 18:35:07.002 443 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=369,ancientThreshold=342,expiredThreshold=301] remote ev=EventWindow[latestConsensusRound=754,ancientThreshold=727,expiredThreshold=653]
node4 5m 58.071s 2025-10-10 18:35:07.002 444 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=369,ancientThreshold=342,expiredThreshold=301] remote ev=EventWindow[latestConsensusRound=754,ancientThreshold=727,expiredThreshold=653]
node4 5m 58.071s 2025-10-10 18:35:07.002 445 INFO RECONNECT <<platform-core: SyncProtocolWith3 4 to 3>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=369,ancientThreshold=342,expiredThreshold=301] remote ev=EventWindow[latestConsensusRound=754,ancientThreshold=727,expiredThreshold=653]
node4 5m 58.072s 2025-10-10 18:35:07.003 446 INFO PLATFORM_STATUS <platformForkJoinThread-6> StatusStateMachine: Platform spent 1.1 s in OBSERVING. Now in BEHIND
node4 5m 58.073s 2025-10-10 18:35:07.004 447 INFO RECONNECT <platformForkJoinThread-2> ReconnectController: Starting ReconnectController
node4 5m 58.073s 2025-10-10 18:35:07.004 448 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectPlatformHelperImpl: Preparing for reconnect, stopping gossip
node4 5m 58.225s 2025-10-10 18:35:07.156 449 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectPlatformHelperImpl: Preparing for reconnect, start clearing queues
node4 5m 58.227s 2025-10-10 18:35:07.158 450 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectPlatformHelperImpl: Queues have been cleared
node4 5m 58.228s 2025-10-10 18:35:07.159 451 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectController: waiting for reconnect connection
node4 5m 58.228s 2025-10-10 18:35:07.159 452 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectController: acquired reconnect connection
node3 5m 58.316s 2025-10-10 18:35:07.247 8637 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> ReconnectTeacher: Starting reconnect in the role of the sender {"receiving":false,"nodeId":3,"otherNodeId":4,"round":754} [com.swirlds.logging.legacy.payload.ReconnectStartPayload]
node3 5m 58.317s 2025-10-10 18:35:07.248 8638 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> ReconnectTeacher: The following state will be sent to the learner:
Round: 754
Timestamp: 2025-10-10T18:35:05.841282390Z
Next consensus number: 22916
Legacy running event hash: f0b4b6b9ba289d16f278fdc8c144c859f006302bfe46473218aa658636bedb65dc3102165f456d11a753796e657fec5a
Legacy running event mnemonic: trumpet-cigar-roast-front
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -794697419
Root hash: f969fee886310b3ae0491b8ec95335d26d6bf750f70cc27dfd0d8b0b68ca2101e5dbb5066f06174229c1711fe5b6fe09
(root) ConsistencyTestingToolState / spray-cart-delay-guard
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 ostrich-busy-staff-garbage
  1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork
  2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican
  3 StringLeaf 2298158524733737378 /3 matrix-pink-crawl-penalty
  4 StringLeaf 753 /4 where-version-civil-grape
node3 5m 58.317s 2025-10-10 18:35:07.248 8639 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> ReconnectTeacher: Sending signatures from nodes 0, 1, 3 (signing weight = 37500000000/50000000000) for state hash f969fee886310b3ae0491b8ec95335d26d6bf750f70cc27dfd0d8b0b68ca2101e5dbb5066f06174229c1711fe5b6fe09
node3 5m 58.317s 2025-10-10 18:35:07.248 8640 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> ReconnectTeacher: Starting synchronization in the role of the sender.
node3 5m 58.326s 2025-10-10 18:35:07.257 8641 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> TeachingSynchronizer: sending tree rooted at com.swirlds.demo.consistency.ConsistencyTestingToolState with route []
node3 5m 58.339s 2025-10-10 18:35:07.270 8642 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@49277577 start run()
node4 5m 58.384s 2025-10-10 18:35:07.315 453 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectSyncHelper: Starting reconnect in role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":3,"round":369} [com.swirlds.logging.legacy.payload.ReconnectStartPayload]
node4 5m 58.386s 2025-10-10 18:35:07.317 454 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectLearner: Receiving signed state signatures
node4 5m 58.388s 2025-10-10 18:35:07.319 455 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectLearner: Received signatures from nodes 0, 1, 3
node4 5m 58.390s 2025-10-10 18:35:07.321 456 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: learner calls receiveTree()
node4 5m 58.391s 2025-10-10 18:35:07.322 457 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: synchronizing tree
node4 5m 58.391s 2025-10-10 18:35:07.322 458 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: receiving tree rooted at com.swirlds.demo.consistency.ConsistencyTestingToolState with route []
node4 5m 58.397s 2025-10-10 18:35:07.328 459 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@122929ad start run()
node4 5m 58.411s 2025-10-10 18:35:07.342 460 INFO STARTUP <<work group learning-synchronizer: async-input-stream #0>> ConsistencyTestingToolState: New State Constructed.
node3 5m 58.492s 2025-10-10 18:35:07.423 8661 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@49277577 finish run()
node3 5m 58.493s 2025-10-10 18:35:07.424 8662 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> TeachingSynchronizer: finished sending tree
node3 5m 58.494s 2025-10-10 18:35:07.425 8663 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> TeachingSynchronizer: sending tree rooted at com.swirlds.virtualmap.VirtualMap with route [2]
node3 5m 58.495s 2025-10-10 18:35:07.426 8664 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@409d82dc start run()
node4 5m 58.611s 2025-10-10 18:35:07.542 482 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread finished the learning loop for the current subtree
node4 5m 58.612s 2025-10-10 18:35:07.543 483 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread closed input, output, and view for the current subtree
node4 5m 58.612s 2025-10-10 18:35:07.543 484 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@122929ad finish run()
node4 5m 58.613s 2025-10-10 18:35:07.544 485 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: received tree rooted at com.swirlds.demo.consistency.ConsistencyTestingToolState with route []
node4 5m 58.613s 2025-10-10 18:35:07.544 486 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: receiving tree rooted at com.swirlds.virtualmap.VirtualMap with route [2]
node4 5m 58.616s 2025-10-10 18:35:07.547 487 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@6129fc43 start run()
node4 5m 58.675s 2025-10-10 18:35:07.606 488 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> ReconnectNodeRemover: setPathInformation(): firstLeafPath: 1 -> 1, lastLeafPath: 1 -> 1
node4 5m 58.676s 2025-10-10 18:35:07.607 489 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> ReconnectNodeRemover: setPathInformation(): done
node4 5m 58.678s 2025-10-10 18:35:07.609 490 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread finished the learning loop for the current subtree
node4 5m 58.679s 2025-10-10 18:35:07.610 491 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: call nodeRemover.allNodesReceived()
node4 5m 58.679s 2025-10-10 18:35:07.610 492 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> ReconnectNodeRemover: allNodesReceived()
node4 5m 58.679s 2025-10-10 18:35:07.610 493 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> ReconnectNodeRemover: allNodesReceived(): done
node4 5m 58.679s 2025-10-10 18:35:07.610 494 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: call root.endLearnerReconnect()
node4 5m 58.679s 2025-10-10 18:35:07.610 495 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call reconnectIterator.close()
node4 5m 58.680s 2025-10-10 18:35:07.611 496 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call setHashPrivate()
node3 5m 58.749s 2025-10-10 18:35:07.680 8668 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@409d82dc finish run()
node3 5m 58.749s 2025-10-10 18:35:07.680 8669 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> TeachingSynchronizer: finished sending tree
node3 5m 58.752s 2025-10-10 18:35:07.683 8672 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> ReconnectTeacher: Finished synchronization in the role of the sender.
node4 5m 58.836s 2025-10-10 18:35:07.767 506 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call postInit()
node4 5m 58.837s 2025-10-10 18:35:07.768 508 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: endLearnerReconnect() complete
node4 5m 58.837s 2025-10-10 18:35:07.768 509 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: close() complete
node4 5m 58.837s 2025-10-10 18:35:07.768 510 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread closed input, output, and view for the current subtree
node4 5m 58.837s 2025-10-10 18:35:07.768 511 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@6129fc43 finish run()
node4 5m 58.838s 2025-10-10 18:35:07.769 512 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: received tree rooted at com.swirlds.virtualmap.VirtualMap with route [2]
node4 5m 58.838s 2025-10-10 18:35:07.769 513 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: synchronization complete
node4 5m 58.838s 2025-10-10 18:35:07.769 514 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: learner calls initialize()
node4 5m 58.839s 2025-10-10 18:35:07.770 515 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: initializing tree
node4 5m 58.839s 2025-10-10 18:35:07.770 516 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: initialization complete
node4 5m 58.839s 2025-10-10 18:35:07.770 517 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: learner calls hash()
node4 5m 58.839s 2025-10-10 18:35:07.770 518 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: hashing tree
node4 5m 58.840s 2025-10-10 18:35:07.771 519 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: hashing complete
node4 5m 58.840s 2025-10-10 18:35:07.771 520 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: learner calls logStatistics()
node4 5m 58.843s 2025-10-10 18:35:07.774 521 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: Finished synchronization {"timeInSeconds":0.447,"hashTimeInSeconds":0.0,"initializationTimeInSeconds":0.0,"totalNodes":12,"leafNodes":7,"redundantLeafNodes":4,"internalNodes":5,"redundantInternalNodes":2} [com.swirlds.logging.legacy.payload.SynchronizationCompletePayload]
node4 5m 58.843s 2025-10-10 18:35:07.774 522 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: ReconnectMapMetrics: transfersFromTeacher=12; transfersFromLearner=10; internalHashes=0; internalCleanHashes=0; internalData=0; internalCleanData=0; leafHashes=3; leafCleanHashes=3; leafData=7; leafCleanData=4
node4 5m 58.844s 2025-10-10 18:35:07.775 523 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: learner is done synchronizing
node4 5m 58.847s 2025-10-10 18:35:07.778 524 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectLearner: Reconnect data usage report {"dataMegabytes":0.0060558319091796875} [com.swirlds.logging.legacy.payload.ReconnectDataUsagePayload]
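Both payload lines above end with a JSON object followed by the payload class in square brackets, so the interesting numbers can be pulled out mechanically. A minimal parsing sketch (standard-library Python; the parse_payload helper and the inlined sample strings are mine, the field names come from the lines above). Note that 0.0060558319091796875 MB works out to exactly 6,350 bytes for the whole reconnect, and that 4 of the 7 transferred leaves were redundant.

    import json
    import re

    # Extract the trailing {...} payload and its class name from a swirlds log line.
    PAYLOAD_RE = re.compile(r"(\{.*\})\s+\[([\w.]+)\]\s*$")

    def parse_payload(line):
        m = PAYLOAD_RE.search(line)
        return (m.group(2), json.loads(m.group(1))) if m else (None, None)

    sync_line = (
        'LearningSynchronizer: Finished synchronization '
        '{"timeInSeconds":0.447,"hashTimeInSeconds":0.0,"initializationTimeInSeconds":0.0,'
        '"totalNodes":12,"leafNodes":7,"redundantLeafNodes":4,'
        '"internalNodes":5,"redundantInternalNodes":2} '
        '[com.swirlds.logging.legacy.payload.SynchronizationCompletePayload]'
    )
    _, stats = parse_payload(sync_line)
    print("redundant leaf fraction:", stats["redundantLeafNodes"] / stats["leafNodes"])  # 4/7

    usage_line = (
        'ReconnectLearner: Reconnect data usage report '
        '{"dataMegabytes":0.0060558319091796875} '
        '[com.swirlds.logging.legacy.payload.ReconnectDataUsagePayload]'
    )
    _, usage = parse_payload(usage_line)
    print("bytes transferred:", int(usage["dataMegabytes"] * 1024 * 1024))  # 6350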
node4 5m 58.850s 2025-10-10 18:35:07.781 525 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectSyncHelper: Finished reconnect in the role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":3,"round":754,"success":false} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload]
node4 5m 58.851s 2025-10-10 18:35:07.782 526 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectSyncHelper: Information for state received during reconnect:
Round: 754 Timestamp: 2025-10-10T18:35:05.841282390Z Next consensus number: 22916 Legacy running event hash: f0b4b6b9ba289d16f278fdc8c144c859f006302bfe46473218aa658636bedb65dc3102165f456d11a753796e657fec5a Legacy running event mnemonic: trumpet-cigar-roast-front Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -794697419 Root hash: f969fee886310b3ae0491b8ec95335d26d6bf750f70cc27dfd0d8b0b68ca2101e5dbb5066f06174229c1711fe5b6fe09 (root) ConsistencyTestingToolState / spray-cart-delay-guard 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 ostrich-busy-staff-garbage 1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork 2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican 3 StringLeaf 2298158524733737378 /3 matrix-pink-crawl-penalty 4 StringLeaf 753 /4 where-version-civil-grape
node4 5m 58.852s 2025-10-10 18:35:07.783 528 DEBUG RECONNECT <<reconnect: reconnect-controller>> ReconnectStateLoader: `loadReconnectState` : reloading state
node4 5m 58.852s 2025-10-10 18:35:07.783 529 INFO STARTUP <<reconnect: reconnect-controller>> ConsistencyTestingToolState: State initialized with state long 2298158524733737378.
node4 5m 58.852s 2025-10-10 18:35:07.783 530 INFO STARTUP <<reconnect: reconnect-controller>> ConsistencyTestingToolState: State initialized with 753 rounds handled.
node4 5m 58.853s 2025-10-10 18:35:07.784 531 INFO STARTUP <<reconnect: reconnect-controller>> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 5m 58.853s 2025-10-10 18:35:07.784 532 INFO STARTUP <<reconnect: reconnect-controller>> TransactionHandlingHistory: Log file found. Parsing previous history
node4 5m 58.877s 2025-10-10 18:35:07.808 537 INFO STATE_TO_DISK <<reconnect: reconnect-controller>> DefaultSavedStateController: Signed state from round 754 created, will eventually be written to disk, for reason: RECONNECT
node4 5m 58.877s 2025-10-10 18:35:07.808 538 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 804.0 ms in BEHIND. Now in RECONNECT_COMPLETE
node4 5m 58.878s 2025-10-10 18:35:07.809 539 INFO STARTUP <platformForkJoinThread-3> Shadowgraph: Shadowgraph starting from expiration threshold 727
node4 5m 58.880s 2025-10-10 18:35:07.811 542 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 754 state to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/754
node4 5m 58.882s 2025-10-10 18:35:07.813 543 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for round 754
node4 5m 58.886s 2025-10-10 18:35:07.817 545 INFO EVENT_STREAM <<reconnect: reconnect-controller>> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: f0b4b6b9ba289d16f278fdc8c144c859f006302bfe46473218aa658636bedb65dc3102165f456d11a753796e657fec5a
node4 5m 58.887s 2025-10-10 18:35:07.818 546 INFO STARTUP <platformForkJoinThread-1> PcesFileManager: Due to recent operations on this node, the local preconsensus event stream will have a discontinuity. The last file with the old origin round is 2025-10-10T18+29+25.265589509Z_seq0_minr1_maxr370_orgn0.pces. All future files will have an origin round of 754.
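The PCES file names used throughout this log carry their own metadata: a creation timestamp, a sequence number (seq), the minimum and maximum round bounds (minr/maxr), and the origin round (orgn); the discontinuity notice above is why node4's next file appears later with orgn754 instead of orgn0. A parsing sketch, assuming only the naming pattern visible in these lines (the field interpretations come from the surrounding messages, not from platform source code):

    import re

    # Pattern and field names as they appear in this log; the interpretation is an assumption.
    PCES_NAME_RE = re.compile(
        r"(?P<ts>[^/]+Z)_seq(?P<seq>\d+)_minr(?P<minr>\d+)_maxr(?P<maxr>\d+)_orgn(?P<orgn>\d+)\.pces$"
    )

    def parse_pces_name(path):
        m = PCES_NAME_RE.search(path)
        if m is None:
            return None
        return {
            "timestamp": m.group("ts"),            # nanosecond precision, kept as text
            "sequence": int(m.group("seq")),
            "min_round": int(m.group("minr")),
            "max_round": int(m.group("maxr")),
            "origin_round": int(m.group("orgn")),
        }

    print(parse_pces_name(
        "data/saved/preconsensus-events/4/2025/10/10/"
        "2025-10-10T18+35+08.248454083Z_seq1_minr727_maxr1227_orgn754.pces"))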
node3 5m 58.921s 2025-10-10 18:35:07.852 8673 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> ReconnectTeacher: Finished reconnect in the role of the sender. {"receiving":false,"nodeId":3,"otherNodeId":4,"round":754,"success":false} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload]
node4 5m 59.038s 2025-10-10 18:35:07.969 580 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for round 754
node4 5m 59.041s 2025-10-10 18:35:07.972 581 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 754 Timestamp: 2025-10-10T18:35:05.841282390Z Next consensus number: 22916 Legacy running event hash: f0b4b6b9ba289d16f278fdc8c144c859f006302bfe46473218aa658636bedb65dc3102165f456d11a753796e657fec5a Legacy running event mnemonic: trumpet-cigar-roast-front Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -794697419 Root hash: f969fee886310b3ae0491b8ec95335d26d6bf750f70cc27dfd0d8b0b68ca2101e5dbb5066f06174229c1711fe5b6fe09 (root) ConsistencyTestingToolState / spray-cart-delay-guard 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 ostrich-busy-staff-garbage 1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork 2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican 3 StringLeaf 2298158524733737378 /3 matrix-pink-crawl-penalty 4 StringLeaf 753 /4 where-version-civil-grape
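The "Information for state received during reconnect" block above and the "Information for state written to disk" block here describe the same state: round 754 and root hash f969fee8... on both sides, which is exactly what a clean reconnect-then-save should produce. A hypothetical check over such blocks (field labels copied from the lines; the truncated hash strings below stand in for the full values above):

    import re

    ROUND_RE = re.compile(r"Round:\s*(\d+)")
    ROOT_HASH_RE = re.compile(r"Root hash:\s*([0-9a-f]+)")

    def state_summary(info_block):
        """Extract (round, root hash) from an 'Information for state ...' detail block."""
        return (int(ROUND_RE.search(info_block).group(1)),
                ROOT_HASH_RE.search(info_block).group(1))

    received = "Round: 754 Timestamp: ... Root hash: f969fee886310b3ae0491b8ec95335d2"
    written  = "Round: 754 Timestamp: ... Root hash: f969fee886310b3ae0491b8ec95335d2"
    assert state_summary(received) == state_summary(written), "reconnect/save mismatch"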
node4 5m 59.077s 2025-10-10 18:35:08.008 582 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+29+25.265589509Z_seq0_minr1_maxr370_orgn0.pces
node4 5m 59.078s 2025-10-10 18:35:08.009 583 WARN STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: No preconsensus event files meeting specified criteria found to copy. Lower bound: 727
node4 5m 59.083s 2025-10-10 18:35:08.014 584 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 754 to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/754 {"round":754,"freezeState":false,"reason":"RECONNECT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/754/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
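The WARN above is expected rather than alarming: node4's only PCES file at this point spans rounds 1-370 (maxr370), while the copy lower bound is 727, so nothing qualifies; a minute later the post-reconnect file with maxr1227 does qualify for the round-870 snapshot. One plausible reading of the criterion, consistent with every copy decision in this section, is that a file is copied only if its maximum round reaches the lower bound. The sketch below encodes that assumption; it is not the platform's actual rule.

    def eligible_for_copy(max_round, lower_bound):
        """Assumed criterion: the file must contain events up to or past the lower bound."""
        return max_round >= lower_bound

    # Copy decisions visible in this section:
    print(eligible_for_copy(370, 727))   # False: node4, round 754 ("No preconsensus event files ... to copy")
    print(eligible_for_copy(1227, 843))  # True:  node4, round 870 (the orgn754 file is copied)
    print(eligible_for_copy(501, 843))   # False: the old seq0 files on nodes 0-3 are skipped at round 870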
node4 5m 59.088s 2025-10-10 18:35:08.019 585 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 210.0 ms in RECONNECT_COMPLETE. Now in CHECKING
node4 5m 59.367s 2025-10-10 18:35:08.298 586 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ]
node4 5m 59.370s 2025-10-10 18:35:08.301 587 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node4 5m 59.885s 2025-10-10 18:35:08.816 588 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:3 H:898638773c30 BR:752), num remaining: 3
node4 5m 59.886s 2025-10-10 18:35:08.817 589 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:1 H:42b329d8a70b BR:752), num remaining: 2
node4 5m 59.886s 2025-10-10 18:35:08.817 590 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:2 H:6f1a346242f1 BR:752), num remaining: 1
node4 5m 59.886s 2025-10-10 18:35:08.817 591 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:0 H:754fe9cad5af BR:752), num remaining: 0
node1 6m 2.975s 2025-10-10 18:35:11.906 8765 INFO RECONNECT <<platform-core: SyncProtocolWith4 1 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=765,ancientThreshold=738,expiredThreshold=664] remote ev=EventWindow[latestConsensusRound=369,ancientThreshold=342,expiredThreshold=301]
node4 6m 3.046s 2025-10-10 18:35:11.977 716 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=369,ancientThreshold=342,expiredThreshold=301] remote ev=EventWindow[latestConsensusRound=765,ancientThreshold=738,expiredThreshold=664]
node4 6m 3.047s 2025-10-10 18:35:11.978 717 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> RpcPeerHandler: Latest event window is not really falling behind, will retry sync local ev=EventWindow[latestConsensusRound=765,ancientThreshold=738,expiredThreshold=727] remote ev=EventWindow[latestConsensusRound=765,ancientThreshold=738,expiredThreshold=664]
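These RpcPeerHandler lines show both sides of the same comparison: node4's pre-reconnect event window (latestConsensusRound=369) sits far below node1's thresholds, so node1 reports OTHER_FALLEN_BEHIND while node4 reports SELF_FALLEN_BEHIND; by the time node4 re-evaluates, its window has already jumped to round 765 thanks to the reconnect, so it retries a normal sync instead. A deliberately simplified comparison that is consistent with the numbers here (the real decision lives in the platform's gossip code and may use different fields):

    from dataclasses import dataclass

    @dataclass
    class EventWindow:
        latest_consensus_round: int
        ancient_threshold: int
        expired_threshold: int

    def appears_fallen_behind(local: EventWindow, remote: EventWindow) -> bool:
        """Simplified assumption: the local node looks behind when everything it has
        is already expired from the remote node's point of view."""
        return local.latest_consensus_round < remote.expired_threshold

    before = EventWindow(369, 342, 301)   # node4 before reconnect
    after = EventWindow(765, 738, 727)    # node4 after reconnect
    peer = EventWindow(765, 738, 664)     # node1

    print(appears_fallen_behind(before, peer))  # True:  SELF_FALLEN_BEHIND
    print(appears_fallen_behind(after, peer))   # False: "not really falling behind, will retry sync"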
node4 6m 3.676s 2025-10-10 18:35:12.607 729 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 4.6 s in CHECKING. Now in ACTIVE
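The StatusStateMachine lines give node4's full recovery timeline: 804 ms in BEHIND, 210 ms in RECONNECT_COMPLETE, 4.6 s in CHECKING, then ACTIVE. A small sketch that tallies these transitions from lines of this shape (the status_timeline helper and the inlined sample lines are mine; the message format is copied from above):

    import re

    STATUS_RE = re.compile(r"Platform spent ([\d.]+) (ms|s) in (\w+)\. Now in (\w+)")

    def status_timeline(lines, node):
        """Yield (previous_status, seconds_spent, new_status) tuples for one node."""
        for line in lines:
            if not line.startswith(node):
                continue
            m = STATUS_RE.search(line)
            if m:
                value, unit, old, new = m.groups()
                seconds = float(value) / 1000.0 if unit == "ms" else float(value)
                yield old, seconds, new

    sample = [
        "node4 ... StatusStateMachine: Platform spent 804.0 ms in BEHIND. Now in RECONNECT_COMPLETE",
        "node4 ... StatusStateMachine: Platform spent 210.0 ms in RECONNECT_COMPLETE. Now in CHECKING",
        "node4 ... StatusStateMachine: Platform spent 4.6 s in CHECKING. Now in ACTIVE",
    ]
    for old, seconds, new in status_timeline(sample, "node4"):
        print(f"{old:>18}  {seconds:6.3f}s  ->  {new}")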
node3 6m 52.756s 2025-10-10 18:36:01.687 9941 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 870 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 6m 52.790s 2025-10-10 18:36:01.721 9966 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 870 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 6m 52.832s 2025-10-10 18:36:01.763 9931 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 870 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 6m 52.869s 2025-10-10 18:36:01.800 1856 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 870 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 6m 52.964s 2025-10-10 18:36:01.895 9980 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 870 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 6m 52.986s 2025-10-10 18:36:01.917 9937 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 870 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/870
node1 6m 52.987s 2025-10-10 18:36:01.918 9938 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 870
node2 6m 53.036s 2025-10-10 18:36:01.967 9986 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 870 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/870
node2 6m 53.037s 2025-10-10 18:36:01.968 9987 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 870
node1 6m 53.086s 2025-10-10 18:36:02.017 9982 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 870
node1 6m 53.090s 2025-10-10 18:36:02.021 9983 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 870 Timestamp: 2025-10-10T18:36:00.394262Z Next consensus number: 27160 Legacy running event hash: 520c7b1c665a79a138a1396b9ddaa9c21107581a3bc179664956b97a80ca1658ee8fbbafce87801ad2156e3bd4f14602 Legacy running event mnemonic: dose-glory-slender-crouch Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 654905998 Root hash: 19ac5613bcd8c971c46503e81530933b51406deb15a610e3851744302d8be129cb6f37f1aed543dd4fab656eabbdf36c (root) ConsistencyTestingToolState / culture-second-govern-spin 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 furnace-inhale-bring-tuna 1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork 2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican 3 StringLeaf -8199423269293998771 /3 marble-vanish-power-remove 4 StringLeaf 869 /4 property-lecture-easily-faint
node1 6m 53.097s 2025-10-10 18:36:02.028 9984 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+29+25.298498826Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+33+16.935675264Z_seq1_minr474_maxr5474_orgn0.pces
node1 6m 53.097s 2025-10-10 18:36:02.028 9985 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 843 File: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+33+16.935675264Z_seq1_minr474_maxr5474_orgn0.pces
node1 6m 53.098s 2025-10-10 18:36:02.029 9986 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 6m 53.105s 2025-10-10 18:36:02.036 9987 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 6m 53.105s 2025-10-10 18:36:02.036 9988 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 870 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/870 {"round":870,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/870/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 6m 53.107s 2025-10-10 18:36:02.038 9989 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/200
node2 6m 53.128s 2025-10-10 18:36:02.059 10031 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 870
node2 6m 53.130s 2025-10-10 18:36:02.061 10032 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 870 Timestamp: 2025-10-10T18:36:00.394262Z Next consensus number: 27160 Legacy running event hash: 520c7b1c665a79a138a1396b9ddaa9c21107581a3bc179664956b97a80ca1658ee8fbbafce87801ad2156e3bd4f14602 Legacy running event mnemonic: dose-glory-slender-crouch Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 654905998 Root hash: 19ac5613bcd8c971c46503e81530933b51406deb15a610e3851744302d8be129cb6f37f1aed543dd4fab656eabbdf36c (root) ConsistencyTestingToolState / culture-second-govern-spin 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 furnace-inhale-bring-tuna 1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork 2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican 3 StringLeaf -8199423269293998771 /3 marble-vanish-power-remove 4 StringLeaf 869 /4 property-lecture-easily-faint
node2 6m 53.139s 2025-10-10 18:36:02.070 10033 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+33+17.026077598Z_seq1_minr473_maxr5473_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+29+25.332917798Z_seq0_minr1_maxr501_orgn0.pces
node2 6m 53.140s 2025-10-10 18:36:02.071 10034 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 843 File: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+33+17.026077598Z_seq1_minr473_maxr5473_orgn0.pces
node2 6m 53.140s 2025-10-10 18:36:02.071 10035 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 6m 53.147s 2025-10-10 18:36:02.078 10036 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 6m 53.147s 2025-10-10 18:36:02.078 10037 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 870 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/870 {"round":870,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/870/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 6m 53.147s 2025-10-10 18:36:02.078 1862 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 870 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/870
node4 6m 53.148s 2025-10-10 18:36:02.079 1863 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for round 870
node2 6m 53.149s 2025-10-10 18:36:02.080 10038 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/200
node3 6m 53.171s 2025-10-10 18:36:02.102 9947 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 870 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/870
node3 6m 53.172s 2025-10-10 18:36:02.103 9948 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for round 870
node0 6m 53.197s 2025-10-10 18:36:02.128 9972 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 870 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/870
node0 6m 53.198s 2025-10-10 18:36:02.129 9973 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 870
node4 6m 53.250s 2025-10-10 18:36:02.181 1910 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for round 870
node4 6m 53.252s 2025-10-10 18:36:02.183 1911 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 870 Timestamp: 2025-10-10T18:36:00.394262Z Next consensus number: 27160 Legacy running event hash: 520c7b1c665a79a138a1396b9ddaa9c21107581a3bc179664956b97a80ca1658ee8fbbafce87801ad2156e3bd4f14602 Legacy running event mnemonic: dose-glory-slender-crouch Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 654905998 Root hash: 19ac5613bcd8c971c46503e81530933b51406deb15a610e3851744302d8be129cb6f37f1aed543dd4fab656eabbdf36c (root) ConsistencyTestingToolState / culture-second-govern-spin 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 furnace-inhale-bring-tuna 1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork 2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican 3 StringLeaf -8199423269293998771 /3 marble-vanish-power-remove 4 StringLeaf 869 /4 property-lecture-easily-faint
node4 6m 53.260s 2025-10-10 18:36:02.191 1912 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+35+08.248454083Z_seq1_minr727_maxr1227_orgn754.pces Last file: data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+29+25.265589509Z_seq0_minr1_maxr370_orgn0.pces
node4 6m 53.261s 2025-10-10 18:36:02.192 1913 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 843 File: data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+35+08.248454083Z_seq1_minr727_maxr1227_orgn754.pces
node4 6m 53.261s 2025-10-10 18:36:02.192 1914 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 6m 53.265s 2025-10-10 18:36:02.196 1915 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 6m 53.266s 2025-10-10 18:36:02.197 1916 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 870 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/870 {"round":870,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/870/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 6m 53.268s 2025-10-10 18:36:02.199 1917 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2
node3 6m 53.275s 2025-10-10 18:36:02.206 9987 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for round 870
node3 6m 53.278s 2025-10-10 18:36:02.209 9988 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 870 Timestamp: 2025-10-10T18:36:00.394262Z Next consensus number: 27160 Legacy running event hash: 520c7b1c665a79a138a1396b9ddaa9c21107581a3bc179664956b97a80ca1658ee8fbbafce87801ad2156e3bd4f14602 Legacy running event mnemonic: dose-glory-slender-crouch Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 654905998 Root hash: 19ac5613bcd8c971c46503e81530933b51406deb15a610e3851744302d8be129cb6f37f1aed543dd4fab656eabbdf36c (root) ConsistencyTestingToolState / culture-second-govern-spin 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 furnace-inhale-bring-tuna 1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork 2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican 3 StringLeaf -8199423269293998771 /3 marble-vanish-power-remove 4 StringLeaf 869 /4 property-lecture-easily-faint
node0 6m 53.287s 2025-10-10 18:36:02.218 10012 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 870
node3 6m 53.287s 2025-10-10 18:36:02.218 9989 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+33+17.037369231Z_seq1_minr473_maxr5473_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+29+25.336866187Z_seq0_minr1_maxr501_orgn0.pces
node3 6m 53.287s 2025-10-10 18:36:02.218 9990 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 843 File: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+33+17.037369231Z_seq1_minr473_maxr5473_orgn0.pces
node3 6m 53.288s 2025-10-10 18:36:02.219 9991 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 6m 53.289s 2025-10-10 18:36:02.220 10013 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 870 Timestamp: 2025-10-10T18:36:00.394262Z Next consensus number: 27160 Legacy running event hash: 520c7b1c665a79a138a1396b9ddaa9c21107581a3bc179664956b97a80ca1658ee8fbbafce87801ad2156e3bd4f14602 Legacy running event mnemonic: dose-glory-slender-crouch Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 654905998 Root hash: 19ac5613bcd8c971c46503e81530933b51406deb15a610e3851744302d8be129cb6f37f1aed543dd4fab656eabbdf36c (root) ConsistencyTestingToolState / culture-second-govern-spin 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 furnace-inhale-bring-tuna 1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork 2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican 3 StringLeaf -8199423269293998771 /3 marble-vanish-power-remove 4 StringLeaf 869 /4 property-lecture-easily-faint
node3 6m 53.295s 2025-10-10 18:36:02.226 9992 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 6m 53.296s 2025-10-10 18:36:02.227 9993 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 870 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/870 {"round":870,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/870/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 6m 53.297s 2025-10-10 18:36:02.228 9994 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/200
node0 6m 53.301s 2025-10-10 18:36:02.232 10014 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+29+25.050603118Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+33+17.050224094Z_seq1_minr473_maxr5473_orgn0.pces
node0 6m 53.302s 2025-10-10 18:36:02.233 10015 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 843 File: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+33+17.050224094Z_seq1_minr473_maxr5473_orgn0.pces
node0 6m 53.302s 2025-10-10 18:36:02.233 10016 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 6m 53.309s 2025-10-10 18:36:02.240 10017 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 6m 53.310s 2025-10-10 18:36:02.241 10018 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 870 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/870 {"round":870,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/870/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 6m 53.312s 2025-10-10 18:36:02.243 10019 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/200
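The round-870 snapshot plays out identically on every node: each writes the state under its own node ID, reports the same root hash 19ac5613..., copies the one qualifying PCES file, and then deletes its oldest retained snapshot directory (round 200 on nodes 0-3, round 2 on the freshly reconnected node4). For a consistency run, the interesting invariant is that all nodes report one and the same root hash per round; a hypothetical check over the "Information for state written to disk" blocks (the detail line follows the header line, as in this log; the swirlds.log path is an assumption):

    import re
    from collections import defaultdict

    ROUND_RE = re.compile(r"Round:\s*(\d+)")
    ROOT_HASH_RE = re.compile(r"Root hash:\s*([0-9a-f]+)")

    def root_hashes_by_round(lines):
        """Map round number -> set of root hashes reported by
        'Information for state written to disk' blocks."""
        hashes = defaultdict(set)
        for header, detail in zip(lines, lines[1:]):
            if "Information for state written to disk" not in header:
                continue
            hashes[int(ROUND_RE.search(detail).group(1))].add(
                ROOT_HASH_RE.search(detail).group(1))
        return hashes

    with open("swirlds.log", encoding="utf-8") as fh:   # log file path is an assumption
        per_round = root_hashes_by_round(fh.read().splitlines())
    for rnd, hs in sorted(per_round.items()):
        assert len(hs) == 1, f"nodes diverged in round {rnd}"
        print(rnd, "->", next(iter(hs))[:16], "... (all nodes agree)")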
node0 7m 52.151s 2025-10-10 18:37:01.082 11423 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1002 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 7m 52.242s 2025-10-10 18:37:01.173 11400 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1002 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 7m 52.353s 2025-10-10 18:37:01.284 11453 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1002 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 7m 52.361s 2025-10-10 18:37:01.292 3315 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1002 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 7m 52.369s 2025-10-10 18:37:01.300 11408 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1002 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 7m 52.436s 2025-10-10 18:37:01.367 11411 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1002 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1002
node1 7m 52.437s 2025-10-10 18:37:01.368 11412 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 1002
node2 7m 52.457s 2025-10-10 18:37:01.388 11456 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1002 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1002
node2 7m 52.458s 2025-10-10 18:37:01.389 11457 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 1002
node1 7m 52.529s 2025-10-10 18:37:01.460 11459 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 1002
node1 7m 52.531s 2025-10-10 18:37:01.462 11460 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1002 Timestamp: 2025-10-10T18:37:00.121053Z Next consensus number: 31971 Legacy running event hash: f32e66faa3ed37ed5ac64d31ddcc389b7aa4d24e90fab7376eaf70ca4e149f05e618fd01e329e5c6279f6d107f418bbe Legacy running event mnemonic: olympic-tent-list-world Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1520188632 Root hash: 2aadc0f0b6d69e319b3386eb7b293d1732ff44118567114940b48a0bcffd36b51ab52994600ebfc7a88463c6971482c6 (root) ConsistencyTestingToolState / fiber-tilt-census-delay 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lunch-sketch-trash-brush 1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork 2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican 3 StringLeaf 5600511696549810557 /3 glare-skate-dish-buyer 4 StringLeaf 1001 /4 monster-firm-tool-rookie
node1 7m 52.538s 2025-10-10 18:37:01.469 11461 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+29+25.298498826Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+33+16.935675264Z_seq1_minr474_maxr5474_orgn0.pces
node1 7m 52.538s 2025-10-10 18:37:01.469 11462 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 974 File: data/saved/preconsensus-events/1/2025/10/10/2025-10-10T18+33+16.935675264Z_seq1_minr474_maxr5474_orgn0.pces
node1 7m 52.539s 2025-10-10 18:37:01.470 11463 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 7m 52.542s 2025-10-10 18:37:01.473 11504 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 1002
node2 7m 52.544s 2025-10-10 18:37:01.475 11505 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1002 Timestamp: 2025-10-10T18:37:00.121053Z Next consensus number: 31971 Legacy running event hash: f32e66faa3ed37ed5ac64d31ddcc389b7aa4d24e90fab7376eaf70ca4e149f05e618fd01e329e5c6279f6d107f418bbe Legacy running event mnemonic: olympic-tent-list-world Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1520188632 Root hash: 2aadc0f0b6d69e319b3386eb7b293d1732ff44118567114940b48a0bcffd36b51ab52994600ebfc7a88463c6971482c6 (root) ConsistencyTestingToolState / fiber-tilt-census-delay 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lunch-sketch-trash-brush 1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork 2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican 3 StringLeaf 5600511696549810557 /3 glare-skate-dish-buyer 4 StringLeaf 1001 /4 monster-firm-tool-rookie
node1 7m 52.549s 2025-10-10 18:37:01.480 11464 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 7m 52.549s 2025-10-10 18:37:01.480 11465 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1002 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1002 {"round":1002,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1002/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 7m 52.551s 2025-10-10 18:37:01.482 11466 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/329
node2 7m 52.551s 2025-10-10 18:37:01.482 11506 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+33+17.026077598Z_seq1_minr473_maxr5473_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+29+25.332917798Z_seq0_minr1_maxr501_orgn0.pces
node2 7m 52.551s 2025-10-10 18:37:01.482 11507 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 974 File: data/saved/preconsensus-events/2/2025/10/10/2025-10-10T18+33+17.026077598Z_seq1_minr473_maxr5473_orgn0.pces
node2 7m 52.551s 2025-10-10 18:37:01.482 11508 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 7m 52.562s 2025-10-10 18:37:01.493 11509 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 7m 52.562s 2025-10-10 18:37:01.493 11510 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1002 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1002 {"round":1002,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1002/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 7m 52.564s 2025-10-10 18:37:01.495 11511 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/329
node0 7m 52.568s 2025-10-10 18:37:01.499 11426 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1002 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1002
node0 7m 52.569s 2025-10-10 18:37:01.500 11427 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 1002
node4 7m 52.585s 2025-10-10 18:37:01.516 3319 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1002 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1002
node4 7m 52.585s 2025-10-10 18:37:01.516 3320 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/17 for round 1002
node3 7m 52.615s 2025-10-10 18:37:01.546 11403 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1002 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1002
node3 7m 52.616s 2025-10-10 18:37:01.547 11404 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/59 for round 1002
node0 7m 52.656s 2025-10-10 18:37:01.587 11482 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 1002
node0 7m 52.657s 2025-10-10 18:37:01.588 11483 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1002 Timestamp: 2025-10-10T18:37:00.121053Z Next consensus number: 31971 Legacy running event hash: f32e66faa3ed37ed5ac64d31ddcc389b7aa4d24e90fab7376eaf70ca4e149f05e618fd01e329e5c6279f6d107f418bbe Legacy running event mnemonic: olympic-tent-list-world Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1520188632 Root hash: 2aadc0f0b6d69e319b3386eb7b293d1732ff44118567114940b48a0bcffd36b51ab52994600ebfc7a88463c6971482c6 (root) ConsistencyTestingToolState / fiber-tilt-census-delay 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lunch-sketch-trash-brush 1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork 2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican 3 StringLeaf 5600511696549810557 /3 glare-skate-dish-buyer 4 StringLeaf 1001 /4 monster-firm-tool-rookie
node0 7m 52.664s 2025-10-10 18:37:01.595 11484 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+29+25.050603118Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+33+17.050224094Z_seq1_minr473_maxr5473_orgn0.pces
node0 7m 52.665s 2025-10-10 18:37:01.596 11485 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 974 File: data/saved/preconsensus-events/0/2025/10/10/2025-10-10T18+33+17.050224094Z_seq1_minr473_maxr5473_orgn0.pces
node0 7m 52.665s 2025-10-10 18:37:01.596 11486 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 7m 52.675s 2025-10-10 18:37:01.606 11487 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 7m 52.676s 2025-10-10 18:37:01.607 11488 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1002 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1002 {"round":1002,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1002/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 7m 52.677s 2025-10-10 18:37:01.608 11489 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/329
node4 7m 52.692s 2025-10-10 18:37:01.623 3359 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/17 for round 1002
node4 7m 52.694s 2025-10-10 18:37:01.625 3360 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1002 Timestamp: 2025-10-10T18:37:00.121053Z Next consensus number: 31971 Legacy running event hash: f32e66faa3ed37ed5ac64d31ddcc389b7aa4d24e90fab7376eaf70ca4e149f05e618fd01e329e5c6279f6d107f418bbe Legacy running event mnemonic: olympic-tent-list-world Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1520188632 Root hash: 2aadc0f0b6d69e319b3386eb7b293d1732ff44118567114940b48a0bcffd36b51ab52994600ebfc7a88463c6971482c6 (root) ConsistencyTestingToolState / fiber-tilt-census-delay 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lunch-sketch-trash-brush 1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork 2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican 3 StringLeaf 5600511696549810557 /3 glare-skate-dish-buyer 4 StringLeaf 1001 /4 monster-firm-tool-rookie
node4 7m 52.702s 2025-10-10 18:37:01.633 3361 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+35+08.248454083Z_seq1_minr727_maxr1227_orgn754.pces Last file: data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+29+25.265589509Z_seq0_minr1_maxr370_orgn0.pces
node4 7m 52.703s 2025-10-10 18:37:01.634 3362 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 974 File: data/saved/preconsensus-events/4/2025/10/10/2025-10-10T18+35+08.248454083Z_seq1_minr727_maxr1227_orgn754.pces
node4 7m 52.703s 2025-10-10 18:37:01.634 3363 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 7m 52.710s 2025-10-10 18:37:01.641 3364 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 7m 52.710s 2025-10-10 18:37:01.641 3365 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1002 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1002 {"round":1002,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1002/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 7m 52.712s 2025-10-10 18:37:01.643 3366 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/75
node3 7m 52.714s 2025-10-10 18:37:01.645 11440 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/59 for round 1002
node3 7m 52.716s 2025-10-10 18:37:01.647 11441 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1002 Timestamp: 2025-10-10T18:37:00.121053Z Next consensus number: 31971 Legacy running event hash: f32e66faa3ed37ed5ac64d31ddcc389b7aa4d24e90fab7376eaf70ca4e149f05e618fd01e329e5c6279f6d107f418bbe Legacy running event mnemonic: olympic-tent-list-world Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1520188632 Root hash: 2aadc0f0b6d69e319b3386eb7b293d1732ff44118567114940b48a0bcffd36b51ab52994600ebfc7a88463c6971482c6 (root) ConsistencyTestingToolState / fiber-tilt-census-delay 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lunch-sketch-trash-brush 1 SingletonNode RosterService.ROSTER_STATE /1 reopen-afraid-flower-fork 2 VirtualMap RosterService.ROSTERS /2 this-there-cement-pelican 3 StringLeaf 5600511696549810557 /3 glare-skate-dish-buyer 4 StringLeaf 1001 /4 monster-firm-tool-rookie
node3 7m 52.721s 2025-10-10 18:37:01.652 11442 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+33+17.037369231Z_seq1_minr473_maxr5473_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+29+25.336866187Z_seq0_minr1_maxr501_orgn0.pces
node3 7m 52.721s 2025-10-10 18:37:01.652 11443 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 974 File: data/saved/preconsensus-events/3/2025/10/10/2025-10-10T18+33+17.037369231Z_seq1_minr473_maxr5473_orgn0.pces
node3 7m 52.722s 2025-10-10 18:37:01.653 11444 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 7m 52.732s 2025-10-10 18:37:01.663 11445 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 7m 52.732s 2025-10-10 18:37:01.663 11446 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1002 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1002 {"round":1002,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1002/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 7m 52.734s 2025-10-10 18:37:01.665 11447 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/329
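Taken together, the three snapshot cycles in this section, round 754 at 18:35:08 (RECONNECT), round 870 at 18:36:02 and round 1002 at 18:37:01 (PERIODIC_SNAPSHOT), show the network producing roughly two consensus rounds per second and saving a state about once a minute, with each periodic save paired with the deletion of the oldest retained state directory (rounds 200 and 329 on nodes 0-3, rounds 2 and 75 on node4, whose saved-state history restarted with the reconnect). A quick arithmetic check of that cadence, using node4's "Finished writing state" timestamps from above:

    from datetime import datetime

    # (round, wall-clock time of node4's "Finished writing state ... to disk" line)
    snapshots = [
        (754, datetime.fromisoformat("2025-10-10T18:35:08.014")),
        (870, datetime.fromisoformat("2025-10-10T18:36:02.197")),
        (1002, datetime.fromisoformat("2025-10-10T18:37:01.641")),
    ]

    for (r0, t0), (r1, t1) in zip(snapshots, snapshots[1:]):
        elapsed = (t1 - t0).total_seconds()
        print(f"rounds {r0} -> {r1}: {r1 - r0} rounds in {elapsed:.1f}s "
              f"(~{(r1 - r0) / elapsed:.2f} rounds/s)")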