| node2 | 0.000ns | 2025-09-24 13:57:47.552 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node0 | 58.000ms | 2025-09-24 13:57:47.610 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node2 | 94.000ms | 2025-09-24 13:57:47.646 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node2 | 110.000ms | 2025-09-24 13:57:47.662 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node0 | 145.000ms | 2025-09-24 13:57:47.697 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node0 | 161.000ms | 2025-09-24 13:57:47.713 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node1 | 184.000ms | 2025-09-24 13:57:47.736 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node2 | 233.000ms | 2025-09-24 13:57:47.785 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [2] are set to run locally | |
| node2 | 239.000ms | 2025-09-24 13:57:47.791 | 5 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | Registering ConsistencyTestingToolState with ConstructableRegistry | |
| node2 | 252.000ms | 2025-09-24 13:57:47.804 | 6 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node0 | 273.000ms | 2025-09-24 13:57:47.825 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [0] are set to run locally | |
| node1 | 277.000ms | 2025-09-24 13:57:47.829 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node0 | 279.000ms | 2025-09-24 13:57:47.831 | 5 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | Registering ConsistencyTestingToolState with ConstructableRegistry | |
| node0 | 291.000ms | 2025-09-24 13:57:47.843 | 6 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node1 | 294.000ms | 2025-09-24 13:57:47.846 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node1 | 411.000ms | 2025-09-24 13:57:47.963 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [1] are set to run locally | |
| node1 | 418.000ms | 2025-09-24 13:57:47.970 | 5 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | Registering ConsistencyTestingToolState with ConstructableRegistry | |
| node1 | 431.000ms | 2025-09-24 13:57:47.983 | 6 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node2 | 689.000ms | 2025-09-24 13:57:48.241 | 9 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | ConsistencyTestingToolState is registered with ConstructableRegistry | |
| node2 | 690.000ms | 2025-09-24 13:57:48.242 | 10 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node0 | 708.000ms | 2025-09-24 13:57:48.260 | 9 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | ConsistencyTestingToolState is registered with ConstructableRegistry | |
| node0 | 709.000ms | 2025-09-24 13:57:48.261 | 10 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node1 | 858.000ms | 2025-09-24 13:57:48.410 | 9 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | ConsistencyTestingToolState is registered with ConstructableRegistry | |
| node1 | 859.000ms | 2025-09-24 13:57:48.411 | 10 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node2 | 1.555s | 2025-09-24 13:57:49.107 | 11 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 864ms | |
| node2 | 1.564s | 2025-09-24 13:57:49.116 | 12 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node2 | 1.568s | 2025-09-24 13:57:49.120 | 13 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node2 | 1.607s | 2025-09-24 13:57:49.159 | 14 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node2 | 1.669s | 2025-09-24 13:57:49.221 | 15 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node2 | 1.670s | 2025-09-24 13:57:49.222 | 16 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node0 | 1.742s | 2025-09-24 13:57:49.294 | 11 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1032ms | |
| node0 | 1.751s | 2025-09-24 13:57:49.303 | 12 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node0 | 1.755s | 2025-09-24 13:57:49.307 | 13 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node0 | 1.796s | 2025-09-24 13:57:49.348 | 14 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node0 | 1.858s | 2025-09-24 13:57:49.410 | 15 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node0 | 1.859s | 2025-09-24 13:57:49.411 | 16 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node1 | 1.957s | 2025-09-24 13:57:49.509 | 11 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1098ms | |
| node1 | 1.966s | 2025-09-24 13:57:49.518 | 12 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node1 | 1.969s | 2025-09-24 13:57:49.521 | 13 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node1 | 2.010s | 2025-09-24 13:57:49.562 | 14 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node1 | 2.075s | 2025-09-24 13:57:49.627 | 15 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node1 | 2.076s | 2025-09-24 13:57:49.628 | 16 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node3 | 2.088s | 2025-09-24 13:57:49.640 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node3 | 2.179s | 2025-09-24 13:57:49.731 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node3 | 2.196s | 2025-09-24 13:57:49.748 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 2.221s | 2025-09-24 13:57:49.773 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node3 | 2.313s | 2025-09-24 13:57:49.865 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [3] are set to run locally | |
| node4 | 2.314s | 2025-09-24 13:57:49.866 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node3 | 2.319s | 2025-09-24 13:57:49.871 | 5 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | Registering ConsistencyTestingToolState with ConstructableRegistry | |
| node4 | 2.330s | 2025-09-24 13:57:49.882 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node3 | 2.332s | 2025-09-24 13:57:49.884 | 6 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 2.449s | 2025-09-24 13:57:50.001 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [4] are set to run locally | |
| node4 | 2.456s | 2025-09-24 13:57:50.008 | 5 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | Registering ConsistencyTestingToolState with ConstructableRegistry | |
| node4 | 2.468s | 2025-09-24 13:57:50.020 | 6 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node3 | 2.799s | 2025-09-24 13:57:50.351 | 9 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | ConsistencyTestingToolState is registered with ConstructableRegistry | |
| node3 | 2.800s | 2025-09-24 13:57:50.352 | 10 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node4 | 2.905s | 2025-09-24 13:57:50.457 | 9 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | ConsistencyTestingToolState is registered with ConstructableRegistry | |
| node4 | 2.906s | 2025-09-24 13:57:50.458 | 10 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node2 | 3.745s | 2025-09-24 13:57:51.297 | 17 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node2 | 3.834s | 2025-09-24 13:57:51.386 | 20 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 3.837s | 2025-09-24 13:57:51.389 | 21 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node2 | 3.838s | 2025-09-24 13:57:51.390 | 22 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node0 | 3.868s | 2025-09-24 13:57:51.420 | 17 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node4 | 3.915s | 2025-09-24 13:57:51.467 | 11 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1008ms | |
| node4 | 3.924s | 2025-09-24 13:57:51.476 | 12 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node4 | 3.928s | 2025-09-24 13:57:51.480 | 13 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node0 | 3.961s | 2025-09-24 13:57:51.513 | 20 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 3.964s | 2025-09-24 13:57:51.516 | 21 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node0 | 3.964s | 2025-09-24 13:57:51.516 | 22 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 3.972s | 2025-09-24 13:57:51.524 | 14 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node3 | 4.016s | 2025-09-24 13:57:51.568 | 11 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1214ms | |
| node3 | 4.026s | 2025-09-24 13:57:51.578 | 12 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node3 | 4.032s | 2025-09-24 13:57:51.584 | 13 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 4.047s | 2025-09-24 13:57:51.599 | 15 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node4 | 4.048s | 2025-09-24 13:57:51.600 | 16 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node3 | 4.079s | 2025-09-24 13:57:51.631 | 14 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node1 | 4.135s | 2025-09-24 13:57:51.687 | 17 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node3 | 4.163s | 2025-09-24 13:57:51.715 | 15 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node3 | 4.165s | 2025-09-24 13:57:51.717 | 16 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node1 | 4.213s | 2025-09-24 13:57:51.765 | 20 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 4.216s | 2025-09-24 13:57:51.768 | 21 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node1 | 4.216s | 2025-09-24 13:57:51.768 | 22 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node2 | 4.625s | 2025-09-24 13:57:52.177 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 4.628s | 2025-09-24 13:57:52.180 | 32 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node2 | 4.634s | 2025-09-24 13:57:52.186 | 33 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node2 | 4.646s | 2025-09-24 13:57:52.198 | 34 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 4.648s | 2025-09-24 13:57:52.200 | 35 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 4.736s | 2025-09-24 13:57:52.288 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 4.739s | 2025-09-24 13:57:52.291 | 32 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node0 | 4.744s | 2025-09-24 13:57:52.296 | 33 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node0 | 4.756s | 2025-09-24 13:57:52.308 | 34 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 4.758s | 2025-09-24 13:57:52.310 | 35 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 5.013s | 2025-09-24 13:57:52.565 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 5.017s | 2025-09-24 13:57:52.569 | 32 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node1 | 5.023s | 2025-09-24 13:57:52.575 | 33 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node1 | 5.036s | 2025-09-24 13:57:52.588 | 34 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 5.038s | 2025-09-24 13:57:52.590 | 35 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 5.768s | 2025-09-24 13:57:53.320 | 36 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26282665] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=265330, randomLong=-7115944119231367662, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=12750, randomLong=-4837823781939610070, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1367888, data=35, exception=null] OS Health Check Report - Complete (took 1025 ms) | |||||||||
| node2 | 5.802s | 2025-09-24 13:57:53.354 | 37 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node2 | 5.810s | 2025-09-24 13:57:53.362 | 38 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node2 | 5.815s | 2025-09-24 13:57:53.367 | 39 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node0 | 5.871s | 2025-09-24 13:57:53.423 | 36 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26326028] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=185220, randomLong=8422630885394225497, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=11360, randomLong=-391322077725030509, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1264279, data=35, exception=null] OS Health Check Report - Complete (took 1021 ms) | |||||||||
| node2 | 5.899s | 2025-09-24 13:57:53.451 | 40 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IjqFwA==", "port": 30124 }, { "ipAddressV4": "CoAAJQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IojrZw==", "port": 30125 }, { "ipAddressV4": "CoAAIQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "kpQ01Q==", "port": 30126 }, { "ipAddressV4": "CoAAIw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "InrW4g==", "port": 30127 }, { "ipAddressV4": "CoAAJg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IimGKA==", "port": 30128 }, { "ipAddressV4": "CoAAGQ==", "port": 30128 }] }] } | |||||||||
| node0 | 5.903s | 2025-09-24 13:57:53.455 | 37 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node0 | 5.911s | 2025-09-24 13:57:53.463 | 38 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node0 | 5.917s | 2025-09-24 13:57:53.469 | 39 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node2 | 5.920s | 2025-09-24 13:57:53.472 | 41 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/2/ConsistencyTestLog.csv | |
| node2 | 5.921s | 2025-09-24 13:57:53.473 | 42 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node2 | 5.935s | 2025-09-24 13:57:53.487 | 43 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 144bb6cc78955b192f77252793152a8db982f52dbf1f06518446e68b9ee773e33a30e20ce4f228fa10d0b63755961cc9 (root) ConsistencyTestingToolState / maximum-charge-bargain-glimpse 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already | |||||||||
| node0 | 5.999s | 2025-09-24 13:57:53.551 | 40 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IjqFwA==", "port": 30124 }, { "ipAddressV4": "CoAAJQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IojrZw==", "port": 30125 }, { "ipAddressV4": "CoAAIQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "kpQ01Q==", "port": 30126 }, { "ipAddressV4": "CoAAIw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "InrW4g==", "port": 30127 }, { "ipAddressV4": "CoAAJg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IimGKA==", "port": 30128 }, { "ipAddressV4": "CoAAGQ==", "port": 30128 }] }] } | |||||||||
| node0 | 6.021s | 2025-09-24 13:57:53.573 | 41 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/0/ConsistencyTestLog.csv | |
| node0 | 6.022s | 2025-09-24 13:57:53.574 | 42 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node0 | 6.039s | 2025-09-24 13:57:53.591 | 43 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 144bb6cc78955b192f77252793152a8db982f52dbf1f06518446e68b9ee773e33a30e20ce4f228fa10d0b63755961cc9 (root) ConsistencyTestingToolState / maximum-charge-bargain-glimpse 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already | |||||||||
| node4 | 6.120s | 2025-09-24 13:57:53.672 | 17 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node2 | 6.152s | 2025-09-24 13:57:53.704 | 45 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node1 | 6.154s | 2025-09-24 13:57:53.706 | 36 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26235387] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=143970, randomLong=4014490631719349526, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=10340, randomLong=7372993617420761100, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1206260, data=35, exception=null] OS Health Check Report - Complete (took 1022 ms) | |||||||||
| node2 | 6.157s | 2025-09-24 13:57:53.709 | 46 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node2 | 6.162s | 2025-09-24 13:57:53.714 | 47 | INFO | STARTUP | <<start-node-2>> | ConsistencyTestingToolMain: | init called in Main for node 2. | |
| node2 | 6.162s | 2025-09-24 13:57:53.714 | 48 | INFO | STARTUP | <<start-node-2>> | SwirldsPlatform: | Starting platform 2 | |
| node2 | 6.164s | 2025-09-24 13:57:53.716 | 49 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node2 | 6.167s | 2025-09-24 13:57:53.719 | 50 | INFO | STARTUP | <<start-node-2>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node2 | 6.168s | 2025-09-24 13:57:53.720 | 51 | INFO | STARTUP | <<start-node-2>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node2 | 6.169s | 2025-09-24 13:57:53.721 | 52 | INFO | STARTUP | <<start-node-2>> | InputWireChecks: | All input wires have been bound. | |
| node2 | 6.170s | 2025-09-24 13:57:53.722 | 53 | WARN | STARTUP | <<start-node-2>> | PcesFileTracker: | No preconsensus event files available | |
| node2 | 6.171s | 2025-09-24 13:57:53.723 | 54 | INFO | STARTUP | <<start-node-2>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node2 | 6.172s | 2025-09-24 13:57:53.724 | 55 | INFO | STARTUP | <<start-node-2>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node2 | 6.173s | 2025-09-24 13:57:53.725 | 56 | INFO | STARTUP | <<app: appMain 2>> | ConsistencyTestingToolMain: | run called in Main. | |
| node2 | 6.175s | 2025-09-24 13:57:53.727 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | DefaultStatusStateMachine: | Platform spent 184.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node2 | 6.180s | 2025-09-24 13:57:53.732 | 58 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | DefaultStatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node1 | 6.186s | 2025-09-24 13:57:53.738 | 37 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node1 | 6.195s | 2025-09-24 13:57:53.747 | 38 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node1 | 6.201s | 2025-09-24 13:57:53.753 | 39 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node4 | 6.224s | 2025-09-24 13:57:53.776 | 20 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 6.227s | 2025-09-24 13:57:53.779 | 21 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node4 | 6.228s | 2025-09-24 13:57:53.780 | 22 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node0 | 6.260s | 2025-09-24 13:57:53.812 | 45 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node0 | 6.265s | 2025-09-24 13:57:53.817 | 46 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node0 | 6.271s | 2025-09-24 13:57:53.823 | 47 | INFO | STARTUP | <<start-node-0>> | ConsistencyTestingToolMain: | init called in Main for node 0. | |
| node0 | 6.271s | 2025-09-24 13:57:53.823 | 48 | INFO | STARTUP | <<start-node-0>> | SwirldsPlatform: | Starting platform 0 | |
| node0 | 6.272s | 2025-09-24 13:57:53.824 | 49 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node0 | 6.276s | 2025-09-24 13:57:53.828 | 50 | INFO | STARTUP | <<start-node-0>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node0 | 6.277s | 2025-09-24 13:57:53.829 | 51 | INFO | STARTUP | <<start-node-0>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node0 | 6.277s | 2025-09-24 13:57:53.829 | 52 | INFO | STARTUP | <<start-node-0>> | InputWireChecks: | All input wires have been bound. | |
| node0 | 6.279s | 2025-09-24 13:57:53.831 | 53 | WARN | STARTUP | <<start-node-0>> | PcesFileTracker: | No preconsensus event files available | |
| node0 | 6.279s | 2025-09-24 13:57:53.831 | 54 | INFO | STARTUP | <<start-node-0>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node0 | 6.281s | 2025-09-24 13:57:53.833 | 55 | INFO | STARTUP | <<start-node-0>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node0 | 6.282s | 2025-09-24 13:57:53.834 | 56 | INFO | STARTUP | <<app: appMain 0>> | ConsistencyTestingToolMain: | run called in Main. | |
| node0 | 6.284s | 2025-09-24 13:57:53.836 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | DefaultStatusStateMachine: | Platform spent 183.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node3 | 6.286s | 2025-09-24 13:57:53.838 | 17 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node0 | 6.289s | 2025-09-24 13:57:53.841 | 58 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | DefaultStatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node1 | 6.292s | 2025-09-24 13:57:53.844 | 40 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IjqFwA==", "port": 30124 }, { "ipAddressV4": "CoAAJQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCr+VrAXqUt/sv9P8nwY8zFo3u84mpxM+7LM6zVld3BrtilJUe79vvcKfxLV9fgXXBpU06I4BbOZxATeJTO+gqeiYxR+OtcjfBkpCJKXvBmZ29cOChI2Y+VGDG6vqjlCOkJsJoC3jvERwLEupQj9QMhhWpW9mVSdPQ3GA8QkV3hY+AarzW7Q+a+Q0wraVU+Gsd3m96czUhrI0FBvc4cqf1nLAJsJsynTJe9rZOqhfLZQFl8s8Igro1h0tCpZ9AuachI3hTbBtvfbaAuKYE/I7sYo4LHk3iBUZzR5hknkpl6QJBBC5cCykfBzp6nokuVFzanW8TXpsfxKSUnOa0wepMy19uHSLkbIMiCBVse94KDs5mh1KJTeMZnp4n6DGJUjTTNO1q7L5o0QuTi3P7hoXVqoWRgQh+gCQfApsPOEC11XRxOi4rnj9aDHe4PvjeF/PNMCZLodHm31d4u3rVCJ9Iy5TOdAJEbnBvZMGQyfzOC2OxU9hfOUdf6NhWgf+d+wEUCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAeZU5KmMNIRFvgSUxbEPIxB3yAzUSdJDW1bom3z/ru5M5SctebuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9M=", "gossipEndpoint": [{ "ipAddressV4": "IojrZw==", "port": 30125 }, { "ipAddressV4": "CoAAIQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "kpQ01Q==", "port": 30126 }, { "ipAddressV4": "CoAAIw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "InrW4g==", "port": 30127 }, { "ipAddressV4": "CoAAJg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IimGKA==", "port": 30128 }, { "ipAddressV4": "CoAAGQ==", "port": 30128 }] }] } | |||||||||
| node1 | 6.315s | 2025-09-24 13:57:53.867 | 41 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/1/ConsistencyTestLog.csv | |
| node1 | 6.316s | 2025-09-24 13:57:53.868 | 42 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node1 | 6.331s | 2025-09-24 13:57:53.883 | 43 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 144bb6cc78955b192f77252793152a8db982f52dbf1f06518446e68b9ee773e33a30e20ce4f228fa10d0b63755961cc9 (root) ConsistencyTestingToolState / maximum-charge-bargain-glimpse 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already | |||||||||
| node3 | 6.371s | 2025-09-24 13:57:53.923 | 20 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 6.374s | 2025-09-24 13:57:53.926 | 21 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node3 | 6.377s | 2025-09-24 13:57:53.929 | 22 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node1 | 6.560s | 2025-09-24 13:57:54.112 | 45 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node1 | 6.565s | 2025-09-24 13:57:54.117 | 46 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node1 | 6.570s | 2025-09-24 13:57:54.122 | 47 | INFO | STARTUP | <<start-node-1>> | ConsistencyTestingToolMain: | init called in Main for node 1. | |
| node1 | 6.571s | 2025-09-24 13:57:54.123 | 48 | INFO | STARTUP | <<start-node-1>> | SwirldsPlatform: | Starting platform 1 | |
| node1 | 6.572s | 2025-09-24 13:57:54.124 | 49 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node1 | 6.576s | 2025-09-24 13:57:54.128 | 50 | INFO | STARTUP | <<start-node-1>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node1 | 6.577s | 2025-09-24 13:57:54.129 | 51 | INFO | STARTUP | <<start-node-1>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node1 | 6.577s | 2025-09-24 13:57:54.129 | 52 | INFO | STARTUP | <<start-node-1>> | InputWireChecks: | All input wires have been bound. | |
| node1 | 6.579s | 2025-09-24 13:57:54.131 | 53 | WARN | STARTUP | <<start-node-1>> | PcesFileTracker: | No preconsensus event files available | |
| node1 | 6.579s | 2025-09-24 13:57:54.131 | 54 | INFO | STARTUP | <<start-node-1>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node1 | 6.581s | 2025-09-24 13:57:54.133 | 55 | INFO | STARTUP | <<start-node-1>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node1 | 6.582s | 2025-09-24 13:57:54.134 | 56 | INFO | STARTUP | <<app: appMain 1>> | ConsistencyTestingToolMain: | run called in Main. | |
| node1 | 6.583s | 2025-09-24 13:57:54.135 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | DefaultStatusStateMachine: | Platform spent 193.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node1 | 6.588s | 2025-09-24 13:57:54.140 | 58 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | DefaultStatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node4 | 7.131s | 2025-09-24 13:57:54.683 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 7.135s | 2025-09-24 13:57:54.687 | 32 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node4 | 7.142s | 2025-09-24 13:57:54.694 | 33 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node4 | 7.154s | 2025-09-24 13:57:54.706 | 34 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 7.157s | 2025-09-24 13:57:54.709 | 35 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 7.200s | 2025-09-24 13:57:54.752 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 7.204s | 2025-09-24 13:57:54.756 | 32 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node3 | 7.211s | 2025-09-24 13:57:54.763 | 33 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node3 | 7.222s | 2025-09-24 13:57:54.774 | 34 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 7.224s | 2025-09-24 13:57:54.776 | 35 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 8.285s | 2025-09-24 13:57:55.837 | 36 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26152559] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=139650, randomLong=3720444922136937655, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=15109, randomLong=5687449677657576430, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1237530, data=35, exception=null] OS Health Check Report - Complete (took 1025 ms) | |||||||||
| node4 | 8.319s | 2025-09-24 13:57:55.871 | 37 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node4 | 8.329s | 2025-09-24 13:57:55.881 | 38 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node4 | 8.335s | 2025-09-24 13:57:55.887 | 39 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node3 | 8.358s | 2025-09-24 13:57:55.910 | 36 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26194742] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=212920, randomLong=4243337680759881256, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=9060, randomLong=-7700362675037949631, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1414459, data=35, exception=null] OS Health Check Report - Complete (took 1027 ms) | |||||||||
| node3 | 8.398s | 2025-09-24 13:57:55.950 | 37 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node3 | 8.407s | 2025-09-24 13:57:55.959 | 38 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node3 | 8.413s | 2025-09-24 13:57:55.965 | 39 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node4 | 8.424s | 2025-09-24 13:57:55.976 | 40 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IjqFwA==", "port": 30124 }, { "ipAddressV4": "CoAAJQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IojrZw==", "port": 30125 }, { "ipAddressV4": "CoAAIQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "kpQ01Q==", "port": 30126 }, { "ipAddressV4": "CoAAIw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "InrW4g==", "port": 30127 }, { "ipAddressV4": "CoAAJg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IimGKA==", "port": 30128 }, { "ipAddressV4": "CoAAGQ==", "port": 30128 }] }] } | |||||||||
| node4 | 8.445s | 2025-09-24 13:57:55.997 | 41 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv | |
| node4 | 8.445s | 2025-09-24 13:57:55.997 | 42 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node4 | 8.459s | 2025-09-24 13:57:56.011 | 43 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 144bb6cc78955b192f77252793152a8db982f52dbf1f06518446e68b9ee773e33a30e20ce4f228fa10d0b63755961cc9 (root) ConsistencyTestingToolState / maximum-charge-bargain-glimpse 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already | |||||||||
| node3 | 8.504s | 2025-09-24 13:57:56.056 | 40 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IjqFwA==", "port": 30124 }, { "ipAddressV4": "CoAAJQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IojrZw==", "port": 30125 }, { "ipAddressV4": "CoAAIQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "kpQ01Q==", "port": 30126 }, { "ipAddressV4": "CoAAIw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "InrW4g==", "port": 30127 }, { "ipAddressV4": "CoAAJg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IimGKA==", "port": 30128 }, { "ipAddressV4": "CoAAGQ==", "port": 30128 }] }] } | |||||||||
| node3 | 8.528s | 2025-09-24 13:57:56.080 | 41 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/3/ConsistencyTestLog.csv | |
| node3 | 8.528s | 2025-09-24 13:57:56.080 | 42 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node3 | 8.544s | 2025-09-24 13:57:56.096 | 43 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 144bb6cc78955b192f77252793152a8db982f52dbf1f06518446e68b9ee773e33a30e20ce4f228fa10d0b63755961cc9 (root) ConsistencyTestingToolState / maximum-charge-bargain-glimpse 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already | |||||||||
| node4 | 8.665s | 2025-09-24 13:57:56.217 | 45 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node4 | 8.671s | 2025-09-24 13:57:56.223 | 46 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node4 | 8.677s | 2025-09-24 13:57:56.229 | 47 | INFO | STARTUP | <<start-node-4>> | ConsistencyTestingToolMain: | init called in Main for node 4. | |
| node4 | 8.678s | 2025-09-24 13:57:56.230 | 48 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | Starting platform 4 | |
| node4 | 8.679s | 2025-09-24 13:57:56.231 | 49 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node4 | 8.683s | 2025-09-24 13:57:56.235 | 50 | INFO | STARTUP | <<start-node-4>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node4 | 8.684s | 2025-09-24 13:57:56.236 | 51 | INFO | STARTUP | <<start-node-4>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node4 | 8.685s | 2025-09-24 13:57:56.237 | 52 | INFO | STARTUP | <<start-node-4>> | InputWireChecks: | All input wires have been bound. | |
| node4 | 8.686s | 2025-09-24 13:57:56.238 | 53 | WARN | STARTUP | <<start-node-4>> | PcesFileTracker: | No preconsensus event files available | |
| node4 | 8.687s | 2025-09-24 13:57:56.239 | 54 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node4 | 8.688s | 2025-09-24 13:57:56.240 | 55 | INFO | STARTUP | <<start-node-4>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node4 | 8.689s | 2025-09-24 13:57:56.241 | 56 | INFO | STARTUP | <<app: appMain 4>> | ConsistencyTestingToolMain: | run called in Main. | |
| node4 | 8.690s | 2025-09-24 13:57:56.242 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | DefaultStatusStateMachine: | Platform spent 166.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node4 | 8.695s | 2025-09-24 13:57:56.247 | 58 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | DefaultStatusStateMachine: | Platform spent 3.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node3 | 8.765s | 2025-09-24 13:57:56.317 | 45 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node3 | 8.771s | 2025-09-24 13:57:56.323 | 46 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node3 | 8.776s | 2025-09-24 13:57:56.328 | 47 | INFO | STARTUP | <<start-node-3>> | ConsistencyTestingToolMain: | init called in Main for node 3. | |
| node3 | 8.776s | 2025-09-24 13:57:56.328 | 48 | INFO | STARTUP | <<start-node-3>> | SwirldsPlatform: | Starting platform 3 | |
| node3 | 8.777s | 2025-09-24 13:57:56.329 | 49 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node3 | 8.780s | 2025-09-24 13:57:56.332 | 50 | INFO | STARTUP | <<start-node-3>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node3 | 8.781s | 2025-09-24 13:57:56.333 | 51 | INFO | STARTUP | <<start-node-3>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node3 | 8.782s | 2025-09-24 13:57:56.334 | 52 | INFO | STARTUP | <<start-node-3>> | InputWireChecks: | All input wires have been bound. | |
| node3 | 8.784s | 2025-09-24 13:57:56.336 | 53 | WARN | STARTUP | <<start-node-3>> | PcesFileTracker: | No preconsensus event files available | |
| node3 | 8.784s | 2025-09-24 13:57:56.336 | 54 | INFO | STARTUP | <<start-node-3>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node3 | 8.786s | 2025-09-24 13:57:56.338 | 55 | INFO | STARTUP | <<start-node-3>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node3 | 8.787s | 2025-09-24 13:57:56.339 | 56 | INFO | STARTUP | <<app: appMain 3>> | ConsistencyTestingToolMain: | run called in Main. | |
| node3 | 8.789s | 2025-09-24 13:57:56.341 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | DefaultStatusStateMachine: | Platform spent 182.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node3 | 8.794s | 2025-09-24 13:57:56.346 | 58 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | DefaultStatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node2 | 9.173s | 2025-09-24 13:57:56.725 | 59 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting2.csv' ] | |
| node2 | 9.175s | 2025-09-24 13:57:56.727 | 60 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node0 | 9.282s | 2025-09-24 13:57:56.834 | 59 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting0.csv' ] | |
| node0 | 9.288s | 2025-09-24 13:57:56.840 | 60 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node1 | 9.584s | 2025-09-24 13:57:57.136 | 59 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting1.csv' ] | |
| node1 | 9.587s | 2025-09-24 13:57:57.139 | 60 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node4 | 11.690s | 2025-09-24 13:57:59.242 | 59 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ] | |
| node4 | 11.693s | 2025-09-24 13:57:59.245 | 60 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node3 | 11.788s | 2025-09-24 13:57:59.340 | 59 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting3.csv' ] | |
| node3 | 11.791s | 2025-09-24 13:57:59.343 | 60 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node2 | 16.269s | 2025-09-24 13:58:03.821 | 61 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | DefaultStatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node0 | 16.380s | 2025-09-24 13:58:03.932 | 61 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | DefaultStatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node1 | 16.679s | 2025-09-24 13:58:04.231 | 61 | INFO | PLATFORM_STATUS | <platformForkJoinThread-5> | DefaultStatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node0 | 18.272s | 2025-09-24 13:58:05.824 | 62 | INFO | STARTUP | <<scheduler TransactionHandler>> | DefaultTransactionHandler: | Ignoring empty consensus round 1 | |
| node1 | 18.341s | 2025-09-24 13:58:05.893 | 62 | INFO | STARTUP | <<scheduler TransactionHandler>> | DefaultTransactionHandler: | Ignoring empty consensus round 1 | |
| node3 | 18.367s | 2025-09-24 13:58:05.919 | 61 | INFO | STARTUP | <<scheduler TransactionHandler>> | DefaultTransactionHandler: | Ignoring empty consensus round 1 | |
| node4 | 18.428s | 2025-09-24 13:58:05.980 | 61 | INFO | STARTUP | <<scheduler TransactionHandler>> | DefaultTransactionHandler: | Ignoring empty consensus round 1 | |
| node2 | 18.482s | 2025-09-24 13:58:06.034 | 62 | INFO | STARTUP | <<scheduler TransactionHandler>> | DefaultTransactionHandler: | Ignoring empty consensus round 1 | |
| node4 | 18.786s | 2025-09-24 13:58:06.338 | 62 | INFO | PLATFORM_STATUS | <platformForkJoinThread-5> | DefaultStatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node3 | 18.883s | 2025-09-24 13:58:06.435 | 62 | INFO | PLATFORM_STATUS | <platformForkJoinThread-6> | DefaultStatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node0 | 19.259s | 2025-09-24 13:58:06.811 | 63 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | DefaultStatusStateMachine: | Platform spent 2.9 s in CHECKING. Now in ACTIVE | |
| node1 | 19.260s | 2025-09-24 13:58:06.812 | 63 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | DefaultStatusStateMachine: | Platform spent 2.6 s in CHECKING. Now in ACTIVE | |
| node0 | 19.262s | 2025-09-24 13:58:06.814 | 65 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node1 | 19.263s | 2025-09-24 13:58:06.815 | 65 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node4 | 19.274s | 2025-09-24 13:58:06.826 | 64 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node3 | 19.309s | 2025-09-24 13:58:06.861 | 64 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node2 | 19.344s | 2025-09-24 13:58:06.896 | 63 | INFO | PLATFORM_STATUS | <platformForkJoinThread-7> | DefaultStatusStateMachine: | Platform spent 3.1 s in CHECKING. Now in ACTIVE | |
| node2 | 19.348s | 2025-09-24 13:58:06.900 | 65 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node2 | 19.544s | 2025-09-24 13:58:07.096 | 80 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2 | |
| node2 | 19.546s | 2025-09-24 13:58:07.098 | 81 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2 | |
| node1 | 19.640s | 2025-09-24 13:58:07.192 | 80 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2 | |
| node1 | 19.642s | 2025-09-24 13:58:07.194 | 81 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2 | |
| node0 | 19.666s | 2025-09-24 13:58:07.218 | 80 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2 | |
| node0 | 19.668s | 2025-09-24 13:58:07.220 | 81 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2 | |
| node4 | 19.679s | 2025-09-24 13:58:07.231 | 79 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2 | |
| node4 | 19.681s | 2025-09-24 13:58:07.233 | 80 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2 | |
| node3 | 19.711s | 2025-09-24 13:58:07.263 | 79 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2 | |
| node3 | 19.713s | 2025-09-24 13:58:07.265 | 80 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2 | |
| node2 | 19.796s | 2025-09-24 13:58:07.348 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2 | |
| node2 | 19.799s | 2025-09-24 13:58:07.351 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 2 Timestamp: 2025-09-24T13:58:04.890489Z Next consensus number: 10 Legacy running event hash: c0dc649b48af1e080410987da884e7024b8ee3b9e0fe29185e3820154f84aeac0574a7776d38863b51de97bee56ef18b Legacy running event mnemonic: tonight-nature-bicycle-scissors Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -799502542 Root hash: fba04f565f86cb2f8e234111af316fac220821b69c788f6e4cc4fe80127ee3fe6ee20328d8fc2c02b41fe7637c5491fd (root) ConsistencyTestingToolState / metal-turn-valid-olive 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 hand-danger-trick-escape 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 8898299034380133366 /3 nephew-deny-real-blanket 4 StringLeaf 1 /4 wreck-whale-old-bottom | |||||||||
| node2 | 19.842s | 2025-09-24 13:58:07.394 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 19.843s | 2025-09-24 13:58:07.395 | 114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 19.843s | 2025-09-24 13:58:07.395 | 115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 19.845s | 2025-09-24 13:58:07.397 | 116 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 19.851s | 2025-09-24 13:58:07.403 | 117 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 19.888s | 2025-09-24 13:58:07.440 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2 | |
| node1 | 19.891s | 2025-09-24 13:58:07.443 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 2 Timestamp: 2025-09-24T13:58:04.890489Z Next consensus number: 10 Legacy running event hash: c0dc649b48af1e080410987da884e7024b8ee3b9e0fe29185e3820154f84aeac0574a7776d38863b51de97bee56ef18b Legacy running event mnemonic: tonight-nature-bicycle-scissors Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -799502542 Root hash: fba04f565f86cb2f8e234111af316fac220821b69c788f6e4cc4fe80127ee3fe6ee20328d8fc2c02b41fe7637c5491fd (root) ConsistencyTestingToolState / metal-turn-valid-olive 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 hand-danger-trick-escape 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 8898299034380133366 /3 nephew-deny-real-blanket 4 StringLeaf 1 /4 wreck-whale-old-bottom | |||||||||
| node0 | 19.909s | 2025-09-24 13:58:07.461 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2 | |
| node0 | 19.912s | 2025-09-24 13:58:07.464 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 2 Timestamp: 2025-09-24T13:58:04.890489Z Next consensus number: 10 Legacy running event hash: c0dc649b48af1e080410987da884e7024b8ee3b9e0fe29185e3820154f84aeac0574a7776d38863b51de97bee56ef18b Legacy running event mnemonic: tonight-nature-bicycle-scissors Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -799502542 Root hash: fba04f565f86cb2f8e234111af316fac220821b69c788f6e4cc4fe80127ee3fe6ee20328d8fc2c02b41fe7637c5491fd (root) ConsistencyTestingToolState / metal-turn-valid-olive 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 hand-danger-trick-escape 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 8898299034380133366 /3 nephew-deny-real-blanket 4 StringLeaf 1 /4 wreck-whale-old-bottom | |||||||||
| node1 | 19.925s | 2025-09-24 13:58:07.477 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T13+58+04.123063967Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 19.925s | 2025-09-24 13:58:07.477 | 114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T13+58+04.123063967Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 19.925s | 2025-09-24 13:58:07.477 | 115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 19.926s | 2025-09-24 13:58:07.478 | 116 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 19.932s | 2025-09-24 13:58:07.484 | 117 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 19.942s | 2025-09-24 13:58:07.494 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T13+58+03.971035961Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 19.943s | 2025-09-24 13:58:07.495 | 114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T13+58+03.971035961Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 19.943s | 2025-09-24 13:58:07.495 | 115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 19.944s | 2025-09-24 13:58:07.496 | 116 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 19.950s | 2025-09-24 13:58:07.502 | 117 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 19.957s | 2025-09-24 13:58:07.509 | 110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2 | |
| node4 | 19.961s | 2025-09-24 13:58:07.513 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 2 Timestamp: 2025-09-24T13:58:04.890489Z Next consensus number: 10 Legacy running event hash: c0dc649b48af1e080410987da884e7024b8ee3b9e0fe29185e3820154f84aeac0574a7776d38863b51de97bee56ef18b Legacy running event mnemonic: tonight-nature-bicycle-scissors Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -799502542 Root hash: fba04f565f86cb2f8e234111af316fac220821b69c788f6e4cc4fe80127ee3fe6ee20328d8fc2c02b41fe7637c5491fd (root) ConsistencyTestingToolState / metal-turn-valid-olive 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 hand-danger-trick-escape 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 8898299034380133366 /3 nephew-deny-real-blanket 4 StringLeaf 1 /4 wreck-whale-old-bottom | |||||||||
| node3 | 20.000s | 2025-09-24 13:58:07.552 | 110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2 | |
| node4 | 20.002s | 2025-09-24 13:58:07.554 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T13+58+04.141770069Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 20.002s | 2025-09-24 13:58:07.554 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T13+58+04.141770069Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 20.003s | 2025-09-24 13:58:07.555 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 2 Timestamp: 2025-09-24T13:58:04.890489Z Next consensus number: 10 Legacy running event hash: c0dc649b48af1e080410987da884e7024b8ee3b9e0fe29185e3820154f84aeac0574a7776d38863b51de97bee56ef18b Legacy running event mnemonic: tonight-nature-bicycle-scissors Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -799502542 Root hash: fba04f565f86cb2f8e234111af316fac220821b69c788f6e4cc4fe80127ee3fe6ee20328d8fc2c02b41fe7637c5491fd (root) ConsistencyTestingToolState / metal-turn-valid-olive 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 hand-danger-trick-escape 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 8898299034380133366 /3 nephew-deny-real-blanket 4 StringLeaf 1 /4 wreck-whale-old-bottom | |||||||||
| node4 | 20.003s | 2025-09-24 13:58:07.555 | 114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 20.004s | 2025-09-24 13:58:07.556 | 115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 20.010s | 2025-09-24 13:58:07.562 | 116 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 20.041s | 2025-09-24 13:58:07.593 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T13+58+04.081642292Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 20.041s | 2025-09-24 13:58:07.593 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T13+58+04.081642292Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 20.042s | 2025-09-24 13:58:07.594 | 114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 20.043s | 2025-09-24 13:58:07.595 | 115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 20.049s | 2025-09-24 13:58:07.601 | 116 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 20.690s | 2025-09-24 13:58:08.242 | 120 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | DefaultStatusStateMachine: | Platform spent 1.9 s in CHECKING. Now in ACTIVE | |
| node3 | 21.202s | 2025-09-24 13:58:08.754 | 121 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | DefaultStatusStateMachine: | Platform spent 2.3 s in CHECKING. Now in ACTIVE | |
| node2 | 1m 13.979s | 2025-09-24 13:59:01.531 | 1067 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 90 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 1m 14.116s | 2025-09-24 13:59:01.668 | 1073 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 90 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 1m 14.186s | 2025-09-24 13:59:01.738 | 1058 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 90 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 1m 14.192s | 2025-09-24 13:59:01.744 | 1065 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 90 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 1m 14.260s | 2025-09-24 13:59:01.812 | 1077 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 90 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 1m 14.437s | 2025-09-24 13:59:01.989 | 1070 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 90 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/90 | |
| node2 | 1m 14.439s | 2025-09-24 13:59:01.991 | 1071 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 90 | |
| node2 | 1m 14.526s | 2025-09-24 13:59:02.078 | 1104 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 90 | |
| node3 | 1m 14.526s | 2025-09-24 13:59:02.078 | 1080 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 90 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/90 | |
| node3 | 1m 14.526s | 2025-09-24 13:59:02.078 | 1081 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 90 | |
| node0 | 1m 14.528s | 2025-09-24 13:59:02.080 | 1076 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 90 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/90 | |
| node0 | 1m 14.529s | 2025-09-24 13:59:02.081 | 1077 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 90 | |
| node2 | 1m 14.530s | 2025-09-24 13:59:02.082 | 1105 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 90 Timestamp: 2025-09-24T13:59:00.197361Z Next consensus number: 2278 Legacy running event hash: 39fba5fe6c00ebc7e9d386136d294558fd16322f7c1079a5e6a1820a610b4764a8b18733a4f527b9dacf46bd7951613e Legacy running event mnemonic: disorder-doctor-length-mango Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1872164731 Root hash: 93efd9b6393342b3066d10e6d6b8cf0a5bce421d439ba338f385094dec609503772beba9b0f6564e9bb370c3eaae688c (root) ConsistencyTestingToolState / lion-sing-chapter-filter 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 mention-season-wave-federal 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 7659219589626362389 /3 stool-enroll-stable-client 4 StringLeaf 89 /4 pull-inner-copper-invest | |||||||||
| node4 | 1m 14.530s | 2025-09-24 13:59:02.082 | 1068 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 90 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/90 | |
| node4 | 1m 14.531s | 2025-09-24 13:59:02.083 | 1069 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 90 | |
| node2 | 1m 14.542s | 2025-09-24 13:59:02.094 | 1106 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 1m 14.542s | 2025-09-24 13:59:02.094 | 1107 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 63 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 1m 14.542s | 2025-09-24 13:59:02.094 | 1108 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 1m 14.544s | 2025-09-24 13:59:02.096 | 1109 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 1m 14.545s | 2025-09-24 13:59:02.097 | 1110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 90 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/90 {"round":90,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/90/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 1m 14.596s | 2025-09-24 13:59:02.148 | 1062 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 90 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/90 | |
| node1 | 1m 14.597s | 2025-09-24 13:59:02.149 | 1063 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 90 | |
| node0 | 1m 14.611s | 2025-09-24 13:59:02.163 | 1110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 90 | |
| node0 | 1m 14.614s | 2025-09-24 13:59:02.166 | 1111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 90 Timestamp: 2025-09-24T13:59:00.197361Z Next consensus number: 2278 Legacy running event hash: 39fba5fe6c00ebc7e9d386136d294558fd16322f7c1079a5e6a1820a610b4764a8b18733a4f527b9dacf46bd7951613e Legacy running event mnemonic: disorder-doctor-length-mango Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1872164731 Root hash: 93efd9b6393342b3066d10e6d6b8cf0a5bce421d439ba338f385094dec609503772beba9b0f6564e9bb370c3eaae688c (root) ConsistencyTestingToolState / lion-sing-chapter-filter 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 mention-season-wave-federal 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 7659219589626362389 /3 stool-enroll-stable-client 4 StringLeaf 89 /4 pull-inner-copper-invest | |||||||||
| node3 | 1m 14.615s | 2025-09-24 13:59:02.167 | 1122 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 90 | |
| node4 | 1m 14.615s | 2025-09-24 13:59:02.167 | 1102 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 90 | |
| node3 | 1m 14.618s | 2025-09-24 13:59:02.170 | 1123 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 90 Timestamp: 2025-09-24T13:59:00.197361Z Next consensus number: 2278 Legacy running event hash: 39fba5fe6c00ebc7e9d386136d294558fd16322f7c1079a5e6a1820a610b4764a8b18733a4f527b9dacf46bd7951613e Legacy running event mnemonic: disorder-doctor-length-mango Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1872164731 Root hash: 93efd9b6393342b3066d10e6d6b8cf0a5bce421d439ba338f385094dec609503772beba9b0f6564e9bb370c3eaae688c (root) ConsistencyTestingToolState / lion-sing-chapter-filter 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 mention-season-wave-federal 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 7659219589626362389 /3 stool-enroll-stable-client 4 StringLeaf 89 /4 pull-inner-copper-invest | |||||||||
| node4 | 1m 14.618s | 2025-09-24 13:59:02.170 | 1103 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 90 Timestamp: 2025-09-24T13:59:00.197361Z Next consensus number: 2278 Legacy running event hash: 39fba5fe6c00ebc7e9d386136d294558fd16322f7c1079a5e6a1820a610b4764a8b18733a4f527b9dacf46bd7951613e Legacy running event mnemonic: disorder-doctor-length-mango Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1872164731 Root hash: 93efd9b6393342b3066d10e6d6b8cf0a5bce421d439ba338f385094dec609503772beba9b0f6564e9bb370c3eaae688c (root) ConsistencyTestingToolState / lion-sing-chapter-filter 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 mention-season-wave-federal 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 7659219589626362389 /3 stool-enroll-stable-client 4 StringLeaf 89 /4 pull-inner-copper-invest | |||||||||
| node0 | 1m 14.622s | 2025-09-24 13:59:02.174 | 1112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T13+58+03.971035961Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 1m 14.622s | 2025-09-24 13:59:02.174 | 1113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 63 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T13+58+03.971035961Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 1m 14.623s | 2025-09-24 13:59:02.175 | 1114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 1m 14.624s | 2025-09-24 13:59:02.176 | 1115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 1m 14.625s | 2025-09-24 13:59:02.177 | 1116 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 90 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/90 {"round":90,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/90/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 1m 14.626s | 2025-09-24 13:59:02.178 | 1124 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T13+58+04.081642292Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 1m 14.626s | 2025-09-24 13:59:02.178 | 1125 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 63 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T13+58+04.081642292Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 1m 14.626s | 2025-09-24 13:59:02.178 | 1126 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 1m 14.627s | 2025-09-24 13:59:02.179 | 1104 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T13+58+04.141770069Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 1m 14.628s | 2025-09-24 13:59:02.180 | 1127 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 1m 14.628s | 2025-09-24 13:59:02.180 | 1105 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 63 File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T13+58+04.141770069Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 1m 14.628s | 2025-09-24 13:59:02.180 | 1106 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 1m 14.629s | 2025-09-24 13:59:02.181 | 1128 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 90 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/90 {"round":90,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/90/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 1m 14.630s | 2025-09-24 13:59:02.182 | 1107 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 1m 14.630s | 2025-09-24 13:59:02.182 | 1108 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 90 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/90 {"round":90,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/90/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 1m 14.679s | 2025-09-24 13:59:02.231 | 1096 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 90 | |
| node1 | 1m 14.682s | 2025-09-24 13:59:02.234 | 1097 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 90 Timestamp: 2025-09-24T13:59:00.197361Z Next consensus number: 2278 Legacy running event hash: 39fba5fe6c00ebc7e9d386136d294558fd16322f7c1079a5e6a1820a610b4764a8b18733a4f527b9dacf46bd7951613e Legacy running event mnemonic: disorder-doctor-length-mango Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1872164731 Root hash: 93efd9b6393342b3066d10e6d6b8cf0a5bce421d439ba338f385094dec609503772beba9b0f6564e9bb370c3eaae688c (root) ConsistencyTestingToolState / lion-sing-chapter-filter 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 mention-season-wave-federal 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 7659219589626362389 /3 stool-enroll-stable-client 4 StringLeaf 89 /4 pull-inner-copper-invest | |||||||||
| node1 | 1m 14.691s | 2025-09-24 13:59:02.243 | 1098 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T13+58+04.123063967Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 1m 14.691s | 2025-09-24 13:59:02.243 | 1099 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 63 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T13+58+04.123063967Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 1m 14.691s | 2025-09-24 13:59:02.243 | 1100 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 1m 14.693s | 2025-09-24 13:59:02.245 | 1101 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 1m 14.693s | 2025-09-24 13:59:02.245 | 1102 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 90 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/90 {"round":90,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/90/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
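Each "Finished writing state ... to disk" entry above carries a machine-readable `StateSavedToDiskPayload` JSON fragment (`round`, `freezeState`, `reason`, `directory`) ahead of the bracketed payload class name. As a minimal sketch (not platform code; the class and regular expressions below are hypothetical), those fields can be pulled out of such a line with the JDK alone:

```java
// Hypothetical helper: extract the StateSavedToDiskPayload JSON fragment from a
// "Finished writing state ..." log line and print its fields, using only java.util.regex.
// Field names (round, freezeState, reason, directory) are taken from the log lines above.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StateSavedPayloadSketch {

    // The payload is the {...} fragment that precedes the bracketed payload class name.
    private static final Pattern PAYLOAD = Pattern.compile("\\{.*?\\}");
    private static final Pattern FIELD =
            Pattern.compile("\"(round|freezeState|reason|directory)\":\"?([^\",}]+)\"?");

    public static void main(String[] args) {
        String line = "Finished writing state for round 90 to disk. Reason: PERIODIC_SNAPSHOT, "
                + "directory: ... {\"round\":90,\"freezeState\":false,\"reason\":\"PERIODIC_SNAPSHOT\","
                + "\"directory\":\"file:///opt/hgcapp/.../90/\"} "
                + "[com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]";

        Matcher payload = PAYLOAD.matcher(line);
        if (payload.find()) {
            Matcher field = FIELD.matcher(payload.group());
            while (field.find()) {
                System.out.println(field.group(1) + " = " + field.group(2));
            }
        }
    }
}
```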
| node2 | 2m 14.166s | 2025-09-24 14:00:01.718 | 2118 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 179 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 2m 14.323s | 2025-09-24 14:00:01.875 | 2112 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 179 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 2m 14.389s | 2025-09-24 14:00:01.941 | 2116 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 179 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 2m 14.436s | 2025-09-24 14:00:01.988 | 2136 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 179 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 2m 14.455s | 2025-09-24 14:00:02.007 | 2118 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 179 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 2m 14.643s | 2025-09-24 14:00:02.195 | 2139 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 179 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/179 | |
| node0 | 2m 14.644s | 2025-09-24 14:00:02.196 | 2140 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 179 | |
| node4 | 2m 14.698s | 2025-09-24 14:00:02.250 | 2115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 179 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/179 | |
| node4 | 2m 14.699s | 2025-09-24 14:00:02.251 | 2116 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 179 | |
| node0 | 2m 14.735s | 2025-09-24 14:00:02.287 | 2175 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 179 | |
| node0 | 2m 14.737s | 2025-09-24 14:00:02.289 | 2176 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 179 Timestamp: 2025-09-24T14:00:00.458113Z Next consensus number: 4713 Legacy running event hash: 76aae304ece1b3109811599af11e7c358e85acc5b06f08bd650dcf3d7aec1b6832c4854283c2454607560f60cd629982 Legacy running event mnemonic: stem-panel-glue-own Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 2055293995 Root hash: ae4b2b6ef622f8c11e25ffaa1ff65b0494382d417948be188a61bc81fe5f8dc653232a3e553c98a499eea207a908c37b (root) ConsistencyTestingToolState / cute-horse-design-tree 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 path-tent-grace-elevator 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 7335622976975747653 /3 panel-trend-zebra-danger 4 StringLeaf 178 /4 mirror-repeat-behave-void | |||||||||
| node0 | 2m 14.744s | 2025-09-24 14:00:02.296 | 2177 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T13+58+03.971035961Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 2m 14.744s | 2025-09-24 14:00:02.296 | 2178 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 152 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T13+58+03.971035961Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 2m 14.744s | 2025-09-24 14:00:02.296 | 2179 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 2m 14.748s | 2025-09-24 14:00:02.300 | 2180 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 2m 14.748s | 2025-09-24 14:00:02.300 | 2181 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 179 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/179 {"round":179,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/179/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 2m 14.794s | 2025-09-24 14:00:02.346 | 2147 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 179 | |
| node4 | 2m 14.797s | 2025-09-24 14:00:02.349 | 2148 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 179 Timestamp: 2025-09-24T14:00:00.458113Z Next consensus number: 4713 Legacy running event hash: 76aae304ece1b3109811599af11e7c358e85acc5b06f08bd650dcf3d7aec1b6832c4854283c2454607560f60cd629982 Legacy running event mnemonic: stem-panel-glue-own Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 2055293995 Root hash: ae4b2b6ef622f8c11e25ffaa1ff65b0494382d417948be188a61bc81fe5f8dc653232a3e553c98a499eea207a908c37b (root) ConsistencyTestingToolState / cute-horse-design-tree 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 path-tent-grace-elevator 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 7335622976975747653 /3 panel-trend-zebra-danger 4 StringLeaf 178 /4 mirror-repeat-behave-void | |||||||||
| node3 | 2m 14.798s | 2025-09-24 14:00:02.350 | 2119 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 179 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/179 | |
| node3 | 2m 14.799s | 2025-09-24 14:00:02.351 | 2120 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 179 | |
| node4 | 2m 14.804s | 2025-09-24 14:00:02.356 | 2149 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T13+58+04.141770069Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 2m 14.805s | 2025-09-24 14:00:02.357 | 2150 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 152 File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T13+58+04.141770069Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 2m 14.805s | 2025-09-24 14:00:02.357 | 2151 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 2m 14.808s | 2025-09-24 14:00:02.360 | 2152 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 2m 14.809s | 2025-09-24 14:00:02.361 | 2153 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 179 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/179 {"round":179,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/179/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 2m 14.869s | 2025-09-24 14:00:02.421 | 2121 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 179 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/179 | |
| node1 | 2m 14.869s | 2025-09-24 14:00:02.421 | 2122 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 179 | |
| node3 | 2m 14.894s | 2025-09-24 14:00:02.446 | 2151 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 179 | |
| node3 | 2m 14.896s | 2025-09-24 14:00:02.448 | 2152 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 179 Timestamp: 2025-09-24T14:00:00.458113Z Next consensus number: 4713 Legacy running event hash: 76aae304ece1b3109811599af11e7c358e85acc5b06f08bd650dcf3d7aec1b6832c4854283c2454607560f60cd629982 Legacy running event mnemonic: stem-panel-glue-own Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 2055293995 Root hash: ae4b2b6ef622f8c11e25ffaa1ff65b0494382d417948be188a61bc81fe5f8dc653232a3e553c98a499eea207a908c37b (root) ConsistencyTestingToolState / cute-horse-design-tree 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 path-tent-grace-elevator 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 7335622976975747653 /3 panel-trend-zebra-danger 4 StringLeaf 178 /4 mirror-repeat-behave-void | |||||||||
| node3 | 2m 14.903s | 2025-09-24 14:00:02.455 | 2153 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T13+58+04.081642292Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 2m 14.904s | 2025-09-24 14:00:02.456 | 2154 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 152 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T13+58+04.081642292Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 2m 14.904s | 2025-09-24 14:00:02.456 | 2155 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 2m 14.907s | 2025-09-24 14:00:02.459 | 2156 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 2m 14.908s | 2025-09-24 14:00:02.460 | 2157 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 179 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/179 {"round":179,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/179/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 2m 14.960s | 2025-09-24 14:00:02.512 | 2161 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 179 | |
| node1 | 2m 14.962s | 2025-09-24 14:00:02.514 | 2162 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 179 Timestamp: 2025-09-24T14:00:00.458113Z Next consensus number: 4713 Legacy running event hash: 76aae304ece1b3109811599af11e7c358e85acc5b06f08bd650dcf3d7aec1b6832c4854283c2454607560f60cd629982 Legacy running event mnemonic: stem-panel-glue-own Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 2055293995 Root hash: ae4b2b6ef622f8c11e25ffaa1ff65b0494382d417948be188a61bc81fe5f8dc653232a3e553c98a499eea207a908c37b (root) ConsistencyTestingToolState / cute-horse-design-tree 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 path-tent-grace-elevator 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 7335622976975747653 /3 panel-trend-zebra-danger 4 StringLeaf 178 /4 mirror-repeat-behave-void | |||||||||
| node1 | 2m 14.970s | 2025-09-24 14:00:02.522 | 2163 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T13+58+04.123063967Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 2m 14.970s | 2025-09-24 14:00:02.522 | 2164 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 152 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T13+58+04.123063967Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 2m 14.971s | 2025-09-24 14:00:02.523 | 2165 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 2m 14.974s | 2025-09-24 14:00:02.526 | 2166 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 2m 14.975s | 2025-09-24 14:00:02.527 | 2167 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 179 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/179 {"round":179,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/179/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 2m 15.027s | 2025-09-24 14:00:02.579 | 2131 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 179 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/179 | |
| node2 | 2m 15.028s | 2025-09-24 14:00:02.580 | 2132 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 179 | |
| node2 | 2m 15.127s | 2025-09-24 14:00:02.679 | 2171 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 179 | |
| node2 | 2m 15.130s | 2025-09-24 14:00:02.682 | 2172 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 179 Timestamp: 2025-09-24T14:00:00.458113Z Next consensus number: 4713 Legacy running event hash: 76aae304ece1b3109811599af11e7c358e85acc5b06f08bd650dcf3d7aec1b6832c4854283c2454607560f60cd629982 Legacy running event mnemonic: stem-panel-glue-own Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 2055293995 Root hash: ae4b2b6ef622f8c11e25ffaa1ff65b0494382d417948be188a61bc81fe5f8dc653232a3e553c98a499eea207a908c37b (root) ConsistencyTestingToolState / cute-horse-design-tree 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 path-tent-grace-elevator 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 7335622976975747653 /3 panel-trend-zebra-danger 4 StringLeaf 178 /4 mirror-repeat-behave-void | |||||||||
| node2 | 2m 15.139s | 2025-09-24 14:00:02.691 | 2173 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 2m 15.139s | 2025-09-24 14:00:02.691 | 2174 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 152 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 2m 15.139s | 2025-09-24 14:00:02.691 | 2175 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 2m 15.143s | 2025-09-24 14:00:02.695 | 2176 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 2m 15.144s | 2025-09-24 14:00:02.696 | 2177 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 179 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/179 {"round":179,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/179/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
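The preconsensus event (PCES) files copied alongside each snapshot follow a single naming pattern, e.g. `2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces`. The sketch below is a hypothetical decoder of those name components; reading `seq`/`minr`/`maxr`/`orgn` as sequence number, lower/upper bounds, and origin is an assumption inferred from the abbreviations, not something these logs state.

```java
// Hypothetical decoder for the PCES file names seen in the copy log lines above.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PcesFileNameSketch {

    private static final Pattern NAME = Pattern.compile(
            "(?<ts>.+Z)_seq(?<seq>\\d+)_minr(?<minr>\\d+)_maxr(?<maxr>\\d+)_orgn(?<orgn>\\d+)\\.pces");

    public static void main(String[] args) {
        String file = "2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces";
        Matcher m = NAME.matcher(file);
        if (m.matches()) {
            // ':' is not a legal file-name character on some platforms, hence the '+' separators.
            System.out.println("timestamp = " + m.group("ts").replace('+', ':'));
            System.out.println("seq  = " + m.group("seq"));   // sequence number of the file (assumed)
            System.out.println("minr = " + m.group("minr"));  // lower bound encoded in the name (assumed)
            System.out.println("maxr = " + m.group("maxr"));  // upper bound encoded in the name (assumed)
            System.out.println("orgn = " + m.group("orgn"));  // origin encoded in the name (assumed)
        }
    }
}
```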
| node0 | 3m 12.879s | 2025-09-24 14:01:00.431 | 3239 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 0 to 4>> | NetworkUtils: | Connection broken: 0 -> 4 | |
| java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.FilterInputStream.read(FilterInputStream.java:71) at org.hiero.base.io.streams.AugmentedDataInputStream.read(AugmentedDataInputStream.java:57) at com.swirlds.platform.network.communication.states.SentKeepalive.transition(SentKeepalive.java:44) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) | |||||||||
| node3 | 3m 12.879s | 2025-09-24 14:01:00.431 | 3230 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 3 to 4>> | NetworkUtils: | Connection broken: 3 -> 4 | |
| java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295) at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275) at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144) at com.swirlds.platform.heartbeats.HeartbeatPeerProtocol.initiateHeartbeat(HeartbeatPeerProtocol.java:112) at com.swirlds.platform.heartbeats.HeartbeatPeerProtocol.runProtocol(HeartbeatPeerProtocol.java:156) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) | |||||||||
| node1 | 3m 12.880s | 2025-09-24 14:01:00.432 | 3209 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 1 to 4>> | NetworkUtils: | Connection broken: 1 -> 4 | |
| java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T14:01:00.430478879Z at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:258) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T14:01:00.430478879Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:148) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:304) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.sendAndReceiveEvents(ShadowgraphSynchronizer.java:241) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:201) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:113) at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:254) ... 7 more Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 12 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$sendEventsTheyNeed$8(SyncUtils.java:234) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295) at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275) at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readEventsINeed$9(SyncUtils.java:278) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:146) ... 12 more | |||||||||
| node2 | 3m 12.880s | 2025-09-24 14:01:00.432 | 3238 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 2 to 4>> | NetworkUtils: | Connection broken: 2 -> 4 | |
| java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T14:01:00.430366693Z at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:258) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T14:01:00.430366693Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:148) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:304) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.sendAndReceiveEvents(ShadowgraphSynchronizer.java:241) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:201) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:113) at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:254) ... 7 more Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 12 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$sendEventsTheyNeed$8(SyncUtils.java:234) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295) at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275) at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readEventsINeed$9(SyncUtils.java:278) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:146) ... 12 more | |||||||||
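The four `Connection broken: X -> 4` warnings above all bottom out in `java.net.SocketException: Connection reset`, raised from a blocking read on the SSL connection to node4: in the negotiator's keepalive read, in the heartbeat protocol, or in the sync protocol's parallel event exchange. As a generic illustration only (not the platform's `NetworkUtils` or `Negotiator`), this is roughly where such an exception surfaces and how a caller might report it:

```java
// Generic illustration only: a blocking read on a dead TCP/SSL connection surfaces
// java.net.SocketException ("Connection reset"), which a caller can log as a broken
// connection before discarding the socket.
import java.io.IOException;
import java.net.Socket;
import java.net.SocketException;

public class BrokenConnectionSketch {

    // Reads one byte from the peer; on a reset it reports the broken link in the spirit of the
    // WARN entries above ("Connection broken: self -> peer") and rethrows.
    static int readByte(Socket socket, long selfId, long peerId) throws IOException {
        try {
            return socket.getInputStream().read(); // blocks until data arrives or the peer resets
        } catch (SocketException e) {
            System.err.printf("Connection broken: %d -> %d (%s)%n", selfId, peerId, e.getMessage());
            socket.close(); // the broken connection is discarded; the caller may re-establish it
            throw e;
        }
    }
}
```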
| node1 | 3m 14.235s | 2025-09-24 14:01:01.787 | 3225 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 275 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 3m 14.514s | 2025-09-24 14:01:02.066 | 3261 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 275 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 3m 14.605s | 2025-09-24 14:01:02.157 | 3263 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 275 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 3m 14.650s | 2025-09-24 14:01:02.202 | 3261 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 275 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 3m 14.718s | 2025-09-24 14:01:02.270 | 3264 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 275 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/275 | |
| node3 | 3m 14.719s | 2025-09-24 14:01:02.271 | 3265 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 275 | |
| node3 | 3m 14.815s | 2025-09-24 14:01:02.367 | 3300 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 275 | |
| node3 | 3m 14.818s | 2025-09-24 14:01:02.370 | 3301 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 275 Timestamp: 2025-09-24T14:01:00.620668173Z Next consensus number: 7236 Legacy running event hash: a16e970f01bfab574ec42ea02bb9a847aa921f0469109567d62e55ae5a8a437951bb627b45e3599dcc49f26dc5f32f0c Legacy running event mnemonic: spoil-bullet-dawn-news Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 862385813 Root hash: 1bc2c23382eeb49ee2314716c9455f3292bd9394b8adba7b4ba5c1a5d0daa5e100b40d6b0c06704ddc7eeeeeef4fda11 (root) ConsistencyTestingToolState / joke-rifle-pelican-hobby 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 pave-twenty-choice-danger 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 7382520480169518037 /3 leg-roast-erosion-tell 4 StringLeaf 274 /4 price-parrot-news-risk | |||||||||
| node3 | 3m 14.823s | 2025-09-24 14:01:02.375 | 3302 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T13+58+04.081642292Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 3m 14.823s | 2025-09-24 14:01:02.375 | 3303 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 247 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T13+58+04.081642292Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 3m 14.824s | 2025-09-24 14:01:02.376 | 3304 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 3m 14.829s | 2025-09-24 14:01:02.381 | 3305 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 3m 14.829s | 2025-09-24 14:01:02.381 | 3306 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 275 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/275 {"round":275,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/275/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 3m 15.013s | 2025-09-24 14:01:02.565 | 3266 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 275 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/275 | |
| node0 | 3m 15.014s | 2025-09-24 14:01:02.566 | 3267 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 275 | |
| node1 | 3m 15.045s | 2025-09-24 14:01:02.597 | 3238 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 275 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/275 | |
| node1 | 3m 15.046s | 2025-09-24 14:01:02.598 | 3239 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 275 | |
| node2 | 3m 15.085s | 2025-09-24 14:01:02.637 | 3264 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 275 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/275 | |
| node2 | 3m 15.086s | 2025-09-24 14:01:02.638 | 3265 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 275 | |
| node0 | 3m 15.111s | 2025-09-24 14:01:02.663 | 3302 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 275 | |
| node0 | 3m 15.113s | 2025-09-24 14:01:02.665 | 3303 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 275 Timestamp: 2025-09-24T14:01:00.620668173Z Next consensus number: 7236 Legacy running event hash: a16e970f01bfab574ec42ea02bb9a847aa921f0469109567d62e55ae5a8a437951bb627b45e3599dcc49f26dc5f32f0c Legacy running event mnemonic: spoil-bullet-dawn-news Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 862385813 Root hash: 1bc2c23382eeb49ee2314716c9455f3292bd9394b8adba7b4ba5c1a5d0daa5e100b40d6b0c06704ddc7eeeeeef4fda11 (root) ConsistencyTestingToolState / joke-rifle-pelican-hobby 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 pave-twenty-choice-danger 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 7382520480169518037 /3 leg-roast-erosion-tell 4 StringLeaf 274 /4 price-parrot-news-risk | |||||||||
| node0 | 3m 15.120s | 2025-09-24 14:01:02.672 | 3304 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T13+58+03.971035961Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 3m 15.120s | 2025-09-24 14:01:02.672 | 3305 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 247 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T13+58+03.971035961Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 3m 15.120s | 2025-09-24 14:01:02.672 | 3306 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 3m 15.125s | 2025-09-24 14:01:02.677 | 3307 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 3m 15.126s | 2025-09-24 14:01:02.678 | 3308 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 275 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/275 {"round":275,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/275/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 3m 15.137s | 2025-09-24 14:01:02.689 | 3270 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 275 | |
| node1 | 3m 15.139s | 2025-09-24 14:01:02.691 | 3271 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 275 Timestamp: 2025-09-24T14:01:00.620668173Z Next consensus number: 7236 Legacy running event hash: a16e970f01bfab574ec42ea02bb9a847aa921f0469109567d62e55ae5a8a437951bb627b45e3599dcc49f26dc5f32f0c Legacy running event mnemonic: spoil-bullet-dawn-news Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 862385813 Root hash: 1bc2c23382eeb49ee2314716c9455f3292bd9394b8adba7b4ba5c1a5d0daa5e100b40d6b0c06704ddc7eeeeeef4fda11 (root) ConsistencyTestingToolState / joke-rifle-pelican-hobby 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 pave-twenty-choice-danger 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 7382520480169518037 /3 leg-roast-erosion-tell 4 StringLeaf 274 /4 price-parrot-news-risk | |||||||||
| node1 | 3m 15.145s | 2025-09-24 14:01:02.697 | 3272 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T13+58+04.123063967Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 3m 15.145s | 2025-09-24 14:01:02.697 | 3273 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 247 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T13+58+04.123063967Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 3m 15.145s | 2025-09-24 14:01:02.697 | 3274 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 3m 15.151s | 2025-09-24 14:01:02.703 | 3275 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 3m 15.151s | 2025-09-24 14:01:02.703 | 3276 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 275 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/275 {"round":275,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/275/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 3m 15.181s | 2025-09-24 14:01:02.733 | 3300 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 275 | |
| node2 | 3m 15.183s | 2025-09-24 14:01:02.735 | 3301 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 275 Timestamp: 2025-09-24T14:01:00.620668173Z Next consensus number: 7236 Legacy running event hash: a16e970f01bfab574ec42ea02bb9a847aa921f0469109567d62e55ae5a8a437951bb627b45e3599dcc49f26dc5f32f0c Legacy running event mnemonic: spoil-bullet-dawn-news Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 862385813 Root hash: 1bc2c23382eeb49ee2314716c9455f3292bd9394b8adba7b4ba5c1a5d0daa5e100b40d6b0c06704ddc7eeeeeef4fda11 (root) ConsistencyTestingToolState / joke-rifle-pelican-hobby 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 pave-twenty-choice-danger 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 7382520480169518037 /3 leg-roast-erosion-tell 4 StringLeaf 274 /4 price-parrot-news-risk | |||||||||
| node2 | 3m 15.189s | 2025-09-24 14:01:02.741 | 3302 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 3m 15.189s | 2025-09-24 14:01:02.741 | 3303 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 247 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 3m 15.189s | 2025-09-24 14:01:02.741 | 3304 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 3m 15.194s | 2025-09-24 14:01:02.746 | 3305 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 3m 15.195s | 2025-09-24 14:01:02.747 | 3306 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 275 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/275 {"round":275,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/275/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 4m 14.100s | 2025-09-24 14:02:01.652 | 4393 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 369 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 4m 14.166s | 2025-09-24 14:02:01.718 | 4391 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 369 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 4m 14.369s | 2025-09-24 14:02:01.921 | 4361 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 369 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 4m 14.406s | 2025-09-24 14:02:01.958 | 4403 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 369 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 4m 14.555s | 2025-09-24 14:02:02.107 | 4406 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 369 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/369 | |
| node2 | 4m 14.556s | 2025-09-24 14:02:02.108 | 4407 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 369 | |
| node2 | 4m 14.642s | 2025-09-24 14:02:02.194 | 4442 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 369 | |
| node2 | 4m 14.645s | 2025-09-24 14:02:02.197 | 4443 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 369 Timestamp: 2025-09-24T14:02:00.301005208Z Next consensus number: 8777 Legacy running event hash: 7a1fb8a0983f1442195b830cc4a1519ad751eda697d42ddf34591cd2b39eb5e278c798cdb9bba5680d0697e567113d3d Legacy running event mnemonic: dumb-ugly-wealth-marriage Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 810912472 Root hash: 7f02db6016f150eb0ce9e1b017ed1f6a79536f64e3657f3ec3655f09066580b7ae736f99b815e3b3ef7f00220a724266 (root) ConsistencyTestingToolState / theory-neglect-taste-buddy 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 omit-empower-cool-unveil 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 6089408351749075165 /3 lumber-isolate-vivid-concert 4 StringLeaf 368 /4 rebel-promote-blame-test | |||||||||
| node3 | 4m 14.651s | 2025-09-24 14:02:02.203 | 4396 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 369 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/369 | |
| node2 | 4m 14.652s | 2025-09-24 14:02:02.204 | 4444 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 4m 14.652s | 2025-09-24 14:02:02.204 | 4445 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 342 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 4m 14.652s | 2025-09-24 14:02:02.204 | 4446 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 4m 14.652s | 2025-09-24 14:02:02.204 | 4397 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 369 | |
| node2 | 4m 14.658s | 2025-09-24 14:02:02.210 | 4447 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 4m 14.659s | 2025-09-24 14:02:02.211 | 4448 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 369 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/369 {"round":369,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/369/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 4m 14.721s | 2025-09-24 14:02:02.273 | 4404 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 369 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/369 | |
| node1 | 4m 14.721s | 2025-09-24 14:02:02.273 | 4364 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 369 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/369 | |
| node0 | 4m 14.722s | 2025-09-24 14:02:02.274 | 4405 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 369 | |
| node1 | 4m 14.722s | 2025-09-24 14:02:02.274 | 4365 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 369 | |
| node3 | 4m 14.743s | 2025-09-24 14:02:02.295 | 4428 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 369 | |
| node3 | 4m 14.745s | 2025-09-24 14:02:02.297 | 4429 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 369 Timestamp: 2025-09-24T14:02:00.301005208Z Next consensus number: 8777 Legacy running event hash: 7a1fb8a0983f1442195b830cc4a1519ad751eda697d42ddf34591cd2b39eb5e278c798cdb9bba5680d0697e567113d3d Legacy running event mnemonic: dumb-ugly-wealth-marriage Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 810912472 Root hash: 7f02db6016f150eb0ce9e1b017ed1f6a79536f64e3657f3ec3655f09066580b7ae736f99b815e3b3ef7f00220a724266 (root) ConsistencyTestingToolState / theory-neglect-taste-buddy 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 omit-empower-cool-unveil 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 6089408351749075165 /3 lumber-isolate-vivid-concert 4 StringLeaf 368 /4 rebel-promote-blame-test | |||||||||
| node3 | 4m 14.754s | 2025-09-24 14:02:02.306 | 4430 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T13+58+04.081642292Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 4m 14.754s | 2025-09-24 14:02:02.306 | 4431 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 342 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T13+58+04.081642292Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 4m 14.754s | 2025-09-24 14:02:02.306 | 4432 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 4m 14.762s | 2025-09-24 14:02:02.314 | 4433 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 4m 14.762s | 2025-09-24 14:02:02.314 | 4434 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 369 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/369 {"round":369,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/369/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 4m 14.811s | 2025-09-24 14:02:02.363 | 4408 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 369 | |
| node1 | 4m 14.813s | 2025-09-24 14:02:02.365 | 4409 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 369 Timestamp: 2025-09-24T14:02:00.301005208Z Next consensus number: 8777 Legacy running event hash: 7a1fb8a0983f1442195b830cc4a1519ad751eda697d42ddf34591cd2b39eb5e278c798cdb9bba5680d0697e567113d3d Legacy running event mnemonic: dumb-ugly-wealth-marriage Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 810912472 Root hash: 7f02db6016f150eb0ce9e1b017ed1f6a79536f64e3657f3ec3655f09066580b7ae736f99b815e3b3ef7f00220a724266 (root) ConsistencyTestingToolState / theory-neglect-taste-buddy 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 omit-empower-cool-unveil 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 6089408351749075165 /3 lumber-isolate-vivid-concert 4 StringLeaf 368 /4 rebel-promote-blame-test | |||||||||
| node1 | 4m 14.819s | 2025-09-24 14:02:02.371 | 4410 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T13+58+04.123063967Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 4m 14.820s | 2025-09-24 14:02:02.372 | 4436 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 369 | |
| node1 | 4m 14.820s | 2025-09-24 14:02:02.372 | 4411 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 342 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T13+58+04.123063967Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 4m 14.820s | 2025-09-24 14:02:02.372 | 4412 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 4m 14.824s | 2025-09-24 14:02:02.376 | 4437 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 369 Timestamp: 2025-09-24T14:02:00.301005208Z Next consensus number: 8777 Legacy running event hash: 7a1fb8a0983f1442195b830cc4a1519ad751eda697d42ddf34591cd2b39eb5e278c798cdb9bba5680d0697e567113d3d Legacy running event mnemonic: dumb-ugly-wealth-marriage Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 810912472 Root hash: 7f02db6016f150eb0ce9e1b017ed1f6a79536f64e3657f3ec3655f09066580b7ae736f99b815e3b3ef7f00220a724266 (root) ConsistencyTestingToolState / theory-neglect-taste-buddy 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 omit-empower-cool-unveil 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 6089408351749075165 /3 lumber-isolate-vivid-concert 4 StringLeaf 368 /4 rebel-promote-blame-test | |||||||||
| node1 | 4m 14.826s | 2025-09-24 14:02:02.378 | 4413 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 4m 14.827s | 2025-09-24 14:02:02.379 | 4414 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 369 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/369 {"round":369,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/369/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 4m 14.833s | 2025-09-24 14:02:02.385 | 4438 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T13+58+03.971035961Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 4m 14.833s | 2025-09-24 14:02:02.385 | 4439 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 342 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T13+58+03.971035961Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 4m 14.833s | 2025-09-24 14:02:02.385 | 4440 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 4m 14.840s | 2025-09-24 14:02:02.392 | 4441 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 4m 14.840s | 2025-09-24 14:02:02.392 | 4442 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 369 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/369 {"round":369,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/369/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 5m 13.938s | 2025-09-24 14:03:01.490 | 5417 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 459 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 5m 14.016s | 2025-09-24 14:03:01.568 | 5451 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 459 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 5m 14.080s | 2025-09-24 14:03:01.632 | 5413 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 459 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 5m 14.086s | 2025-09-24 14:03:01.638 | 5443 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 459 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 5m 14.361s | 2025-09-24 14:03:01.913 | 5446 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 459 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/459 | |
| node0 | 5m 14.362s | 2025-09-24 14:03:01.914 | 5447 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 459 | |
| node2 | 5m 14.427s | 2025-09-24 14:03:01.979 | 5454 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 459 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/459 | |
| node2 | 5m 14.428s | 2025-09-24 14:03:01.980 | 5455 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 459 | |
| node3 | 5m 14.432s | 2025-09-24 14:03:01.984 | 5430 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 459 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/459 | |
| node3 | 5m 14.433s | 2025-09-24 14:03:01.985 | 5431 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 459 | |
| node0 | 5m 14.449s | 2025-09-24 14:03:02.001 | 5478 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 459 | |
| node0 | 5m 14.451s | 2025-09-24 14:03:02.003 | 5479 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 459 Timestamp: 2025-09-24T14:03:00.154829Z Next consensus number: 10279 Legacy running event hash: 5c28f8904c1ebae4af3bf86699b1bf1ec8b7b704e0fbe528dc990c2a7c20fdf81decaf250ec0ba29255d9c07e766087a Legacy running event mnemonic: lunar-unlock-long-phone Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -152018645 Root hash: 7bcc12c8fb879b50bb057b244325cf3e6f67bfe9fc443d4b690767e41bd080bf193a9c457f46e7e4b8c43dd874c9b500 (root) ConsistencyTestingToolState / kangaroo-flip-monkey-noodle 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 brown-gallery-fever-stereo 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -6972449285584138836 /3 drill-subway-clip-embody 4 StringLeaf 458 /4 fringe-network-smile-cherry | |||||||||
| node0 | 5m 14.459s | 2025-09-24 14:03:02.011 | 5480 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T13+58+03.971035961Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 5m 14.459s | 2025-09-24 14:03:02.011 | 5481 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 432 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T13+58+03.971035961Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 5m 14.459s | 2025-09-24 14:03:02.011 | 5482 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 5m 14.467s | 2025-09-24 14:03:02.019 | 5483 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 5m 14.467s | 2025-09-24 14:03:02.019 | 5484 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 459 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/459 {"round":459,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/459/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 5m 14.469s | 2025-09-24 14:03:02.021 | 5485 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2 | |
| node1 | 5m 14.498s | 2025-09-24 14:03:02.050 | 5416 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 459 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/459 | |
| node1 | 5m 14.498s | 2025-09-24 14:03:02.050 | 5417 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 459 | |
| node2 | 5m 14.516s | 2025-09-24 14:03:02.068 | 5486 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 459 | |
| node2 | 5m 14.518s | 2025-09-24 14:03:02.070 | 5487 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 459 Timestamp: 2025-09-24T14:03:00.154829Z Next consensus number: 10279 Legacy running event hash: 5c28f8904c1ebae4af3bf86699b1bf1ec8b7b704e0fbe528dc990c2a7c20fdf81decaf250ec0ba29255d9c07e766087a Legacy running event mnemonic: lunar-unlock-long-phone Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -152018645 Root hash: 7bcc12c8fb879b50bb057b244325cf3e6f67bfe9fc443d4b690767e41bd080bf193a9c457f46e7e4b8c43dd874c9b500 (root) ConsistencyTestingToolState / kangaroo-flip-monkey-noodle 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 brown-gallery-fever-stereo 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -6972449285584138836 /3 drill-subway-clip-embody 4 StringLeaf 458 /4 fringe-network-smile-cherry | |||||||||
| node2 | 5m 14.524s | 2025-09-24 14:03:02.076 | 5488 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 5m 14.524s | 2025-09-24 14:03:02.076 | 5489 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 432 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 5m 14.524s | 2025-09-24 14:03:02.076 | 5490 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 5m 14.529s | 2025-09-24 14:03:02.081 | 5462 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 459 | |
| node3 | 5m 14.531s | 2025-09-24 14:03:02.083 | 5463 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 459 Timestamp: 2025-09-24T14:03:00.154829Z Next consensus number: 10279 Legacy running event hash: 5c28f8904c1ebae4af3bf86699b1bf1ec8b7b704e0fbe528dc990c2a7c20fdf81decaf250ec0ba29255d9c07e766087a Legacy running event mnemonic: lunar-unlock-long-phone Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -152018645 Root hash: 7bcc12c8fb879b50bb057b244325cf3e6f67bfe9fc443d4b690767e41bd080bf193a9c457f46e7e4b8c43dd874c9b500 (root) ConsistencyTestingToolState / kangaroo-flip-monkey-noodle 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 brown-gallery-fever-stereo 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -6972449285584138836 /3 drill-subway-clip-embody 4 StringLeaf 458 /4 fringe-network-smile-cherry | |||||||||
| node2 | 5m 14.532s | 2025-09-24 14:03:02.084 | 5491 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 5m 14.533s | 2025-09-24 14:03:02.085 | 5492 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 459 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/459 {"round":459,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/459/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 5m 14.534s | 2025-09-24 14:03:02.086 | 5493 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2 | |
| node3 | 5m 14.538s | 2025-09-24 14:03:02.090 | 5464 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T13+58+04.081642292Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 5m 14.538s | 2025-09-24 14:03:02.090 | 5465 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 432 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T13+58+04.081642292Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 5m 14.539s | 2025-09-24 14:03:02.091 | 5466 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 5m 14.546s | 2025-09-24 14:03:02.098 | 5467 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 5m 14.547s | 2025-09-24 14:03:02.099 | 5468 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 459 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/459 {"round":459,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/459/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 5m 14.549s | 2025-09-24 14:03:02.101 | 5469 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2 | |
| node1 | 5m 14.581s | 2025-09-24 14:03:02.133 | 5452 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 459 | |
| node1 | 5m 14.583s | 2025-09-24 14:03:02.135 | 5453 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 459 Timestamp: 2025-09-24T14:03:00.154829Z Next consensus number: 10279 Legacy running event hash: 5c28f8904c1ebae4af3bf86699b1bf1ec8b7b704e0fbe528dc990c2a7c20fdf81decaf250ec0ba29255d9c07e766087a Legacy running event mnemonic: lunar-unlock-long-phone Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -152018645 Root hash: 7bcc12c8fb879b50bb057b244325cf3e6f67bfe9fc443d4b690767e41bd080bf193a9c457f46e7e4b8c43dd874c9b500 (root) ConsistencyTestingToolState / kangaroo-flip-monkey-noodle 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 brown-gallery-fever-stereo 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -6972449285584138836 /3 drill-subway-clip-embody 4 StringLeaf 458 /4 fringe-network-smile-cherry | |||||||||
| node1 | 5m 14.588s | 2025-09-24 14:03:02.140 | 5454 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T13+58+04.123063967Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 5m 14.588s | 2025-09-24 14:03:02.140 | 5455 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 432 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T13+58+04.123063967Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 5m 14.588s | 2025-09-24 14:03:02.140 | 5456 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 5m 14.596s | 2025-09-24 14:03:02.148 | 5457 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 5m 14.596s | 2025-09-24 14:03:02.148 | 5458 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 459 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/459 {"round":459,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/459/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 5m 14.598s | 2025-09-24 14:03:02.150 | 5459 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2 | |
| node4 | 5m 54.913s | 2025-09-24 14:03:42.465 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node4 | 5m 55.006s | 2025-09-24 14:03:42.558 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node4 | 5m 55.023s | 2025-09-24 14:03:42.575 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 5m 55.145s | 2025-09-24 14:03:42.697 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [4] are set to run locally | |
| node4 | 5m 55.153s | 2025-09-24 14:03:42.705 | 5 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | Registering ConsistencyTestingToolState with ConstructableRegistry | |
| node4 | 5m 55.166s | 2025-09-24 14:03:42.718 | 6 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 5m 55.610s | 2025-09-24 14:03:43.162 | 9 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | ConsistencyTestingToolState is registered with ConstructableRegistry | |
| node4 | 5m 55.611s | 2025-09-24 14:03:43.163 | 10 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node4 | 5m 56.603s | 2025-09-24 14:03:44.155 | 11 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 992ms | |
| node4 | 5m 56.613s | 2025-09-24 14:03:44.165 | 12 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node4 | 5m 56.619s | 2025-09-24 14:03:44.171 | 13 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 5m 56.666s | 2025-09-24 14:03:44.218 | 14 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node4 | 5m 56.731s | 2025-09-24 14:03:44.283 | 15 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node4 | 5m 56.732s | 2025-09-24 14:03:44.284 | 16 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node4 | 5m 58.823s | 2025-09-24 14:03:46.375 | 17 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node4 | 5m 58.906s | 2025-09-24 14:03:46.458 | 20 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5m 58.913s | 2025-09-24 14:03:46.465 | 21 | INFO | STARTUP | <main> | StartupStateUtils: | The following saved states were found on disk: | |
| - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/179/SignedState.swh - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/90/SignedState.swh - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2/SignedState.swh | |||||||||
| node4 | 5m 58.913s | 2025-09-24 14:03:46.465 | 22 | INFO | STARTUP | <main> | StartupStateUtils: | Loading latest state from disk. | |
| node4 | 5m 58.914s | 2025-09-24 14:03:46.466 | 23 | INFO | STARTUP | <main> | StartupStateUtils: | Loading signed state from disk: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/179/SignedState.swh | |
| node4 | 5m 58.918s | 2025-09-24 14:03:46.470 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 5m 58.922s | 2025-09-24 14:03:46.474 | 25 | INFO | STATE_TO_DISK | <main> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp | |
| node4 | 5m 59.055s | 2025-09-24 14:03:46.607 | 36 | INFO | STARTUP | <main> | StartupStateUtils: | Loaded state's hash is the same as when it was saved. | |
| node4 | 5m 59.058s | 2025-09-24 14:03:46.610 | 37 | INFO | STARTUP | <main> | StartupStateUtils: | Platform has loaded a saved state {"round":179,"consensusTimestamp":"2025-09-24T14:00:00.458113Z"} [com.swirlds.logging.legacy.payload.SavedStateLoadedPayload] | |
| node4 | 5m 59.061s | 2025-09-24 14:03:46.613 | 40 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5m 59.062s | 2025-09-24 14:03:46.614 | 43 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node4 | 5m 59.065s | 2025-09-24 14:03:46.617 | 44 | INFO | STARTUP | <main> | AddressBookInitializer: | Using the loaded state's address book and weight values. | |
| node4 | 5m 59.071s | 2025-09-24 14:03:46.623 | 45 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5m 59.073s | 2025-09-24 14:03:46.625 | 46 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 6.002m | 2025-09-24 14:03:47.654 | 47 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26174033] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=226440, randomLong=-605507069321845794, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=7860, randomLong=-1312078530965761173, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1098969, data=35, exception=null] OS Health Check Report - Complete (took 1017 ms) | |||||||||
| node4 | 6.002m | 2025-09-24 14:03:47.680 | 48 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node4 | 6.004m | 2025-09-24 14:03:47.768 | 49 | INFO | STARTUP | <main> | PcesUtilities: | Span compaction completed for data/saved/preconsensus-events/4/2025/09/24/2025-09-24T13+58+04.141770069Z_seq0_minr1_maxr501_orgn0.pces, new upper bound is 273 | |
| node4 | 6.004m | 2025-09-24 14:03:47.771 | 50 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node4 | 6.004m | 2025-09-24 14:03:47.776 | 51 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node4 | 6.005m | 2025-09-24 14:03:47.845 | 52 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IjqFwA==", "port": 30124 }, { "ipAddressV4": "CoAAJQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IojrZw==", "port": 30125 }, { "ipAddressV4": "CoAAIQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "kpQ01Q==", "port": 30126 }, { "ipAddressV4": "CoAAIw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "InrW4g==", "port": 30127 }, { "ipAddressV4": "CoAAJg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IimGKA==", "port": 30128 }, { "ipAddressV4": "CoAAGQ==", "port": 30128 }] }] } | |||||||||
| node4 | 6.005m | 2025-09-24 14:03:47.864 | 53 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | State initialized with state long 7335622976975747653. | |
| node4 | 6.005m | 2025-09-24 14:03:47.864 | 54 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | State initialized with 178 rounds handled. | |
| node4 | 6.005m | 2025-09-24 14:03:47.864 | 55 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv | |
| node4 | 6.005m | 2025-09-24 14:03:47.865 | 56 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Log file found. Parsing previous history | |
| node4 | 6m 1.135s | 2025-09-24 14:03:48.687 | 57 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 179 Timestamp: 2025-09-24T14:00:00.458113Z Next consensus number: 4713 Legacy running event hash: 76aae304ece1b3109811599af11e7c358e85acc5b06f08bd650dcf3d7aec1b6832c4854283c2454607560f60cd629982 Legacy running event mnemonic: stem-panel-glue-own Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 2055293995 Root hash: ae4b2b6ef622f8c11e25ffaa1ff65b0494382d417948be188a61bc81fe5f8dc653232a3e553c98a499eea207a908c37b (root) ConsistencyTestingToolState / cute-horse-design-tree 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 path-tent-grace-elevator 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf 7335622976975747653 /3 panel-trend-zebra-danger 4 StringLeaf 178 /4 mirror-repeat-behave-void | |||||||||
| node4 | 6m 1.423s | 2025-09-24 14:03:48.975 | 59 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 76aae304ece1b3109811599af11e7c358e85acc5b06f08bd650dcf3d7aec1b6832c4854283c2454607560f60cd629982 | |
| node4 | 6m 1.438s | 2025-09-24 14:03:48.990 | 60 | INFO | STARTUP | <platformForkJoinThread-4> | Shadowgraph: | Shadowgraph starting from expiration threshold 152 | |
| node4 | 6m 1.449s | 2025-09-24 14:03:49.001 | 62 | INFO | STARTUP | <<start-node-4>> | ConsistencyTestingToolMain: | init called in Main for node 4. | |
| node4 | 6m 1.450s | 2025-09-24 14:03:49.002 | 63 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | Starting platform 4 | |
| node4 | 6m 1.452s | 2025-09-24 14:03:49.004 | 64 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node4 | 6m 1.456s | 2025-09-24 14:03:49.008 | 65 | INFO | STARTUP | <<start-node-4>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node4 | 6m 1.457s | 2025-09-24 14:03:49.009 | 66 | INFO | STARTUP | <<start-node-4>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node4 | 6m 1.458s | 2025-09-24 14:03:49.010 | 67 | INFO | STARTUP | <<start-node-4>> | InputWireChecks: | All input wires have been bound. | |
| node4 | 6m 1.461s | 2025-09-24 14:03:49.013 | 68 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | replaying preconsensus event stream starting at 152 | |
| node4 | 6m 1.465s | 2025-09-24 14:03:49.017 | 69 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | DefaultStatusStateMachine: | Platform spent 231.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node4 | 6m 1.670s | 2025-09-24 14:03:49.222 | 70 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:4 H:e199a832848e BR:177), num remaining: 4 | |
| node4 | 6m 1.672s | 2025-09-24 14:03:49.224 | 71 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:2 H:b3d878497f83 BR:177), num remaining: 3 | |
| node4 | 6m 1.673s | 2025-09-24 14:03:49.225 | 72 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:0 H:aa52587579e3 BR:177), num remaining: 2 | |
| node4 | 6m 1.674s | 2025-09-24 14:03:49.226 | 73 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:3 H:7e1741fc9da4 BR:177), num remaining: 1 | |
| node4 | 6m 1.675s | 2025-09-24 14:03:49.227 | 74 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:1 H:3c12541fde25 BR:177), num remaining: 0 | |
| node4 | 6m 2.174s | 2025-09-24 14:03:49.726 | 776 | INFO | STARTUP | <<start-node-4>> | PcesReplayer: | Replayed 3,232 preconsensus events with max birth round 273. These events contained 8,356 transactions. 93 rounds reached consensus spanning 57.9 seconds of consensus time. The latest round to reach consensus is round 272. Replay took 712.0 milliseconds. | |
| node4 | 6m 2.176s | 2025-09-24 14:03:49.728 | 777 | INFO | STARTUP | <<app: appMain 4>> | ConsistencyTestingToolMain: | run called in Main. | |
| node4 | 6m 2.178s | 2025-09-24 14:03:49.730 | 778 | INFO | PLATFORM_STATUS | <platformForkJoinThread-7> | DefaultStatusStateMachine: | Platform spent 709.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node4 | 6m 3.047s | 2025-09-24 14:03:50.599 | 925 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | DefaultStatusStateMachine: | Platform spent 868.0 ms in OBSERVING. Now in BEHIND | |
| node4 | 6m 3.048s | 2025-09-24 14:03:50.600 | 926 | INFO | RECONNECT | <platformForkJoinThread-8> | ReconnectController: | Starting ReconnectController | |
| node4 | 6m 3.049s | 2025-09-24 14:03:50.601 | 927 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectPlatformHelperImpl: | Preparing for reconnect, stopping gossip | |
| node4 | 6m 3.049s | 2025-09-24 14:03:50.601 | 928 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectPlatformHelperImpl: | Preparing for reconnect, start clearing queues | |
| node4 | 6m 3.051s | 2025-09-24 14:03:50.603 | 929 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectPlatformHelperImpl: | Queues have been cleared | |
| node4 | 6m 3.052s | 2025-09-24 14:03:50.604 | 930 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectController: | waiting for reconnect connection | |
| node4 | 6m 3.052s | 2025-09-24 14:03:50.604 | 931 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectController: | acquired reconnect connection | |
| node1 | 6m 3.285s | 2025-09-24 14:03:50.837 | 6380 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | ReconnectTeacher: | Starting reconnect in the role of the sender {"receiving":false,"nodeId":1,"otherNodeId":4,"round":537} [com.swirlds.logging.legacy.payload.ReconnectStartPayload] | |
| node1 | 6m 3.286s | 2025-09-24 14:03:50.838 | 6381 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | ReconnectTeacher: | The following state will be sent to the learner: | |
| Round: 537 Timestamp: 2025-09-24T14:03:48.424126Z Next consensus number: 11514 Legacy running event hash: b3859ee9149d2b6f258078aeeaff0e44e179262e5f455e15c695e53b3f2a414510d552c6d3521454aad5ef170ccce246 Legacy running event mnemonic: month-carry-neck-hospital Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1197717645 Root hash: 20dd888e4b7f686a3c154717a24a93db0a75b51361cc85e39bb9e2253d5036075b27736f360aa5e3b06b580722f0a108 (root) ConsistencyTestingToolState / brick-unusual-husband-ask 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 toward-gold-plastic-clown 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -2326559670491910167 /3 chalk-mirror-history-once 4 StringLeaf 536 /4 number-roast-enable-rose | |||||||||
| node1 | 6m 3.287s | 2025-09-24 14:03:50.839 | 6382 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | ReconnectTeacher: | Sending signatures from nodes 1, 2, 3 (signing weight = 37500000000/50000000000) for state hash 20dd888e4b7f686a3c154717a24a93db0a75b51361cc85e39bb9e2253d5036075b27736f360aa5e3b06b580722f0a108 | |
| node1 | 6m 3.287s | 2025-09-24 14:03:50.839 | 6383 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | ReconnectTeacher: | Starting synchronization in the role of the sender. | |
| node1 | 6m 3.292s | 2025-09-24 14:03:50.844 | 6384 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | TeachingSynchronizer: | sending tree rooted at com.swirlds.demo.consistency.ConsistencyTestingToolState with route [] | |
| node1 | 6m 3.302s | 2025-09-24 14:03:50.854 | 6385 | INFO | RECONNECT | <<work group teaching-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@4e7d3bca start run() | |
| node4 | 6m 3.351s | 2025-09-24 14:03:50.903 | 932 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectSyncHelper: | Starting reconnect in role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":1,"round":271} [com.swirlds.logging.legacy.payload.ReconnectStartPayload] | |
| node4 | 6m 3.353s | 2025-09-24 14:03:50.905 | 933 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectLearner: | Receiving signed state signatures | |
| node4 | 6m 3.357s | 2025-09-24 14:03:50.909 | 934 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectLearner: | Received signatures from nodes 1, 2, 3 | |
| node4 | 6m 3.360s | 2025-09-24 14:03:50.912 | 935 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | learner calls receiveTree() | |
| node4 | 6m 3.361s | 2025-09-24 14:03:50.913 | 936 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | synchronizing tree | |
| node4 | 6m 3.361s | 2025-09-24 14:03:50.913 | 937 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | receiving tree rooted at com.swirlds.demo.consistency.ConsistencyTestingToolState with route [] | |
| node4 | 6m 3.367s | 2025-09-24 14:03:50.919 | 938 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@5a315df6 start run() | |
| node4 | 6m 3.374s | 2025-09-24 14:03:50.926 | 939 | INFO | STARTUP | <<work group learning-synchronizer: async-input-stream #0>> | ConsistencyTestingToolState: | New State Constructed. | |
| node1 | 6m 3.456s | 2025-09-24 14:03:51.008 | 6399 | INFO | RECONNECT | <<work group teaching-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@4e7d3bca finish run() | |
| node1 | 6m 3.457s | 2025-09-24 14:03:51.009 | 6400 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | TeachingSynchronizer: | finished sending tree | |
| node1 | 6m 3.457s | 2025-09-24 14:03:51.009 | 6401 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | TeachingSynchronizer: | sending tree rooted at com.swirlds.virtualmap.VirtualMap with route [2] | |
| node1 | 6m 3.458s | 2025-09-24 14:03:51.010 | 6402 | INFO | RECONNECT | <<work group teaching-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@56e91b46 start run() | |
| node4 | 6m 3.578s | 2025-09-24 14:03:51.130 | 963 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushTask: | learner thread finished the learning loop for the current subtree | |
| node4 | 6m 3.579s | 2025-09-24 14:03:51.131 | 964 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushTask: | learner thread closed input, output, and view for the current subtree | |
| node4 | 6m 3.580s | 2025-09-24 14:03:51.132 | 965 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@5a315df6 finish run() | |
| node4 | 6m 3.581s | 2025-09-24 14:03:51.133 | 966 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | received tree rooted at com.swirlds.demo.consistency.ConsistencyTestingToolState with route [] | |
| node4 | 6m 3.582s | 2025-09-24 14:03:51.134 | 967 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | receiving tree rooted at com.swirlds.virtualmap.VirtualMap with route [2] | |
| node4 | 6m 3.586s | 2025-09-24 14:03:51.138 | 968 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@f0af8a1 start run() | |
| node4 | 6m 3.643s | 2025-09-24 14:03:51.195 | 969 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | ReconnectNodeRemover: | setPathInformation(): firstLeafPath: 1 -> 1, lastLeafPath: 1 -> 1 | |
| node4 | 6m 3.644s | 2025-09-24 14:03:51.196 | 970 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | ReconnectNodeRemover: | setPathInformation(): done | |
| node4 | 6m 3.647s | 2025-09-24 14:03:51.199 | 971 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushTask: | learner thread finished the learning loop for the current subtree | |
| node4 | 6m 3.647s | 2025-09-24 14:03:51.199 | 972 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushVirtualTreeView: | call nodeRemover.allNodesReceived() | |
| node4 | 6m 3.648s | 2025-09-24 14:03:51.200 | 973 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | ReconnectNodeRemover: | allNodesReceived() | |
| node4 | 6m 3.648s | 2025-09-24 14:03:51.200 | 974 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | ReconnectNodeRemover: | allNodesReceived(): done | |
| node4 | 6m 3.649s | 2025-09-24 14:03:51.201 | 975 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushVirtualTreeView: | call root.endLearnerReconnect() | |
| node4 | 6m 3.649s | 2025-09-24 14:03:51.201 | 976 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | call reconnectIterator.close() | |
| node4 | 6m 3.649s | 2025-09-24 14:03:51.201 | 977 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | call setHashPrivate() | |
| node1 | 6m 3.716s | 2025-09-24 14:03:51.268 | 6409 | INFO | RECONNECT | <<work group teaching-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@56e91b46 finish run() | |
| node1 | 6m 3.717s | 2025-09-24 14:03:51.269 | 6410 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | TeachingSynchronizer: | finished sending tree | |
| node1 | 6m 3.720s | 2025-09-24 14:03:51.272 | 6413 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | ReconnectTeacher: | Finished synchronization in the role of the sender. | |
| node4 | 6m 3.813s | 2025-09-24 14:03:51.365 | 987 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | call postInit() | |
| node4 | 6m 3.814s | 2025-09-24 14:03:51.366 | 989 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | endLearnerReconnect() complete | |
| node4 | 6m 3.815s | 2025-09-24 14:03:51.367 | 990 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushVirtualTreeView: | close() complete | |
| node4 | 6m 3.815s | 2025-09-24 14:03:51.367 | 991 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushTask: | learner thread closed input, output, and view for the current subtree | |
| node4 | 6m 3.815s | 2025-09-24 14:03:51.367 | 992 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@f0af8a1 finish run() | |
| node4 | 6m 3.816s | 2025-09-24 14:03:51.368 | 993 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | received tree rooted at com.swirlds.virtualmap.VirtualMap with route [2] | |
| node4 | 6m 3.816s | 2025-09-24 14:03:51.368 | 994 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | synchronization complete | |
| node4 | 6m 3.817s | 2025-09-24 14:03:51.369 | 995 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | learner calls initialize() | |
| node4 | 6m 3.817s | 2025-09-24 14:03:51.369 | 996 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | initializing tree | |
| node4 | 6m 3.818s | 2025-09-24 14:03:51.370 | 997 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | initialization complete | |
| node4 | 6m 3.819s | 2025-09-24 14:03:51.371 | 998 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | learner calls hash() | |
| node4 | 6m 3.819s | 2025-09-24 14:03:51.371 | 999 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | hashing tree | |
| node4 | 6m 3.823s | 2025-09-24 14:03:51.375 | 1000 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | hashing complete | |
| node4 | 6m 3.824s | 2025-09-24 14:03:51.376 | 1001 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | learner calls logStatistics() | |
| node4 | 6m 3.832s | 2025-09-24 14:03:51.384 | 1002 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | Finished synchronization {"timeInSeconds":0.455,"hashTimeInSeconds":0.004,"initializationTimeInSeconds":0.0,"totalNodes":12,"leafNodes":7,"redundantLeafNodes":4,"internalNodes":5,"redundantInternalNodes":2} [com.swirlds.logging.legacy.payload.SynchronizationCompletePayload] | |
| node4 | 6m 3.833s | 2025-09-24 14:03:51.385 | 1003 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | ReconnectMapMetrics: transfersFromTeacher=12; transfersFromLearner=10; internalHashes=0; internalCleanHashes=0; internalData=0; internalCleanData=0; leafHashes=3; leafCleanHashes=3; leafData=7; leafCleanData=4 | |
| node4 | 6m 3.833s | 2025-09-24 14:03:51.385 | 1004 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | learner is done synchronizing | |
| node4 | 6m 3.838s | 2025-09-24 14:03:51.390 | 1005 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectLearner: | Reconnect data usage report {"dataMegabytes":0.0060558319091796875} [com.swirlds.logging.legacy.payload.ReconnectDataUsagePayload] | |
| node4 | 6m 3.844s | 2025-09-24 14:03:51.396 | 1006 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectSyncHelper: | Finished reconnect in the role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":1,"round":537,"success":false} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload] | |
| node4 | 6m 3.846s | 2025-09-24 14:03:51.398 | 1007 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectSyncHelper: | Information for state received during reconnect: | |
| Round: 537 Timestamp: 2025-09-24T14:03:48.424126Z Next consensus number: 11514 Legacy running event hash: b3859ee9149d2b6f258078aeeaff0e44e179262e5f455e15c695e53b3f2a414510d552c6d3521454aad5ef170ccce246 Legacy running event mnemonic: month-carry-neck-hospital Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1197717645 Root hash: 20dd888e4b7f686a3c154717a24a93db0a75b51361cc85e39bb9e2253d5036075b27736f360aa5e3b06b580722f0a108 (root) ConsistencyTestingToolState / brick-unusual-husband-ask 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 toward-gold-plastic-clown 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -2326559670491910167 /3 chalk-mirror-history-once 4 StringLeaf 536 /4 number-roast-enable-rose | |||||||||
| node4 | 6m 3.847s | 2025-09-24 14:03:51.399 | 1009 | DEBUG | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectStateLoader: | `loadReconnectState` : reloading state | |
| node4 | 6m 3.847s | 2025-09-24 14:03:51.399 | 1010 | INFO | STARTUP | <<reconnect: reconnect-controller>> | ConsistencyTestingToolState: | State initialized with state long -2326559670491910167. | |
| node4 | 6m 3.847s | 2025-09-24 14:03:51.399 | 1011 | INFO | STARTUP | <<reconnect: reconnect-controller>> | ConsistencyTestingToolState: | State initialized with 536 rounds handled. | |
| node4 | 6m 3.848s | 2025-09-24 14:03:51.400 | 1012 | INFO | STARTUP | <<reconnect: reconnect-controller>> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv | |
| node4 | 6m 3.848s | 2025-09-24 14:03:51.400 | 1013 | INFO | STARTUP | <<reconnect: reconnect-controller>> | TransactionHandlingHistory: | Log file found. Parsing previous history | |
| node4 | 6m 3.882s | 2025-09-24 14:03:51.434 | 1020 | INFO | STATE_TO_DISK | <<reconnect: reconnect-controller>> | DefaultSavedStateController: | Signed state from round 537 created, will eventually be written to disk, for reason: RECONNECT | |
| node4 | 6m 3.883s | 2025-09-24 14:03:51.435 | 1021 | INFO | PLATFORM_STATUS | <platformForkJoinThread-8> | DefaultStatusStateMachine: | Platform spent 834.0 ms in BEHIND. Now in RECONNECT_COMPLETE | |
| node4 | 6m 3.884s | 2025-09-24 14:03:51.436 | 1023 | INFO | STARTUP | <platformForkJoinThread-5> | Shadowgraph: | Shadowgraph starting from expiration threshold 510 | |
| node4 | 6m 3.886s | 2025-09-24 14:03:51.438 | 1025 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 537 state to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/537 | |
| node4 | 6m 3.888s | 2025-09-24 14:03:51.440 | 1026 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/2 for round 537 | |
| node4 | 6m 3.900s | 2025-09-24 14:03:51.452 | 1036 | INFO | EVENT_STREAM | <<reconnect: reconnect-controller>> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: b3859ee9149d2b6f258078aeeaff0e44e179262e5f455e15c695e53b3f2a414510d552c6d3521454aad5ef170ccce246 | |
| node4 | 6m 3.902s | 2025-09-24 14:03:51.454 | 1037 | INFO | STARTUP | <platformForkJoinThread-5> | PcesFileManager: | Due to recent operations on this node, the local preconsensus event stream will have a discontinuity. The last file with the old origin round is 2025-09-24T13+58+04.141770069Z_seq0_minr1_maxr273_orgn0.pces. All future files will have an origin round of 537. | |
| node1 | 6m 3.913s | 2025-09-24 14:03:51.465 | 6427 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | ReconnectTeacher: | Finished reconnect in the role of the sender. {"receiving":false,"nodeId":1,"otherNodeId":4,"round":537,"success":false} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload] | |
| node4 | 6m 4.044s | 2025-09-24 14:03:51.596 | 1060 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/2 for round 537 | |
| node4 | 6m 4.048s | 2025-09-24 14:03:51.600 | 1061 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 537 Timestamp: 2025-09-24T14:03:48.424126Z Next consensus number: 11514 Legacy running event hash: b3859ee9149d2b6f258078aeeaff0e44e179262e5f455e15c695e53b3f2a414510d552c6d3521454aad5ef170ccce246 Legacy running event mnemonic: month-carry-neck-hospital Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1197717645 Root hash: 20dd888e4b7f686a3c154717a24a93db0a75b51361cc85e39bb9e2253d5036075b27736f360aa5e3b06b580722f0a108 (root) ConsistencyTestingToolState / brick-unusual-husband-ask 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 toward-gold-plastic-clown 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -2326559670491910167 /3 chalk-mirror-history-once 4 StringLeaf 536 /4 number-roast-enable-rose | |||||||||
| node4 | 6m 4.099s | 2025-09-24 14:03:51.651 | 1073 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T13+58+04.141770069Z_seq0_minr1_maxr273_orgn0.pces | |||||||||
| node4 | 6m 4.099s | 2025-09-24 14:03:51.651 | 1074 | WARN | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | No preconsensus event files meeting specified criteria found to copy. Lower bound: 510 | |
| node4 | 6m 4.107s | 2025-09-24 14:03:51.659 | 1075 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 537 to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/537 {"round":537,"freezeState":false,"reason":"RECONNECT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/537/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 6m 4.110s | 2025-09-24 14:03:51.662 | 1076 | INFO | PLATFORM_STATUS | <platformForkJoinThread-8> | DefaultStatusStateMachine: | Platform spent 226.0 ms in RECONNECT_COMPLETE. Now in CHECKING | |
| node4 | 6m 4.461s | 2025-09-24 14:03:52.013 | 1077 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ] | |
| node4 | 6m 4.464s | 2025-09-24 14:03:52.016 | 1078 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node4 | 6m 4.842s | 2025-09-24 14:03:52.394 | 1079 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:2 H:22659f77d1d4 BR:535), num remaining: 3 | |
| node4 | 6m 4.845s | 2025-09-24 14:03:52.397 | 1080 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:1 H:d32bb45509c1 BR:535), num remaining: 2 | |
| node4 | 6m 4.846s | 2025-09-24 14:03:52.398 | 1081 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:3 H:bb98af845adf BR:536), num remaining: 1 | |
| node4 | 6m 4.846s | 2025-09-24 14:03:52.398 | 1082 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:0 H:31b69402586f BR:536), num remaining: 0 | |
| node4 | 6m 8.249s | 2025-09-24 14:03:55.801 | 1182 | INFO | PLATFORM_STATUS | <platformForkJoinThread-6> | DefaultStatusStateMachine: | Platform spent 4.1 s in CHECKING. Now in ACTIVE | |
| node0 | 6m 14.315s | 2025-09-24 14:04:01.867 | 6615 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 556 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 6m 14.327s | 2025-09-24 14:04:01.879 | 6601 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 556 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 6m 14.350s | 2025-09-24 14:04:01.902 | 6593 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 556 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 6m 14.429s | 2025-09-24 14:04:01.981 | 6614 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 556 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 6m 14.491s | 2025-09-24 14:04:02.043 | 1267 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 556 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 6m 14.602s | 2025-09-24 14:04:02.154 | 6606 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 556 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/556 | |
| node2 | 6m 14.603s | 2025-09-24 14:04:02.155 | 6607 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/37 for round 556 | |
| node4 | 6m 14.680s | 2025-09-24 14:04:02.232 | 1269 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 556 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/556 | |
| node2 | 6m 14.681s | 2025-09-24 14:04:02.233 | 6640 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/37 for round 556 | |
| node4 | 6m 14.681s | 2025-09-24 14:04:02.233 | 1270 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 556 | |
| node2 | 6m 14.683s | 2025-09-24 14:04:02.235 | 6641 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 556 Timestamp: 2025-09-24T14:04:00.061510Z Next consensus number: 11894 Legacy running event hash: 5a604f9dd9243aa10c8a75fd8bcb4705ac934733b945b3d7aa2cdd5ceed6a7e339073f2ed06ca1da7576fe6e1bf076ed Legacy running event mnemonic: equal-victory-swear-feature Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 342243806 Root hash: 19f72c0104efa5bb38f5d5fbb13f85d08586742292037ad582c91f0964013ed60bc0d824f737fc37092fe99abfff0c6c (root) ConsistencyTestingToolState / discover-lens-over-emerge 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 pink-tunnel-inflict-monkey 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -5717273091204940513 /3 core-magnet-artist-message 4 StringLeaf 555 /4 jewel-then-brisk-patient | |||||||||
| node2 | 6m 14.689s | 2025-09-24 14:04:02.241 | 6642 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T14+03+28.295997975Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 6m 14.689s | 2025-09-24 14:04:02.241 | 6643 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 529 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T14+03+28.295997975Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node2 | 6m 14.689s | 2025-09-24 14:04:02.241 | 6644 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 6m 14.690s | 2025-09-24 14:04:02.242 | 6645 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 6m 14.690s | 2025-09-24 14:04:02.242 | 6646 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 556 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/556 {"round":556,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/556/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 6m 14.692s | 2025-09-24 14:04:02.244 | 6647 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/90 | |
| node3 | 6m 14.780s | 2025-09-24 14:04:02.332 | 6596 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 556 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/556 | |
| node3 | 6m 14.781s | 2025-09-24 14:04:02.333 | 6597 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/37 for round 556 | |
| node4 | 6m 14.791s | 2025-09-24 14:04:02.343 | 1306 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 556 | |
| node4 | 6m 14.794s | 2025-09-24 14:04:02.346 | 1307 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 556 Timestamp: 2025-09-24T14:04:00.061510Z Next consensus number: 11894 Legacy running event hash: 5a604f9dd9243aa10c8a75fd8bcb4705ac934733b945b3d7aa2cdd5ceed6a7e339073f2ed06ca1da7576fe6e1bf076ed Legacy running event mnemonic: equal-victory-swear-feature Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 342243806 Root hash: 19f72c0104efa5bb38f5d5fbb13f85d08586742292037ad582c91f0964013ed60bc0d824f737fc37092fe99abfff0c6c (root) ConsistencyTestingToolState / discover-lens-over-emerge 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 pink-tunnel-inflict-monkey 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -5717273091204940513 /3 core-magnet-artist-message 4 StringLeaf 555 /4 jewel-then-brisk-patient | |||||||||
| node4 | 6m 14.804s | 2025-09-24 14:04:02.356 | 1308 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T14+03+51.968556558Z_seq1_minr510_maxr1010_orgn537.pces Last file: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T13+58+04.141770069Z_seq0_minr1_maxr273_orgn0.pces | |||||||||
| node4 | 6m 14.805s | 2025-09-24 14:04:02.357 | 1309 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 529 File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T14+03+51.968556558Z_seq1_minr510_maxr1010_orgn537.pces | |||||||||
| node4 | 6m 14.805s | 2025-09-24 14:04:02.357 | 1310 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 6m 14.807s | 2025-09-24 14:04:02.359 | 1311 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 6m 14.807s | 2025-09-24 14:04:02.359 | 1312 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 556 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/556 {"round":556,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/556/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 6m 14.810s | 2025-09-24 14:04:02.362 | 6619 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 556 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/556 | |
| node1 | 6m 14.813s | 2025-09-24 14:04:02.365 | 6620 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/38 for round 556 | |
| node0 | 6m 14.835s | 2025-09-24 14:04:02.387 | 6620 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 556 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/556 | |
| node0 | 6m 14.836s | 2025-09-24 14:04:02.388 | 6621 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/37 for round 556 | |
| node3 | 6m 14.875s | 2025-09-24 14:04:02.427 | 6638 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/37 for round 556 | |
| node3 | 6m 14.878s | 2025-09-24 14:04:02.430 | 6639 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 556 Timestamp: 2025-09-24T14:04:00.061510Z Next consensus number: 11894 Legacy running event hash: 5a604f9dd9243aa10c8a75fd8bcb4705ac934733b945b3d7aa2cdd5ceed6a7e339073f2ed06ca1da7576fe6e1bf076ed Legacy running event mnemonic: equal-victory-swear-feature Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 342243806 Root hash: 19f72c0104efa5bb38f5d5fbb13f85d08586742292037ad582c91f0964013ed60bc0d824f737fc37092fe99abfff0c6c (root) ConsistencyTestingToolState / discover-lens-over-emerge 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 pink-tunnel-inflict-monkey 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -5717273091204940513 /3 core-magnet-artist-message 4 StringLeaf 555 /4 jewel-then-brisk-patient | |||||||||
| node3 | 6m 14.885s | 2025-09-24 14:04:02.437 | 6640 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T14+03+28.229097993Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T13+58+04.081642292Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 6m 14.885s | 2025-09-24 14:04:02.437 | 6641 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 529 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T14+03+28.229097993Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 6m 14.885s | 2025-09-24 14:04:02.437 | 6642 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 6m 14.886s | 2025-09-24 14:04:02.438 | 6643 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 6m 14.886s | 2025-09-24 14:04:02.438 | 6644 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 556 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/556 {"round":556,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/556/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 6m 14.888s | 2025-09-24 14:04:02.440 | 6645 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/90 | |
| node1 | 6m 14.899s | 2025-09-24 14:04:02.451 | 6666 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/38 for round 556 | |
| node1 | 6m 14.901s | 2025-09-24 14:04:02.453 | 6667 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 556 Timestamp: 2025-09-24T14:04:00.061510Z Next consensus number: 11894 Legacy running event hash: 5a604f9dd9243aa10c8a75fd8bcb4705ac934733b945b3d7aa2cdd5ceed6a7e339073f2ed06ca1da7576fe6e1bf076ed Legacy running event mnemonic: equal-victory-swear-feature Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 342243806 Root hash: 19f72c0104efa5bb38f5d5fbb13f85d08586742292037ad582c91f0964013ed60bc0d824f737fc37092fe99abfff0c6c (root) ConsistencyTestingToolState / discover-lens-over-emerge 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 pink-tunnel-inflict-monkey 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -5717273091204940513 /3 core-magnet-artist-message 4 StringLeaf 555 /4 jewel-then-brisk-patient | |||||||||
| node1 | 6m 14.907s | 2025-09-24 14:04:02.459 | 6668 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T13+58+04.123063967Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T14+03+28.424636234Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 6m 14.907s | 2025-09-24 14:04:02.459 | 6669 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 529 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T14+03+28.424636234Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 6m 14.907s | 2025-09-24 14:04:02.459 | 6670 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 6m 14.908s | 2025-09-24 14:04:02.460 | 6671 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 6m 14.909s | 2025-09-24 14:04:02.461 | 6672 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 556 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/556 {"round":556,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/556/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 6m 14.910s | 2025-09-24 14:04:02.462 | 6673 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/90 | |
| node0 | 6m 14.923s | 2025-09-24 14:04:02.475 | 6662 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/37 for round 556 | |
| node0 | 6m 14.925s | 2025-09-24 14:04:02.477 | 6663 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 556 Timestamp: 2025-09-24T14:04:00.061510Z Next consensus number: 11894 Legacy running event hash: 5a604f9dd9243aa10c8a75fd8bcb4705ac934733b945b3d7aa2cdd5ceed6a7e339073f2ed06ca1da7576fe6e1bf076ed Legacy running event mnemonic: equal-victory-swear-feature Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 342243806 Root hash: 19f72c0104efa5bb38f5d5fbb13f85d08586742292037ad582c91f0964013ed60bc0d824f737fc37092fe99abfff0c6c (root) ConsistencyTestingToolState / discover-lens-over-emerge 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 pink-tunnel-inflict-monkey 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -5717273091204940513 /3 core-magnet-artist-message 4 StringLeaf 555 /4 jewel-then-brisk-patient | |||||||||
| node0 | 6m 14.932s | 2025-09-24 14:04:02.484 | 6664 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T14+03+28.354793027Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T13+58+03.971035961Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 6m 14.932s | 2025-09-24 14:04:02.484 | 6665 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 529 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T14+03+28.354793027Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 6m 14.932s | 2025-09-24 14:04:02.484 | 6666 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 6m 14.933s | 2025-09-24 14:04:02.485 | 6667 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 6m 14.934s | 2025-09-24 14:04:02.486 | 6668 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 556 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/556 {"round":556,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/556/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 6m 14.935s | 2025-09-24 14:04:02.487 | 6669 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/90 | |
| node2 | 7m 13.846s | 2025-09-24 14:05:01.398 | 7782 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 654 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 7m 13.986s | 2025-09-24 14:05:01.538 | 2407 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 654 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 7m 14.010s | 2025-09-24 14:05:01.562 | 7750 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 654 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 7m 14.054s | 2025-09-24 14:05:01.606 | 7774 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 654 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 7m 14.122s | 2025-09-24 14:05:01.674 | 7777 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 654 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 7m 14.283s | 2025-09-24 14:05:01.835 | 7795 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 654 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/654 | |
| node2 | 7m 14.283s | 2025-09-24 14:05:01.835 | 7796 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 654 | |
| node0 | 7m 14.294s | 2025-09-24 14:05:01.846 | 7777 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 654 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/654 | |
| node0 | 7m 14.295s | 2025-09-24 14:05:01.847 | 7778 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 654 | |
| node2 | 7m 14.365s | 2025-09-24 14:05:01.917 | 7827 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 654 | |
| node2 | 7m 14.367s | 2025-09-24 14:05:01.919 | 7828 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 654 Timestamp: 2025-09-24T14:05:00.191122210Z Next consensus number: 14371 Legacy running event hash: afed44e1cf88491a326bbebe5ececdcc69ef7e1ee52b1462155909c4440dd389a2df6beba300ba6d77b968b924e05f55 Legacy running event mnemonic: act-order-audit-cash Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -965155802 Root hash: 550f1c8f472423e5bdc09b83516ffe0c74defcbe4733ada96413f034857728cf4dd667d12c187e8929adb5cf1de1022a (root) ConsistencyTestingToolState / peanut-music-tonight-lake 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lava-federal-surprise-employ 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -2843503239459396820 /3 travel-keep-act-virus 4 StringLeaf 653 /4 practice-relief-athlete-cloth | |||||||||
| node2 | 7m 14.375s | 2025-09-24 14:05:01.927 | 7837 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T14+03+28.295997975Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T13+58+03.857763022Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 7m 14.376s | 2025-09-24 14:05:01.928 | 7838 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 627 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T14+03+28.295997975Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node2 | 7m 14.376s | 2025-09-24 14:05:01.928 | 7839 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 7m 14.378s | 2025-09-24 14:05:01.930 | 7840 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 7m 14.379s | 2025-09-24 14:05:01.931 | 7841 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 654 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/654 {"round":654,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/654/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 7m 14.380s | 2025-09-24 14:05:01.932 | 7842 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/179 | |
| node0 | 7m 14.383s | 2025-09-24 14:05:01.935 | 7809 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 654 | |
| node0 | 7m 14.385s | 2025-09-24 14:05:01.937 | 7810 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 654 Timestamp: 2025-09-24T14:05:00.191122210Z Next consensus number: 14371 Legacy running event hash: afed44e1cf88491a326bbebe5ececdcc69ef7e1ee52b1462155909c4440dd389a2df6beba300ba6d77b968b924e05f55 Legacy running event mnemonic: act-order-audit-cash Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -965155802 Root hash: 550f1c8f472423e5bdc09b83516ffe0c74defcbe4733ada96413f034857728cf4dd667d12c187e8929adb5cf1de1022a (root) ConsistencyTestingToolState / peanut-music-tonight-lake 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lava-federal-surprise-employ 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -2843503239459396820 /3 travel-keep-act-virus 4 StringLeaf 653 /4 practice-relief-athlete-cloth | |||||||||
| node0 | 7m 14.393s | 2025-09-24 14:05:01.945 | 7811 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T14+03+28.354793027Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T13+58+03.971035961Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 7m 14.394s | 2025-09-24 14:05:01.946 | 7812 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 627 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T14+03+28.354793027Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 7m 14.394s | 2025-09-24 14:05:01.946 | 7813 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 7m 14.396s | 2025-09-24 14:05:01.948 | 7814 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 7m 14.397s | 2025-09-24 14:05:01.949 | 7815 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 654 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/654 {"round":654,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/654/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 7m 14.398s | 2025-09-24 14:05:01.950 | 7816 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/179 | |
| node3 | 7m 14.420s | 2025-09-24 14:05:01.972 | 7753 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 654 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/654 | |
| node3 | 7m 14.421s | 2025-09-24 14:05:01.973 | 7754 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 654 | |
| node4 | 7m 14.491s | 2025-09-24 14:05:02.043 | 2420 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 654 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/654 | |
| node4 | 7m 14.492s | 2025-09-24 14:05:02.044 | 2421 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/14 for round 654 | |
| node3 | 7m 14.520s | 2025-09-24 14:05:02.072 | 7785 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 654 | |
| node3 | 7m 14.522s | 2025-09-24 14:05:02.074 | 7786 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 654 Timestamp: 2025-09-24T14:05:00.191122210Z Next consensus number: 14371 Legacy running event hash: afed44e1cf88491a326bbebe5ececdcc69ef7e1ee52b1462155909c4440dd389a2df6beba300ba6d77b968b924e05f55 Legacy running event mnemonic: act-order-audit-cash Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -965155802 Root hash: 550f1c8f472423e5bdc09b83516ffe0c74defcbe4733ada96413f034857728cf4dd667d12c187e8929adb5cf1de1022a (root) ConsistencyTestingToolState / peanut-music-tonight-lake 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lava-federal-surprise-employ 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -2843503239459396820 /3 travel-keep-act-virus 4 StringLeaf 653 /4 practice-relief-athlete-cloth | |||||||||
| node3 | 7m 14.531s | 2025-09-24 14:05:02.083 | 7787 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T14+03+28.229097993Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T13+58+04.081642292Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 7m 14.531s | 2025-09-24 14:05:02.083 | 7788 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 627 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T14+03+28.229097993Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 7m 14.531s | 2025-09-24 14:05:02.083 | 7789 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 7m 14.534s | 2025-09-24 14:05:02.086 | 7790 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 7m 14.534s | 2025-09-24 14:05:02.086 | 7791 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 654 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/654 {"round":654,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/654/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 7m 14.536s | 2025-09-24 14:05:02.088 | 7780 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 654 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/654 | |
| node3 | 7m 14.536s | 2025-09-24 14:05:02.088 | 7792 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/179 | |
| node1 | 7m 14.538s | 2025-09-24 14:05:02.090 | 7781 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/44 for round 654 | |
| node4 | 7m 14.605s | 2025-09-24 14:05:02.157 | 2455 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/14 for round 654 | |
| node4 | 7m 14.607s | 2025-09-24 14:05:02.159 | 2456 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 654 Timestamp: 2025-09-24T14:05:00.191122210Z Next consensus number: 14371 Legacy running event hash: afed44e1cf88491a326bbebe5ececdcc69ef7e1ee52b1462155909c4440dd389a2df6beba300ba6d77b968b924e05f55 Legacy running event mnemonic: act-order-audit-cash Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -965155802 Root hash: 550f1c8f472423e5bdc09b83516ffe0c74defcbe4733ada96413f034857728cf4dd667d12c187e8929adb5cf1de1022a (root) ConsistencyTestingToolState / peanut-music-tonight-lake 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lava-federal-surprise-employ 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -2843503239459396820 /3 travel-keep-act-virus 4 StringLeaf 653 /4 practice-relief-athlete-cloth | |||||||||
| node4 | 7m 14.616s | 2025-09-24 14:05:02.168 | 2457 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T14+03+51.968556558Z_seq1_minr510_maxr1010_orgn537.pces Last file: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T13+58+04.141770069Z_seq0_minr1_maxr273_orgn0.pces | |||||||||
| node4 | 7m 14.616s | 2025-09-24 14:05:02.168 | 2458 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 627 File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T14+03+51.968556558Z_seq1_minr510_maxr1010_orgn537.pces | |||||||||
| node4 | 7m 14.616s | 2025-09-24 14:05:02.168 | 2459 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 7m 14.619s | 2025-09-24 14:05:02.171 | 2460 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 7m 14.619s | 2025-09-24 14:05:02.171 | 2461 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 654 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/654 {"round":654,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/654/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 7m 14.621s | 2025-09-24 14:05:02.173 | 2462 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2 | |
| node1 | 7m 14.633s | 2025-09-24 14:05:02.185 | 7824 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/44 for round 654 | |
| node1 | 7m 14.635s | 2025-09-24 14:05:02.187 | 7825 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 654 Timestamp: 2025-09-24T14:05:00.191122210Z Next consensus number: 14371 Legacy running event hash: afed44e1cf88491a326bbebe5ececdcc69ef7e1ee52b1462155909c4440dd389a2df6beba300ba6d77b968b924e05f55 Legacy running event mnemonic: act-order-audit-cash Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -965155802 Root hash: 550f1c8f472423e5bdc09b83516ffe0c74defcbe4733ada96413f034857728cf4dd667d12c187e8929adb5cf1de1022a (root) ConsistencyTestingToolState / peanut-music-tonight-lake 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lava-federal-surprise-employ 1 SingletonNode RosterService.ROSTER_STATE /1 loyal-judge-need-also 2 VirtualMap RosterService.ROSTERS /2 deposit-patch-rack-already 3 StringLeaf -2843503239459396820 /3 travel-keep-act-virus 4 StringLeaf 653 /4 practice-relief-athlete-cloth | |||||||||
| node1 | 7m 14.643s | 2025-09-24 14:05:02.195 | 7826 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T13+58+04.123063967Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T14+03+28.424636234Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 7m 14.643s | 2025-09-24 14:05:02.195 | 7827 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 627 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T14+03+28.424636234Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 7m 14.643s | 2025-09-24 14:05:02.195 | 7828 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 7m 14.646s | 2025-09-24 14:05:02.198 | 7829 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 7m 14.647s | 2025-09-24 14:05:02.199 | 7830 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 654 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/654 {"round":654,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/654/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 7m 14.648s | 2025-09-24 14:05:02.200 | 7831 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/179 | |
| node3 | 7m 57.070s | 2025-09-24 14:05:44.622 | 8512 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith2 3 to 2>> | NetworkUtils: | Connection broken: 3 <- 2 | |
| java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T14:05:44.621220143Z at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:258) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T14:05:44.621220143Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:148) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:304) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.sendAndReceiveEvents(ShadowgraphSynchronizer.java:241) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:201) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:113) at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:254) ... 7 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295) at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275) at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readEventsINeed$9(SyncUtils.java:278) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:146) ... 12 more | |||||||||
| node4 | 7m 57.074s | 2025-09-24 14:05:44.626 | 3175 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith2 4 to 2>> | NetworkUtils: | Connection broken: 4 <- 2 | |
| java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T14:05:44.623718943Z at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:258) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T14:05:44.623718943Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:148) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:304) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.sendAndReceiveEvents(ShadowgraphSynchronizer.java:241) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:201) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:113) at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:254) ... 7 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295) at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275) at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readEventsINeed$9(SyncUtils.java:278) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:146) ... 12 more | |||||||||
| node3 | 7m 57.120s | 2025-09-24 14:05:44.672 | 8513 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith0 3 to 0>> | NetworkUtils: | Connection broken: 3 <- 0 | |
| java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.FilterInputStream.read(FilterInputStream.java:71) at org.hiero.base.io.streams.AugmentedDataInputStream.read(AugmentedDataInputStream.java:57) at com.swirlds.platform.network.communication.states.WaitForAcceptReject.transition(WaitForAcceptReject.java:48) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) | |||||||||
| node4 | 7m 57.121s | 2025-09-24 14:05:44.673 | 3176 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith0 4 to 0>> | NetworkUtils: | Connection broken: 4 <- 0 | |
| java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T14:05:44.672787463Z at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:258) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T14:05:44.672787463Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:148) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:304) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.sendAndReceiveEvents(ShadowgraphSynchronizer.java:241) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:201) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:113) at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:254) ... 7 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295) at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275) at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readEventsINeed$9(SyncUtils.java:278) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:146) ... 12 more | |||||||||
| node3 | 7m 57.194s | 2025-09-24 14:05:44.746 | 8514 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith1 3 to 1>> | NetworkUtils: | Connection broken: 3 <- 1 | |
| java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.FilterInputStream.read(FilterInputStream.java:71) at org.hiero.base.io.streams.AugmentedDataInputStream.read(AugmentedDataInputStream.java:57) at com.swirlds.platform.network.communication.states.WaitForAcceptReject.transition(WaitForAcceptReject.java:48) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) | |||||||||
| node4 | 7m 57.194s | 2025-09-24 14:05:44.746 | 3180 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith1 4 to 1>> | NetworkUtils: | Connection broken: 4 <- 1 | |
| java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.FilterInputStream.read(FilterInputStream.java:71) at org.hiero.base.io.streams.AugmentedDataInputStream.read(AugmentedDataInputStream.java:57) at com.swirlds.platform.network.communication.states.WaitForAcceptReject.transition(WaitForAcceptReject.java:48) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) | |||||||||
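The `BestEffortPcesFileCopy` entries above select one of the two preconsensus event files on disk by comparing a lower bound (627) against the range encoded in each file name. Below is a minimal, illustrative sketch of that filter; it is not platform code, and the meaning of the `seq`, `minr`, `maxr`, and `orgn` fields is inferred from the file names in the log, on the assumption that `minr`/`maxr` bound the range a file covers.

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Minimal sketch (not platform code): pull the seq/minr/maxr/orgn fields out of
 * PCES file names like
 *   2025-09-24T14+03+28.229097993Z_seq1_minr474_maxr5474_orgn0.pces
 * and reproduce the "lower bound" filter that the BestEffortPcesFileCopy log
 * entries above appear to apply (keep files whose [minr, maxr] span covers the bound).
 */
public final class PcesFileNameSketch {

    // Field names are taken from the file names in the log; their exact semantics are assumed.
    record PcesFileInfo(String timestamp, long sequence, long minBound, long maxBound, long origin) {}

    private static final Pattern NAME_PATTERN =
            Pattern.compile("(.+Z)_seq(\\d+)_minr(\\d+)_maxr(\\d+)_orgn(\\d+)\\.pces");

    static PcesFileInfo parse(final String fileName) {
        final Matcher m = NAME_PATTERN.matcher(fileName);
        if (!m.matches()) {
            throw new IllegalArgumentException("Unrecognized PCES file name: " + fileName);
        }
        return new PcesFileInfo(
                m.group(1),
                Long.parseLong(m.group(2)),
                Long.parseLong(m.group(3)),
                Long.parseLong(m.group(4)),
                Long.parseLong(m.group(5)));
    }

    public static void main(final String[] args) {
        // File names copied from the node3 entries above.
        final List<String> names = List.of(
                "2025-09-24T13+58+04.081642292Z_seq0_minr1_maxr501_orgn0.pces",
                "2025-09-24T14+03+28.229097993Z_seq1_minr474_maxr5474_orgn0.pces");

        final long lowerBound = 627; // "Lower bound: 627" in the log

        for (final String name : names) {
            final PcesFileInfo info = parse(name);
            final boolean copied = info.minBound() <= lowerBound && lowerBound <= info.maxBound();
            System.out.printf("seq=%d range=[%d,%d] origin=%d -> %s%n",
                    info.sequence(), info.minBound(), info.maxBound(), info.origin(),
                    copied ? "copy" : "skip");
        }
    }
}
```

Run against the two node3 file names, the sketch keeps only the `seq1` file, which matches the "Found 1 preconsensus event file meeting specified criteria to copy" entry.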
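Each "Finished writing state" entry also carries a machine-readable `StateSavedToDiskPayload` JSON object at the end of the message. The regex-only sketch below shows one way to pull those fields out of a single log line; it assumes the payload stays on one line and keeps the field order shown above, and a real tool would use a proper JSON parser instead.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Minimal sketch (regex only, no JSON library): extract the StateSavedToDiskPayload
 * fields from a "Finished writing state" log line like the ones above.
 * Assumes the payload JSON is on one line and in the field order shown in the log.
 */
public final class StateSavedPayloadSketch {

    private static final Pattern PAYLOAD = Pattern.compile(
            "\\{\"round\":(\\d+),\"freezeState\":(true|false),\"reason\":\"([^\"]+)\",\"directory\":\"([^\"]+)\"\\}");

    public static void main(final String[] args) {
        // Message copied from the node3 entry above.
        final String line = "Finished writing state for round 654 to disk. Reason: PERIODIC_SNAPSHOT, "
                + "directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/"
                + "com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/654 "
                + "{\"round\":654,\"freezeState\":false,\"reason\":\"PERIODIC_SNAPSHOT\","
                + "\"directory\":\"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/"
                + "com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/654/\"} "
                + "[com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]";

        final Matcher m = PAYLOAD.matcher(line);
        if (m.find()) {
            System.out.println("round       = " + Long.parseLong(m.group(1)));
            System.out.println("freezeState = " + Boolean.parseBoolean(m.group(2)));
            System.out.println("reason      = " + m.group(3));
            System.out.println("directory   = " + m.group(4));
        }
    }
}
```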
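The SOCKET_EXCEPTIONS block at the end is easier to triage when the "Connection broken: X <- Y" warnings are tallied per remote peer: here nodes 3 and 4 each lose their links to peers 0, 1, and 2 within a fraction of a second. The sketch below counts those pairs, assuming (based on the `SyncProtocolWith<N>` thread names) that the right-hand id is the remote peer.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Minimal triage sketch (not platform tooling): tally "Connection broken: X <- Y"
 * warnings per remote peer so that several peers dropping at once stands out
 * from a single flaky link.
 */
public final class ConnectionBrokenTally {

    private static final Pattern BROKEN = Pattern.compile("Connection broken: (\\d+) <- (\\d+)");

    public static void main(final String[] args) {
        // Messages copied from the SOCKET_EXCEPTIONS entries above.
        final List<String> warnings = List.of(
                "Connection broken: 3 <- 2",
                "Connection broken: 4 <- 2",
                "Connection broken: 3 <- 0",
                "Connection broken: 4 <- 0",
                "Connection broken: 3 <- 1",
                "Connection broken: 4 <- 1");

        final Map<String, Integer> perPeer = new LinkedHashMap<>();
        for (final String line : warnings) {
            final Matcher m = BROKEN.matcher(line);
            if (m.find()) {
                // Key on the remote peer, assumed to be the right-hand node id.
                perPeer.merge("peer " + m.group(2), 1, Integer::sum);
            }
        }
        perPeer.forEach((peer, count) ->
                System.out.println(peer + ": " + count + " broken connection(s)"));
    }
}
```

With the six warnings above this prints two broken connections for each of peers 0, 1, and 2 (one from each reporter), which is consistent with all three peers going away at roughly the same time, for example as part of the test scenario, rather than with a single unreliable connection.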