| Node | Elapsed | Timestamp | Seq | Level | Marker | Thread | Logger | Message | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| node2 | 0.000ns | 2025-10-19 05:45:02.213 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node2 | 89.000ms | 2025-10-19 05:45:02.302 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node2 | 105.000ms | 2025-10-19 05:45:02.318 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node2 | 215.000ms | 2025-10-19 05:45:02.428 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [2] are set to run locally | |
| node2 | 241.000ms | 2025-10-19 05:45:02.454 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node0 | 355.000ms | 2025-10-19 05:45:02.568 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node0 | 441.000ms | 2025-10-19 05:45:02.654 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node0 | 456.000ms | 2025-10-19 05:45:02.669 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 493.000ms | 2025-10-19 05:45:02.706 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node3 | 544.000ms | 2025-10-19 05:45:02.757 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node0 | 566.000ms | 2025-10-19 05:45:02.779 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [0] are set to run locally | |
| node4 | 585.000ms | 2025-10-19 05:45:02.798 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node0 | 592.000ms | 2025-10-19 05:45:02.805 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node4 | 602.000ms | 2025-10-19 05:45:02.815 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node3 | 635.000ms | 2025-10-19 05:45:02.848 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node3 | 652.000ms | 2025-10-19 05:45:02.865 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 720.000ms | 2025-10-19 05:45:02.933 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [4] are set to run locally | |
| node4 | 749.000ms | 2025-10-19 05:45:02.962 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node3 | 770.000ms | 2025-10-19 05:45:02.983 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [3] are set to run locally | |
| node3 | 799.000ms | 2025-10-19 05:45:03.012 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node1 | 1.246s | 2025-10-19 05:45:03.459 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node1 | 1.370s | 2025-10-19 05:45:03.583 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node2 | 1.386s | 2025-10-19 05:45:03.599 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1143ms | |
| node1 | 1.391s | 2025-10-19 05:45:03.604 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node2 | 1.397s | 2025-10-19 05:45:03.610 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node2 | 1.401s | 2025-10-19 05:45:03.614 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node2 | 1.449s | 2025-10-19 05:45:03.662 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node2 | 1.529s | 2025-10-19 05:45:03.742 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node2 | 1.530s | 2025-10-19 05:45:03.743 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node1 | 1.536s | 2025-10-19 05:45:03.749 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [1] are set to run locally | |
| node1 | 1.566s | 2025-10-19 05:45:03.779 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node0 | 1.812s | 2025-10-19 05:45:04.025 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1219ms | |
| node0 | 1.820s | 2025-10-19 05:45:04.033 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node0 | 1.823s | 2025-10-19 05:45:04.036 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node0 | 1.855s | 2025-10-19 05:45:04.068 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node0 | 1.922s | 2025-10-19 05:45:04.135 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node0 | 1.924s | 2025-10-19 05:45:04.137 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node4 | 2.144s | 2025-10-19 05:45:04.357 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1394ms | |
| node4 | 2.154s | 2025-10-19 05:45:04.367 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node4 | 2.157s | 2025-10-19 05:45:04.370 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 2.197s | 2025-10-19 05:45:04.410 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node3 | 2.221s | 2025-10-19 05:45:04.434 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1421ms | |
| node3 | 2.232s | 2025-10-19 05:45:04.445 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node3 | 2.236s | 2025-10-19 05:45:04.449 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 2.260s | 2025-10-19 05:45:04.473 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node4 | 2.261s | 2025-10-19 05:45:04.474 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node3 | 2.280s | 2025-10-19 05:45:04.493 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node3 | 2.350s | 2025-10-19 05:45:04.563 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node3 | 2.351s | 2025-10-19 05:45:04.564 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node1 | 3.161s | 2025-10-19 05:45:05.374 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1594ms | |
| node1 | 3.170s | 2025-10-19 05:45:05.383 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node1 | 3.173s | 2025-10-19 05:45:05.386 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node1 | 3.210s | 2025-10-19 05:45:05.423 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node1 | 3.272s | 2025-10-19 05:45:05.485 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node1 | 3.273s | 2025-10-19 05:45:05.486 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node2 | 3.563s | 2025-10-19 05:45:05.776 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node2 | 3.645s | 2025-10-19 05:45:05.858 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 3.647s | 2025-10-19 05:45:05.860 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node2 | 3.685s | 2025-10-19 05:45:05.898 | 21 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node0 | 3.951s | 2025-10-19 05:45:06.164 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node0 | 4.038s | 2025-10-19 05:45:06.251 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 4.040s | 2025-10-19 05:45:06.253 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node0 | 4.078s | 2025-10-19 05:45:06.291 | 21 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 4.292s | 2025-10-19 05:45:06.505 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node4 | 4.383s | 2025-10-19 05:45:06.596 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 4.385s | 2025-10-19 05:45:06.598 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node3 | 4.397s | 2025-10-19 05:45:06.610 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node2 | 4.420s | 2025-10-19 05:45:06.633 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 4.422s | 2025-10-19 05:45:06.635 | 27 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node4 | 4.425s | 2025-10-19 05:45:06.638 | 21 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node2 | 4.427s | 2025-10-19 05:45:06.640 | 28 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node2 | 4.436s | 2025-10-19 05:45:06.649 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 4.437s | 2025-10-19 05:45:06.650 | 30 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 4.499s | 2025-10-19 05:45:06.712 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 4.502s | 2025-10-19 05:45:06.715 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node3 | 4.541s | 2025-10-19 05:45:06.754 | 21 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node0 | 4.848s | 2025-10-19 05:45:07.061 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 4.850s | 2025-10-19 05:45:07.063 | 27 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node0 | 4.856s | 2025-10-19 05:45:07.069 | 28 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node0 | 4.864s | 2025-10-19 05:45:07.077 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 4.866s | 2025-10-19 05:45:07.079 | 30 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5.232s | 2025-10-19 05:45:07.445 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5.234s | 2025-10-19 05:45:07.447 | 27 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node4 | 5.240s | 2025-10-19 05:45:07.453 | 28 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node4 | 5.250s | 2025-10-19 05:45:07.463 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5.252s | 2025-10-19 05:45:07.465 | 30 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 5.318s | 2025-10-19 05:45:07.531 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node3 | 5.369s | 2025-10-19 05:45:07.582 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 5.371s | 2025-10-19 05:45:07.584 | 27 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node3 | 5.378s | 2025-10-19 05:45:07.591 | 28 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node3 | 5.389s | 2025-10-19 05:45:07.602 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 5.392s | 2025-10-19 05:45:07.605 | 30 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 5.426s | 2025-10-19 05:45:07.639 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 5.428s | 2025-10-19 05:45:07.641 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node1 | 5.470s | 2025-10-19 05:45:07.683 | 21 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node2 | 5.538s | 2025-10-19 05:45:07.751 | 31 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26236663] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=138230, randomLong=-6596105471456695818, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=16520, randomLong=3706033624217615886, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1074540, data=35, exception=null] OS Health Check Report - Complete (took 1022 ms) | |||||||||
| node2 | 5.568s | 2025-10-19 05:45:07.781 | 32 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node2 | 5.575s | 2025-10-19 05:45:07.788 | 33 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node2 | 5.577s | 2025-10-19 05:45:07.790 | 34 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node2 | 5.654s | 2025-10-19 05:45:07.867 | 35 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Ih1M7w==", "port": 30124 }, { "ipAddressV4": "CoAAcQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IhtuGA==", "port": 30125 }, { "ipAddressV4": "CoAAag==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Ijk8wQ==", "port": 30126 }, { "ipAddressV4": "CoAAaw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IixI7A==", "port": 30127 }, { "ipAddressV4": "CoAAdQ==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+G/4Q==", "port": 30128 }, { "ipAddressV4": "CoAAbA==", "port": 30128 }] }] } | |||||||||
| node2 | 5.676s | 2025-10-19 05:45:07.889 | 36 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/2/ConsistencyTestLog.csv | |
| node2 | 5.677s | 2025-10-19 05:45:07.890 | 37 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node2 | 5.688s | 2025-10-19 05:45:07.901 | 38 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: b9bf2c6533c0aff6d626f6b266614d5c1a779197a94970b7a7b1f07b4dc0d16a065cc1d10f7d3f394f4a47f7ecebe42a (root) VirtualMap state / sight-million-job-access | |||||||||
| node2 | 5.903s | 2025-10-19 05:45:08.116 | 40 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node2 | 5.907s | 2025-10-19 05:45:08.120 | 41 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node2 | 5.913s | 2025-10-19 05:45:08.126 | 42 | INFO | STARTUP | <<start-node-2>> | ConsistencyTestingToolMain: | init called in Main for node 2. | |
| node2 | 5.914s | 2025-10-19 05:45:08.127 | 43 | INFO | STARTUP | <<start-node-2>> | SwirldsPlatform: | Starting platform 2 | |
| node2 | 5.915s | 2025-10-19 05:45:08.128 | 44 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node2 | 5.918s | 2025-10-19 05:45:08.131 | 45 | INFO | STARTUP | <<start-node-2>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node2 | 5.919s | 2025-10-19 05:45:08.132 | 46 | INFO | STARTUP | <<start-node-2>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node2 | 5.919s | 2025-10-19 05:45:08.132 | 47 | INFO | STARTUP | <<start-node-2>> | InputWireChecks: | All input wires have been bound. | |
| node2 | 5.921s | 2025-10-19 05:45:08.134 | 48 | WARN | STARTUP | <<start-node-2>> | PcesFileTracker: | No preconsensus event files available | |
| node2 | 5.921s | 2025-10-19 05:45:08.134 | 49 | INFO | STARTUP | <<start-node-2>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node2 | 5.923s | 2025-10-19 05:45:08.136 | 50 | INFO | STARTUP | <<start-node-2>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node2 | 5.924s | 2025-10-19 05:45:08.137 | 51 | INFO | STARTUP | <<app: appMain 2>> | ConsistencyTestingToolMain: | run called in Main. | |
| node2 | 5.926s | 2025-10-19 05:45:08.139 | 52 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | StatusStateMachine: | Platform spent 184.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node2 | 5.930s | 2025-10-19 05:45:08.143 | 53 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | StatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node0 | 5.973s | 2025-10-19 05:45:08.186 | 31 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26345170] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=286970, randomLong=-4013611472659356013, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=15660, randomLong=-7808645273280548358, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1404180, data=35, exception=null] OS Health Check Report - Complete (took 1025 ms) | |||||||||
| node0 | 6.005s | 2025-10-19 05:45:08.218 | 32 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node0 | 6.012s | 2025-10-19 05:45:08.225 | 33 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node0 | 6.016s | 2025-10-19 05:45:08.229 | 34 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node0 | 6.105s | 2025-10-19 05:45:08.318 | 35 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Ih1M7w==", "port": 30124 }, { "ipAddressV4": "CoAAcQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IhtuGA==", "port": 30125 }, { "ipAddressV4": "CoAAag==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Ijk8wQ==", "port": 30126 }, { "ipAddressV4": "CoAAaw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IixI7A==", "port": 30127 }, { "ipAddressV4": "CoAAdQ==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+G/4Q==", "port": 30128 }, { "ipAddressV4": "CoAAbA==", "port": 30128 }] }] } | |||||||||
| node0 | 6.129s | 2025-10-19 05:45:08.342 | 36 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/0/ConsistencyTestLog.csv | |
| node0 | 6.130s | 2025-10-19 05:45:08.343 | 37 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node0 | 6.144s | 2025-10-19 05:45:08.357 | 38 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: b9bf2c6533c0aff6d626f6b266614d5c1a779197a94970b7a7b1f07b4dc0d16a065cc1d10f7d3f394f4a47f7ecebe42a (root) VirtualMap state / sight-million-job-access | |||||||||
| node1 | 6.337s | 2025-10-19 05:45:08.550 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 6.339s | 2025-10-19 05:45:08.552 | 26 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node1 | 6.345s | 2025-10-19 05:45:08.558 | 28 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node1 | 6.357s | 2025-10-19 05:45:08.570 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 6.361s | 2025-10-19 05:45:08.574 | 30 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 6.386s | 2025-10-19 05:45:08.599 | 40 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node4 | 6.387s | 2025-10-19 05:45:08.600 | 31 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26277067] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=219060, randomLong=-809688318715343210, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=11340, randomLong=2375310599874383933, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1510940, data=35, exception=null] OS Health Check Report - Complete (took 1028 ms) | |||||||||
| node0 | 6.391s | 2025-10-19 05:45:08.604 | 41 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node0 | 6.398s | 2025-10-19 05:45:08.611 | 42 | INFO | STARTUP | <<start-node-0>> | ConsistencyTestingToolMain: | init called in Main for node 0. | |
| node0 | 6.399s | 2025-10-19 05:45:08.612 | 43 | INFO | STARTUP | <<start-node-0>> | SwirldsPlatform: | Starting platform 0 | |
| node0 | 6.400s | 2025-10-19 05:45:08.613 | 44 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node0 | 6.404s | 2025-10-19 05:45:08.617 | 45 | INFO | STARTUP | <<start-node-0>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node0 | 6.405s | 2025-10-19 05:45:08.618 | 46 | INFO | STARTUP | <<start-node-0>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node0 | 6.406s | 2025-10-19 05:45:08.619 | 47 | INFO | STARTUP | <<start-node-0>> | InputWireChecks: | All input wires have been bound. | |
| node0 | 6.407s | 2025-10-19 05:45:08.620 | 48 | WARN | STARTUP | <<start-node-0>> | PcesFileTracker: | No preconsensus event files available | |
| node0 | 6.408s | 2025-10-19 05:45:08.621 | 49 | INFO | STARTUP | <<start-node-0>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node0 | 6.410s | 2025-10-19 05:45:08.623 | 50 | INFO | STARTUP | <<start-node-0>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node0 | 6.412s | 2025-10-19 05:45:08.625 | 51 | INFO | STARTUP | <<app: appMain 0>> | ConsistencyTestingToolMain: | run called in Main. | |
| node0 | 6.414s | 2025-10-19 05:45:08.627 | 52 | INFO | PLATFORM_STATUS | <platformForkJoinThread-5> | StatusStateMachine: | Platform spent 208.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node0 | 6.420s | 2025-10-19 05:45:08.633 | 53 | INFO | PLATFORM_STATUS | <platformForkJoinThread-5> | StatusStateMachine: | Platform spent 5.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node4 | 6.423s | 2025-10-19 05:45:08.636 | 32 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node4 | 6.432s | 2025-10-19 05:45:08.645 | 33 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node4 | 6.434s | 2025-10-19 05:45:08.647 | 34 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node4 | 6.515s | 2025-10-19 05:45:08.728 | 35 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Ih1M7w==", "port": 30124 }, { "ipAddressV4": "CoAAcQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IhtuGA==", "port": 30125 }, { "ipAddressV4": "CoAAag==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Ijk8wQ==", "port": 30126 }, { "ipAddressV4": "CoAAaw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IixI7A==", "port": 30127 }, { "ipAddressV4": "CoAAdQ==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+G/4Q==", "port": 30128 }, { "ipAddressV4": "CoAAbA==", "port": 30128 }] }] } | |||||||||
| node3 | 6.523s | 2025-10-19 05:45:08.736 | 31 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26188270] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=227869, randomLong=-1942683485953604788, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=14530, randomLong=2525810676053703248, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1616813, data=35, exception=null] OS Health Check Report - Complete (took 1024 ms) | |||||||||
| node4 | 6.541s | 2025-10-19 05:45:08.754 | 36 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv | |
| node4 | 6.542s | 2025-10-19 05:45:08.755 | 37 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node3 | 6.555s | 2025-10-19 05:45:08.768 | 32 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node4 | 6.555s | 2025-10-19 05:45:08.768 | 38 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: b9bf2c6533c0aff6d626f6b266614d5c1a779197a94970b7a7b1f07b4dc0d16a065cc1d10f7d3f394f4a47f7ecebe42a (root) VirtualMap state / sight-million-job-access | |||||||||
| node3 | 6.563s | 2025-10-19 05:45:08.776 | 33 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node3 | 6.566s | 2025-10-19 05:45:08.779 | 34 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node3 | 6.648s | 2025-10-19 05:45:08.861 | 35 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Ih1M7w==", "port": 30124 }, { "ipAddressV4": "CoAAcQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IhtuGA==", "port": 30125 }, { "ipAddressV4": "CoAAag==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Ijk8wQ==", "port": 30126 }, { "ipAddressV4": "CoAAaw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IixI7A==", "port": 30127 }, { "ipAddressV4": "CoAAdQ==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+G/4Q==", "port": 30128 }, { "ipAddressV4": "CoAAbA==", "port": 30128 }] }] } | |||||||||
| node3 | 6.672s | 2025-10-19 05:45:08.885 | 36 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/3/ConsistencyTestLog.csv | |
| node3 | 6.673s | 2025-10-19 05:45:08.886 | 37 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node3 | 6.685s | 2025-10-19 05:45:08.898 | 38 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: b9bf2c6533c0aff6d626f6b266614d5c1a779197a94970b7a7b1f07b4dc0d16a065cc1d10f7d3f394f4a47f7ecebe42a (root) VirtualMap state / sight-million-job-access | |||||||||
| node4 | 6.754s | 2025-10-19 05:45:08.967 | 40 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node4 | 6.759s | 2025-10-19 05:45:08.972 | 41 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node4 | 6.765s | 2025-10-19 05:45:08.978 | 42 | INFO | STARTUP | <<start-node-4>> | ConsistencyTestingToolMain: | init called in Main for node 4. | |
| node4 | 6.766s | 2025-10-19 05:45:08.979 | 43 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | Starting platform 4 | |
| node4 | 6.767s | 2025-10-19 05:45:08.980 | 44 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node4 | 6.770s | 2025-10-19 05:45:08.983 | 45 | INFO | STARTUP | <<start-node-4>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node4 | 6.771s | 2025-10-19 05:45:08.984 | 46 | INFO | STARTUP | <<start-node-4>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node4 | 6.772s | 2025-10-19 05:45:08.985 | 47 | INFO | STARTUP | <<start-node-4>> | InputWireChecks: | All input wires have been bound. | |
| node4 | 6.774s | 2025-10-19 05:45:08.987 | 48 | WARN | STARTUP | <<start-node-4>> | PcesFileTracker: | No preconsensus event files available | |
| node4 | 6.774s | 2025-10-19 05:45:08.987 | 49 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node4 | 6.776s | 2025-10-19 05:45:08.989 | 50 | INFO | STARTUP | <<start-node-4>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node4 | 6.777s | 2025-10-19 05:45:08.990 | 51 | INFO | STARTUP | <<app: appMain 4>> | ConsistencyTestingToolMain: | run called in Main. | |
| node4 | 6.781s | 2025-10-19 05:45:08.994 | 52 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 167.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node4 | 6.785s | 2025-10-19 05:45:08.998 | 53 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node3 | 6.897s | 2025-10-19 05:45:09.110 | 40 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node3 | 6.901s | 2025-10-19 05:45:09.114 | 41 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node3 | 6.908s | 2025-10-19 05:45:09.121 | 42 | INFO | STARTUP | <<start-node-3>> | ConsistencyTestingToolMain: | init called in Main for node 3. | |
| node3 | 6.909s | 2025-10-19 05:45:09.122 | 43 | INFO | STARTUP | <<start-node-3>> | SwirldsPlatform: | Starting platform 3 | |
| node3 | 6.910s | 2025-10-19 05:45:09.123 | 44 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node3 | 6.914s | 2025-10-19 05:45:09.127 | 45 | INFO | STARTUP | <<start-node-3>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node3 | 6.915s | 2025-10-19 05:45:09.128 | 46 | INFO | STARTUP | <<start-node-3>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node3 | 6.915s | 2025-10-19 05:45:09.128 | 47 | INFO | STARTUP | <<start-node-3>> | InputWireChecks: | All input wires have been bound. | |
| node3 | 6.917s | 2025-10-19 05:45:09.130 | 48 | WARN | STARTUP | <<start-node-3>> | PcesFileTracker: | No preconsensus event files available | |
| node3 | 6.917s | 2025-10-19 05:45:09.130 | 49 | INFO | STARTUP | <<start-node-3>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node3 | 6.920s | 2025-10-19 05:45:09.133 | 50 | INFO | STARTUP | <<start-node-3>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node3 | 6.921s | 2025-10-19 05:45:09.134 | 51 | INFO | STARTUP | <<app: appMain 3>> | ConsistencyTestingToolMain: | run called in Main. | |
| node3 | 6.923s | 2025-10-19 05:45:09.136 | 52 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 177.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node3 | 6.929s | 2025-10-19 05:45:09.142 | 53 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node1 | 7.478s | 2025-10-19 05:45:09.691 | 31 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26233294] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=143590, randomLong=323301811343891665, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=11389, randomLong=-7041282181624090568, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1259139, data=35, exception=null] OS Health Check Report - Complete (took 1026 ms) | |||||||||
| node1 | 7.510s | 2025-10-19 05:45:09.723 | 32 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node1 | 7.518s | 2025-10-19 05:45:09.731 | 33 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node1 | 7.521s | 2025-10-19 05:45:09.734 | 34 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node1 | 7.604s | 2025-10-19 05:45:09.817 | 35 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Ih1M7w==", "port": 30124 }, { "ipAddressV4": "CoAAcQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IhtuGA==", "port": 30125 }, { "ipAddressV4": "CoAAag==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Ijk8wQ==", "port": 30126 }, { "ipAddressV4": "CoAAaw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IixI7A==", "port": 30127 }, { "ipAddressV4": "CoAAdQ==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+G/4Q==", "port": 30128 }, { "ipAddressV4": "CoAAbA==", "port": 30128 }] }] } | |||||||||
| node1 | 7.630s | 2025-10-19 05:45:09.843 | 36 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/1/ConsistencyTestLog.csv | |
| node1 | 7.631s | 2025-10-19 05:45:09.844 | 37 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node1 | 7.644s | 2025-10-19 05:45:09.857 | 38 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: b9bf2c6533c0aff6d626f6b266614d5c1a779197a94970b7a7b1f07b4dc0d16a065cc1d10f7d3f394f4a47f7ecebe42a (root) VirtualMap state / sight-million-job-access | |||||||||
| node1 | 7.894s | 2025-10-19 05:45:10.107 | 40 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node1 | 7.900s | 2025-10-19 05:45:10.113 | 41 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node1 | 7.906s | 2025-10-19 05:45:10.119 | 42 | INFO | STARTUP | <<start-node-1>> | ConsistencyTestingToolMain: | init called in Main for node 1. | |
| node1 | 7.907s | 2025-10-19 05:45:10.120 | 43 | INFO | STARTUP | <<start-node-1>> | SwirldsPlatform: | Starting platform 1 | |
| node1 | 7.909s | 2025-10-19 05:45:10.122 | 44 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node1 | 7.913s | 2025-10-19 05:45:10.126 | 45 | INFO | STARTUP | <<start-node-1>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node1 | 7.914s | 2025-10-19 05:45:10.127 | 46 | INFO | STARTUP | <<start-node-1>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node1 | 7.915s | 2025-10-19 05:45:10.128 | 47 | INFO | STARTUP | <<start-node-1>> | InputWireChecks: | All input wires have been bound. | |
| node1 | 7.916s | 2025-10-19 05:45:10.129 | 48 | WARN | STARTUP | <<start-node-1>> | PcesFileTracker: | No preconsensus event files available | |
| node1 | 7.917s | 2025-10-19 05:45:10.130 | 49 | INFO | STARTUP | <<start-node-1>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node1 | 7.918s | 2025-10-19 05:45:10.131 | 50 | INFO | STARTUP | <<start-node-1>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node1 | 7.920s | 2025-10-19 05:45:10.133 | 51 | INFO | STARTUP | <<app: appMain 1>> | ConsistencyTestingToolMain: | run called in Main. | |
| node1 | 7.923s | 2025-10-19 05:45:10.136 | 52 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 219.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node1 | 7.928s | 2025-10-19 05:45:10.141 | 53 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node2 | 8.926s | 2025-10-19 05:45:11.139 | 54 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting2.csv' ] | |
| node2 | 8.927s | 2025-10-19 05:45:11.140 | 55 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node0 | 9.411s | 2025-10-19 05:45:11.624 | 54 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting0.csv' ] | |
| node0 | 9.413s | 2025-10-19 05:45:11.626 | 55 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node4 | 9.780s | 2025-10-19 05:45:11.993 | 54 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ] | |
| node4 | 9.782s | 2025-10-19 05:45:11.995 | 55 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node3 | 9.921s | 2025-10-19 05:45:12.134 | 54 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting3.csv' ] | |
| node3 | 9.923s | 2025-10-19 05:45:12.136 | 55 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node1 | 10.921s | 2025-10-19 05:45:13.134 | 54 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting1.csv' ] | |
| node1 | 10.924s | 2025-10-19 05:45:13.137 | 55 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node2 | 16.022s | 2025-10-19 05:45:18.235 | 56 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node0 | 16.507s | 2025-10-19 05:45:18.720 | 56 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | StatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node4 | 16.873s | 2025-10-19 05:45:19.086 | 56 | INFO | PLATFORM_STATUS | <platformForkJoinThread-5> | StatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node3 | 17.017s | 2025-10-19 05:45:19.230 | 56 | INFO | PLATFORM_STATUS | <platformForkJoinThread-6> | StatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node0 | 17.988s | 2025-10-19 05:45:20.201 | 58 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node3 | 18.006s | 2025-10-19 05:45:20.219 | 58 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node1 | 18.017s | 2025-10-19 05:45:20.230 | 56 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | StatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node2 | 18.031s | 2025-10-19 05:45:20.244 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | StatusStateMachine: | Platform spent 2.0 s in CHECKING. Now in ACTIVE | |
| node2 | 18.034s | 2025-10-19 05:45:20.247 | 59 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node4 | 18.041s | 2025-10-19 05:45:20.254 | 58 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node1 | 18.080s | 2025-10-19 05:45:20.293 | 58 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node2 | 18.179s | 2025-10-19 05:45:20.392 | 74 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1 | |
| node2 | 18.181s | 2025-10-19 05:45:20.394 | 75 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node4 | 18.290s | 2025-10-19 05:45:20.503 | 73 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1 | |
| node4 | 18.291s | 2025-10-19 05:45:20.504 | 74 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node1 | 18.307s | 2025-10-19 05:45:20.520 | 73 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1 | |
| node1 | 18.310s | 2025-10-19 05:45:20.523 | 74 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node3 | 18.352s | 2025-10-19 05:45:20.565 | 73 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1 | |
| node3 | 18.354s | 2025-10-19 05:45:20.567 | 74 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node0 | 18.375s | 2025-10-19 05:45:20.588 | 73 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1 | |
| node0 | 18.377s | 2025-10-19 05:45:20.590 | 74 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node2 | 18.414s | 2025-10-19 05:45:20.627 | 107 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node2 | 18.417s | 2025-10-19 05:45:20.630 | 108 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-10-19T05:45:18.890450015Z Next consensus number: 1 Legacy running event hash: 61f64dcf011a0560126ddfc54a3e252f86dcceb53a2ac7c2d32b7b4a1836ed42e5de41655f41c4f73173dad2887d2e15 Legacy running event mnemonic: disagree-rigid-cash-project Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: d25aa38d87001c0a137b57aec62af56fe9a9bd13b503ec4a59d9534e159d5ea08a21689e483be1f24cc35bd207fbd2e5 (root) VirtualMap state / regular-mobile-cactus-life | |||||||||
| node3 | 18.432s | 2025-10-19 05:45:20.645 | 90 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 1.4 s in CHECKING. Now in ACTIVE | |
| node2 | 18.453s | 2025-10-19 05:45:20.666 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+45+18.277572870Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 18.454s | 2025-10-19 05:45:20.667 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+45+18.277572870Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 18.455s | 2025-10-19 05:45:20.668 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 18.456s | 2025-10-19 05:45:20.669 | 90 | INFO | PLATFORM_STATUS | <platformForkJoinThread-6> | StatusStateMachine: | Platform spent 1.9 s in CHECKING. Now in ACTIVE | |
| node2 | 18.456s | 2025-10-19 05:45:20.669 | 114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 18.462s | 2025-10-19 05:45:20.675 | 115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
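The `BestEffortPcesFileCopy` lines above refer to preconsensus event stream files whose names pack several fields into the file name itself (for example `..._seq0_minr1_maxr501_orgn0.pces`). A hedged parsing sketch follows; the meaning of `minr`, `maxr`, and `orgn` is inferred only from the surrounding log output (the replay line mentions a max birth round, the copy line a lower bound), so treat the printed labels as a reading aid rather than the platform's own definitions:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Illustrative sketch: splits a preconsensus event (PCES) file name like the
 * ones referenced by BestEffortPcesFileCopy above into its visible parts.
 * Field interpretation is inferred from this log only.
 */
public class PcesFileNameParser {

    private static final Pattern NAME = Pattern.compile(
            "(?<time>.+Z)_seq(?<seq>\\d+)_minr(?<minr>\\d+)_maxr(?<maxr>\\d+)_orgn(?<orgn>\\d+)\\.pces");

    public static void main(String[] args) {
        String name = "2025-10-19T05+45+18.277572870Z_seq0_minr1_maxr501_orgn0.pces";
        Matcher m = NAME.matcher(name);
        if (m.matches()) {
            // ':' is not a legal file-name character on some platforms, hence the '+' separators
            System.out.println("created  = " + m.group("time").replace('+', ':'));
            System.out.println("sequence = " + m.group("seq"));
            System.out.println("min      = " + m.group("minr"));
            System.out.println("max      = " + m.group("maxr"));
            System.out.println("origin   = " + m.group("orgn"));
        }
    }
}
```

For node2's file this yields created = 2025-10-19T05:45:18.277572870Z, sequence 0, min 1, max 501, origin 0, which matches the bounds quoted in the copy log lines.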
| node4 | 18.495s | 2025-10-19 05:45:20.708 | 107 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | StatusStateMachine: | Platform spent 1.6 s in CHECKING. Now in ACTIVE | |
| node1 | 18.549s | 2025-10-19 05:45:20.762 | 107 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node1 | 18.553s | 2025-10-19 05:45:20.766 | 108 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-10-19T05:45:18.890450015Z Next consensus number: 1 Legacy running event hash: 61f64dcf011a0560126ddfc54a3e252f86dcceb53a2ac7c2d32b7b4a1836ed42e5de41655f41c4f73173dad2887d2e15 Legacy running event mnemonic: disagree-rigid-cash-project Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: d25aa38d87001c0a137b57aec62af56fe9a9bd13b503ec4a59d9534e159d5ea08a21689e483be1f24cc35bd207fbd2e5 (root) VirtualMap state / regular-mobile-cactus-life | |||||||||
| node4 | 18.568s | 2025-10-19 05:45:20.781 | 110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node4 | 18.571s | 2025-10-19 05:45:20.784 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-10-19T05:45:18.890450015Z Next consensus number: 1 Legacy running event hash: 61f64dcf011a0560126ddfc54a3e252f86dcceb53a2ac7c2d32b7b4a1836ed42e5de41655f41c4f73173dad2887d2e15 Legacy running event mnemonic: disagree-rigid-cash-project Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: d25aa38d87001c0a137b57aec62af56fe9a9bd13b503ec4a59d9534e159d5ea08a21689e483be1f24cc35bd207fbd2e5 (root) VirtualMap state / regular-mobile-cactus-life | |||||||||
| node1 | 18.589s | 2025-10-19 05:45:20.802 | 109 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+45+18.447974781Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 18.590s | 2025-10-19 05:45:20.803 | 110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+45+18.447974781Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 18.590s | 2025-10-19 05:45:20.803 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 18.591s | 2025-10-19 05:45:20.804 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 18.597s | 2025-10-19 05:45:20.810 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 18.601s | 2025-10-19 05:45:20.814 | 107 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node3 | 18.605s | 2025-10-19 05:45:20.818 | 108 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-10-19T05:45:18.890450015Z Next consensus number: 1 Legacy running event hash: 61f64dcf011a0560126ddfc54a3e252f86dcceb53a2ac7c2d32b7b4a1836ed42e5de41655f41c4f73173dad2887d2e15 Legacy running event mnemonic: disagree-rigid-cash-project Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: d25aa38d87001c0a137b57aec62af56fe9a9bd13b503ec4a59d9534e159d5ea08a21689e483be1f24cc35bd207fbd2e5 (root) VirtualMap state / regular-mobile-cactus-life | |||||||||
| node4 | 18.611s | 2025-10-19 05:45:20.824 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+45+18.383543156Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 18.613s | 2025-10-19 05:45:20.826 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+45+18.383543156Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 18.613s | 2025-10-19 05:45:20.826 | 114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 18.614s | 2025-10-19 05:45:20.827 | 108 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node4 | 18.614s | 2025-10-19 05:45:20.827 | 115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 18.617s | 2025-10-19 05:45:20.830 | 109 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-10-19T05:45:18.890450015Z Next consensus number: 1 Legacy running event hash: 61f64dcf011a0560126ddfc54a3e252f86dcceb53a2ac7c2d32b7b4a1836ed42e5de41655f41c4f73173dad2887d2e15 Legacy running event mnemonic: disagree-rigid-cash-project Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: d25aa38d87001c0a137b57aec62af56fe9a9bd13b503ec4a59d9534e159d5ea08a21689e483be1f24cc35bd207fbd2e5 (root) VirtualMap state / regular-mobile-cactus-life | |||||||||
| node4 | 18.621s | 2025-10-19 05:45:20.834 | 116 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 18.645s | 2025-10-19 05:45:20.858 | 109 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+45+18.434400858Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 18.645s | 2025-10-19 05:45:20.858 | 110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+45+18.434400858Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 18.646s | 2025-10-19 05:45:20.859 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 18.647s | 2025-10-19 05:45:20.860 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 18.654s | 2025-10-19 05:45:20.867 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 18.661s | 2025-10-19 05:45:20.874 | 110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+45+18.443550920Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 18.662s | 2025-10-19 05:45:20.875 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+45+18.443550920Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 18.662s | 2025-10-19 05:45:20.875 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 18.663s | 2025-10-19 05:45:20.876 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 18.670s | 2025-10-19 05:45:20.883 | 114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 19.419s | 2025-10-19 05:45:21.632 | 135 | INFO | PLATFORM_STATUS | <platformForkJoinThread-7> | StatusStateMachine: | Platform spent 1.4 s in CHECKING. Now in ACTIVE | |
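With node1's transition above, all five nodes have walked the same status sequence reported by `StatusStateMachine`: STARTING_UP, REPLAYING_EVENTS, OBSERVING (about 10.1 s on every node), CHECKING (1.4 s to 2.0 s), then ACTIVE. A compact way to keep that happy-path order in mind; this enum lists only the statuses reported in this log, and the platform presumably defines additional statuses that a clean five-node genesis start never visits:

```java
/**
 * Illustrative only: the platform statuses that the StatusStateMachine lines
 * in this run actually report, in the order every node passed through them.
 */
public enum ObservedPlatformStatus {
    STARTING_UP,      // roughly 170-220 ms per node in this run
    REPLAYING_EVENTS, // roughly 4 ms; the preconsensus stream was empty at genesis
    OBSERVING,        // about 10.1 s on every node
    CHECKING,         // 1.4 s to 2.0 s
    ACTIVE;           // steady state for the remainder of the log

    /** The next status in the startup sequence seen above, or null once ACTIVE. */
    public ObservedPlatformStatus next() {
        int next = ordinal() + 1;
        return next < values().length ? values()[next] : null;
    }
}
```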
| node3 | 59.252s | 2025-10-19 05:46:01.465 | 1110 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 92 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 59.264s | 2025-10-19 05:46:01.477 | 1099 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 92 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 59.298s | 2025-10-19 05:46:01.511 | 1072 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 92 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 59.303s | 2025-10-19 05:46:01.516 | 1095 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 92 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 59.366s | 2025-10-19 05:46:01.579 | 1089 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 92 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 59.453s | 2025-10-19 05:46:01.666 | 1113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 92 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/92 | |
| node3 | 59.454s | 2025-10-19 05:46:01.667 | 1114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 92 | |
| node0 | 59.509s | 2025-10-19 05:46:01.722 | 1098 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 92 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/92 | |
| node0 | 59.510s | 2025-10-19 05:46:01.723 | 1099 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 92 | |
| node2 | 59.520s | 2025-10-19 05:46:01.733 | 1075 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 92 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/92 | |
| node2 | 59.520s | 2025-10-19 05:46:01.733 | 1076 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 92 | |
| node3 | 59.538s | 2025-10-19 05:46:01.751 | 1153 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 92 | |
| node3 | 59.541s | 2025-10-19 05:46:01.754 | 1154 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 92 Timestamp: 2025-10-19T05:46:00.477766Z Next consensus number: 3285 Legacy running event hash: 1c138d74fd75933d5f550c2f8af7d27550a8428fefac95c42aa2170bd6f153dbf55abd6a57a9fb18d29203dab5b302f1 Legacy running event mnemonic: list-face-fabric-observe Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1254291272 Root hash: d168871a73c118bb8d4697134c5070246ede481845d164435577d1f33257f72f4687743a0f0c5bc509b4a74a4ef83d6b (root) VirtualMap state / spread-broom-join-blast | |||||||||
| node3 | 59.552s | 2025-10-19 05:46:01.765 | 1155 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+45+18.434400858Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 59.553s | 2025-10-19 05:46:01.766 | 1156 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 64 File: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+45+18.434400858Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 59.553s | 2025-10-19 05:46:01.766 | 1157 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 59.556s | 2025-10-19 05:46:01.769 | 1158 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 59.557s | 2025-10-19 05:46:01.770 | 1159 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 92 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/92 {"round":92,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/92/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 59.578s | 2025-10-19 05:46:01.791 | 1112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 92 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/92 | |
| node1 | 59.579s | 2025-10-19 05:46:01.792 | 1113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 92 | |
| node0 | 59.592s | 2025-10-19 05:46:01.805 | 1130 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 92 | |
| node0 | 59.595s | 2025-10-19 05:46:01.808 | 1131 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 92 Timestamp: 2025-10-19T05:46:00.477766Z Next consensus number: 3285 Legacy running event hash: 1c138d74fd75933d5f550c2f8af7d27550a8428fefac95c42aa2170bd6f153dbf55abd6a57a9fb18d29203dab5b302f1 Legacy running event mnemonic: list-face-fabric-observe Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1254291272 Root hash: d168871a73c118bb8d4697134c5070246ede481845d164435577d1f33257f72f4687743a0f0c5bc509b4a74a4ef83d6b (root) VirtualMap state / spread-broom-join-blast | |||||||||
| node2 | 59.598s | 2025-10-19 05:46:01.811 | 1113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 92 | |
| node2 | 59.600s | 2025-10-19 05:46:01.813 | 1114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 92 Timestamp: 2025-10-19T05:46:00.477766Z Next consensus number: 3285 Legacy running event hash: 1c138d74fd75933d5f550c2f8af7d27550a8428fefac95c42aa2170bd6f153dbf55abd6a57a9fb18d29203dab5b302f1 Legacy running event mnemonic: list-face-fabric-observe Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1254291272 Root hash: d168871a73c118bb8d4697134c5070246ede481845d164435577d1f33257f72f4687743a0f0c5bc509b4a74a4ef83d6b (root) VirtualMap state / spread-broom-join-blast | |||||||||
| node4 | 59.600s | 2025-10-19 05:46:01.813 | 1102 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 92 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/92 | |
| node4 | 59.601s | 2025-10-19 05:46:01.814 | 1103 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 92 | |
| node0 | 59.605s | 2025-10-19 05:46:01.818 | 1132 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+45+18.443550920Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 59.606s | 2025-10-19 05:46:01.819 | 1133 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 64 File: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+45+18.443550920Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 59.606s | 2025-10-19 05:46:01.819 | 1134 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 59.609s | 2025-10-19 05:46:01.822 | 1143 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 59.609s | 2025-10-19 05:46:01.822 | 1115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+45+18.277572870Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 59.609s | 2025-10-19 05:46:01.822 | 1116 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 64 File: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+45+18.277572870Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 59.609s | 2025-10-19 05:46:01.822 | 1117 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 59.610s | 2025-10-19 05:46:01.823 | 1144 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 92 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/92 {"round":92,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/92/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 59.612s | 2025-10-19 05:46:01.825 | 1118 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 59.613s | 2025-10-19 05:46:01.826 | 1119 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 92 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/92 {"round":92,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/92/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 59.677s | 2025-10-19 05:46:01.890 | 1144 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 92 | |
| node1 | 59.680s | 2025-10-19 05:46:01.893 | 1145 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 92 Timestamp: 2025-10-19T05:46:00.477766Z Next consensus number: 3285 Legacy running event hash: 1c138d74fd75933d5f550c2f8af7d27550a8428fefac95c42aa2170bd6f153dbf55abd6a57a9fb18d29203dab5b302f1 Legacy running event mnemonic: list-face-fabric-observe Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1254291272 Root hash: d168871a73c118bb8d4697134c5070246ede481845d164435577d1f33257f72f4687743a0f0c5bc509b4a74a4ef83d6b (root) VirtualMap state / spread-broom-join-blast | |||||||||
| node4 | 59.685s | 2025-10-19 05:46:01.898 | 1134 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 92 | |
| node4 | 59.687s | 2025-10-19 05:46:01.900 | 1135 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 92 Timestamp: 2025-10-19T05:46:00.477766Z Next consensus number: 3285 Legacy running event hash: 1c138d74fd75933d5f550c2f8af7d27550a8428fefac95c42aa2170bd6f153dbf55abd6a57a9fb18d29203dab5b302f1 Legacy running event mnemonic: list-face-fabric-observe Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1254291272 Root hash: d168871a73c118bb8d4697134c5070246ede481845d164435577d1f33257f72f4687743a0f0c5bc509b4a74a4ef83d6b (root) VirtualMap state / spread-broom-join-blast | |||||||||
| node1 | 59.693s | 2025-10-19 05:46:01.906 | 1146 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+45+18.447974781Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 59.694s | 2025-10-19 05:46:01.907 | 1147 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 64 File: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+45+18.447974781Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 59.694s | 2025-10-19 05:46:01.907 | 1148 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 59.698s | 2025-10-19 05:46:01.911 | 1149 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 59.698s | 2025-10-19 05:46:01.911 | 1136 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+45+18.383543156Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 59.699s | 2025-10-19 05:46:01.912 | 1150 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 92 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/92 {"round":92,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/92/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 59.699s | 2025-10-19 05:46:01.912 | 1137 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 64 File: data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+45+18.383543156Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 59.699s | 2025-10-19 05:46:01.912 | 1138 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 59.703s | 2025-10-19 05:46:01.916 | 1139 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 59.704s | 2025-10-19 05:46:01.917 | 1140 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 92 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/92 {"round":92,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/92/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
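The `Finished writing state ...` rows above (and the matching rows for later rounds below) end with a machine-readable `StateSavedToDiskPayload` JSON object (`round`, `freezeState`, `reason`, `directory`) appended to the human-readable message. A minimal sketch for pulling those payloads out of an export in this pipe-delimited layout (node | elapsed | timestamp | seq | level | marker | thread | class | message) might look like the following; the column split and the input file name are assumptions about this particular export, not part of the platform's own tooling.

```python
# Sketch only: assumes the pipe-delimited export layout shown in this log table,
# not the raw swirlds.log format. The file name is hypothetical.
import json
import re

PAYLOAD_RE = re.compile(
    r"(\{.*\})\s*\[com\.swirlds\.logging\.legacy\.payload\.StateSavedToDiskPayload\]"
)

def saved_state_payloads(lines):
    """Yield (node, payload_dict) for every 'Finished writing state' entry."""
    for line in lines:
        cells = [c.strip() for c in line.split("|")]
        # cells[1]=node, cells[3]=timestamp, cells[6]=log marker, cells[9]=message
        if len(cells) < 10 or cells[6] != "STATE_TO_DISK":
            continue
        m = PAYLOAD_RE.search(cells[9])
        if m:
            yield cells[1], json.loads(m.group(1))

if __name__ == "__main__":
    with open("consistency-test.log", encoding="utf-8") as fh:  # hypothetical export file
        for node, payload in saved_state_payloads(fh):
            print(node, payload["round"], payload["reason"], payload["directory"])
```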
| node2 | 1m 58.816s | 2025-10-19 05:47:01.029 | 2577 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 225 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 1m 58.863s | 2025-10-19 05:47:01.076 | 2636 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 225 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 1m 58.899s | 2025-10-19 05:47:01.112 | 2592 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 225 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 1m 58.908s | 2025-10-19 05:47:01.121 | 2617 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 225 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 1m 58.913s | 2025-10-19 05:47:01.126 | 2608 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 225 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 1m 59.054s | 2025-10-19 05:47:01.267 | 2620 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 225 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/225 | |
| node3 | 1m 59.055s | 2025-10-19 05:47:01.268 | 2621 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 225 | |
| node1 | 1m 59.066s | 2025-10-19 05:47:01.279 | 2639 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 225 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/225 | |
| node1 | 1m 59.067s | 2025-10-19 05:47:01.280 | 2640 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 225 | |
| node2 | 1m 59.126s | 2025-10-19 05:47:01.339 | 2580 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 225 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/225 | |
| node2 | 1m 59.127s | 2025-10-19 05:47:01.340 | 2581 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 225 | |
| node0 | 1m 59.137s | 2025-10-19 05:47:01.350 | 2611 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 225 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/225 | |
| node0 | 1m 59.137s | 2025-10-19 05:47:01.350 | 2612 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 225 | |
| node3 | 1m 59.142s | 2025-10-19 05:47:01.355 | 2660 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 225 | |
| node3 | 1m 59.144s | 2025-10-19 05:47:01.357 | 2661 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 225 Timestamp: 2025-10-19T05:47:00.089395941Z Next consensus number: 8057 Legacy running event hash: ddd98818de8cc6511caf206f368a509f5d248393fcab2fd67a0523c353764304a189c81149d05709ee60434493796ee8 Legacy running event mnemonic: eager-wonder-company-curious Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1934861529 Root hash: 02c79215b7e06520a2263a7a25396b3b9010e7a14d022d8056b65fafb0d0c7ae73cf56769be441bfd350feb2d617d80e (root) VirtualMap state / flight-runway-advice-piano | |||||||||
| node4 | 1m 59.145s | 2025-10-19 05:47:01.358 | 2595 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 225 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/225 | |
| node4 | 1m 59.146s | 2025-10-19 05:47:01.359 | 2596 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 225 | |
| node1 | 1m 59.152s | 2025-10-19 05:47:01.365 | 2679 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 225 | |
| node3 | 1m 59.152s | 2025-10-19 05:47:01.365 | 2662 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+45+18.434400858Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 1m 59.152s | 2025-10-19 05:47:01.365 | 2663 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 198 File: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+45+18.434400858Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 1m 59.152s | 2025-10-19 05:47:01.365 | 2664 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 1m 59.155s | 2025-10-19 05:47:01.368 | 2680 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 225 Timestamp: 2025-10-19T05:47:00.089395941Z Next consensus number: 8057 Legacy running event hash: ddd98818de8cc6511caf206f368a509f5d248393fcab2fd67a0523c353764304a189c81149d05709ee60434493796ee8 Legacy running event mnemonic: eager-wonder-company-curious Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1934861529 Root hash: 02c79215b7e06520a2263a7a25396b3b9010e7a14d022d8056b65fafb0d0c7ae73cf56769be441bfd350feb2d617d80e (root) VirtualMap state / flight-runway-advice-piano | |||||||||
| node3 | 1m 59.158s | 2025-10-19 05:47:01.371 | 2665 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 1m 59.158s | 2025-10-19 05:47:01.371 | 2666 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 225 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/225 {"round":225,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/225/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 1m 59.163s | 2025-10-19 05:47:01.376 | 2681 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+45+18.447974781Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 1m 59.163s | 2025-10-19 05:47:01.376 | 2682 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 198 File: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+45+18.447974781Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 1m 59.163s | 2025-10-19 05:47:01.376 | 2683 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 1m 59.169s | 2025-10-19 05:47:01.382 | 2684 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 1m 59.169s | 2025-10-19 05:47:01.382 | 2685 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 225 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/225 {"round":225,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/225/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 1m 59.212s | 2025-10-19 05:47:01.425 | 2612 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 225 | |
| node2 | 1m 59.213s | 2025-10-19 05:47:01.426 | 2613 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 225 Timestamp: 2025-10-19T05:47:00.089395941Z Next consensus number: 8057 Legacy running event hash: ddd98818de8cc6511caf206f368a509f5d248393fcab2fd67a0523c353764304a189c81149d05709ee60434493796ee8 Legacy running event mnemonic: eager-wonder-company-curious Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1934861529 Root hash: 02c79215b7e06520a2263a7a25396b3b9010e7a14d022d8056b65fafb0d0c7ae73cf56769be441bfd350feb2d617d80e (root) VirtualMap state / flight-runway-advice-piano | |||||||||
| node0 | 1m 59.215s | 2025-10-19 05:47:01.428 | 2651 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 225 | |
| node0 | 1m 59.217s | 2025-10-19 05:47:01.430 | 2652 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 225 Timestamp: 2025-10-19T05:47:00.089395941Z Next consensus number: 8057 Legacy running event hash: ddd98818de8cc6511caf206f368a509f5d248393fcab2fd67a0523c353764304a189c81149d05709ee60434493796ee8 Legacy running event mnemonic: eager-wonder-company-curious Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1934861529 Root hash: 02c79215b7e06520a2263a7a25396b3b9010e7a14d022d8056b65fafb0d0c7ae73cf56769be441bfd350feb2d617d80e (root) VirtualMap state / flight-runway-advice-piano | |||||||||
| node2 | 1m 59.222s | 2025-10-19 05:47:01.435 | 2614 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+45+18.277572870Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 1m 59.223s | 2025-10-19 05:47:01.436 | 2615 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 198 File: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+45+18.277572870Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 1m 59.223s | 2025-10-19 05:47:01.436 | 2616 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 1m 59.226s | 2025-10-19 05:47:01.439 | 2653 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+45+18.443550920Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 1m 59.227s | 2025-10-19 05:47:01.440 | 2654 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 198 File: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+45+18.443550920Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 1m 59.227s | 2025-10-19 05:47:01.440 | 2655 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 1m 59.229s | 2025-10-19 05:47:01.442 | 2617 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 1m 59.229s | 2025-10-19 05:47:01.442 | 2618 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 225 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/225 {"round":225,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/225/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 1m 59.230s | 2025-10-19 05:47:01.443 | 2635 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 225 | |
| node4 | 1m 59.232s | 2025-10-19 05:47:01.445 | 2636 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 225 Timestamp: 2025-10-19T05:47:00.089395941Z Next consensus number: 8057 Legacy running event hash: ddd98818de8cc6511caf206f368a509f5d248393fcab2fd67a0523c353764304a189c81149d05709ee60434493796ee8 Legacy running event mnemonic: eager-wonder-company-curious Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1934861529 Root hash: 02c79215b7e06520a2263a7a25396b3b9010e7a14d022d8056b65fafb0d0c7ae73cf56769be441bfd350feb2d617d80e (root) VirtualMap state / flight-runway-advice-piano | |||||||||
| node0 | 1m 59.233s | 2025-10-19 05:47:01.446 | 2656 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 1m 59.234s | 2025-10-19 05:47:01.447 | 2657 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 225 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/225 {"round":225,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/225/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 1m 59.241s | 2025-10-19 05:47:01.454 | 2637 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+45+18.383543156Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 1m 59.241s | 2025-10-19 05:47:01.454 | 2638 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 198 File: data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+45+18.383543156Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 1m 59.241s | 2025-10-19 05:47:01.454 | 2639 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 1m 59.247s | 2025-10-19 05:47:01.460 | 2640 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 1m 59.248s | 2025-10-19 05:47:01.461 | 2641 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 225 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/225 {"round":225,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/225/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
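For a consistency run it is worth confirming that every node reports the same `Root hash` (and root mnemonic) in its `Information for state written to disk` block for a given round, as all five nodes do here for round 225. Below is a sketch under the same pipe-delimited layout assumption as above; it pairs each `Information for state written to disk:` row with the detail row that follows it, then flags any round where the nodes disagree.

```python
# Sketch only: same export-layout assumption as the previous snippet.
import re
from collections import defaultdict

ROUND_RE = re.compile(r"Round:\s*(\d+)")
ROOT_RE = re.compile(r"Root hash:\s*([0-9a-f]+)")

def root_hashes_by_round(lines):
    """Map round -> {node: root_hash} from 'Information for state written to disk' blocks."""
    hashes = defaultdict(dict)
    pending_node = None
    for line in lines:
        cells = [c.strip() for c in line.split("|")]
        if len(cells) > 9 and cells[9] == "Information for state written to disk:":
            pending_node = cells[1]          # the detail follows on the next row
            continue
        if pending_node is not None:
            rnd, root = ROUND_RE.search(line), ROOT_RE.search(line)
            if rnd and root:
                hashes[int(rnd.group(1))][pending_node] = root.group(1)
            pending_node = None
    return hashes

def diverging_rounds(hashes):
    """Rounds where not all reporting nodes wrote the same root hash."""
    return {r: nodes for r, nodes in hashes.items() if len(set(nodes.values())) > 1}
```

An empty result from `diverging_rounds` means every node that saved a given round agrees on its root hash, which is the expected outcome for this test.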
| node2 | 2m 59.467s | 2025-10-19 05:48:01.680 | 4042 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 356 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 2m 59.564s | 2025-10-19 05:48:01.777 | 4027 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 356 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 2m 59.613s | 2025-10-19 05:48:01.826 | 4091 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 356 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 2m 59.674s | 2025-10-19 05:48:01.887 | 4105 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 356 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 2m 59.753s | 2025-10-19 05:48:01.966 | 4094 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 356 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 2m 59.830s | 2025-10-19 05:48:02.043 | 4040 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 356 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/356 | |
| node4 | 2m 59.831s | 2025-10-19 05:48:02.044 | 4041 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 356 | |
| node3 | 2m 59.839s | 2025-10-19 05:48:02.052 | 4097 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 356 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/356 | |
| node3 | 2m 59.840s | 2025-10-19 05:48:02.053 | 4098 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 356 | |
| node2 | 2m 59.890s | 2025-10-19 05:48:02.103 | 4045 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 356 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/356 | |
| node2 | 2m 59.891s | 2025-10-19 05:48:02.104 | 4046 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 356 | |
| node1 | 2m 59.892s | 2025-10-19 05:48:02.105 | 4108 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 356 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/356 | |
| node1 | 2m 59.893s | 2025-10-19 05:48:02.106 | 4109 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 356 | |
| node4 | 2m 59.919s | 2025-10-19 05:48:02.132 | 4072 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 356 | |
| node4 | 2m 59.922s | 2025-10-19 05:48:02.135 | 4073 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 356 Timestamp: 2025-10-19T05:48:00.306310Z Next consensus number: 12868 Legacy running event hash: 4c00330340e18e912d507ced0fca18d29bd5de8300b9d755f27a123cbaf23576b6eb60fb95f1911dc7f81b80f452ee7d Legacy running event mnemonic: length-gather-bring-combine Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 2071327893 Root hash: f50a13ba522c77ab657d77b102e72d8925e0118d3c3d0e1263f633ff6cc2ec3edcae75235acdf70e4ca974515519daf9 (root) VirtualMap state / gain-jar-easily-voice | |||||||||
| node3 | 2m 59.927s | 2025-10-19 05:48:02.140 | 4137 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 356 | |
| node3 | 2m 59.929s | 2025-10-19 05:48:02.142 | 4138 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 356 Timestamp: 2025-10-19T05:48:00.306310Z Next consensus number: 12868 Legacy running event hash: 4c00330340e18e912d507ced0fca18d29bd5de8300b9d755f27a123cbaf23576b6eb60fb95f1911dc7f81b80f452ee7d Legacy running event mnemonic: length-gather-bring-combine Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 2071327893 Root hash: f50a13ba522c77ab657d77b102e72d8925e0118d3c3d0e1263f633ff6cc2ec3edcae75235acdf70e4ca974515519daf9 (root) VirtualMap state / gain-jar-easily-voice | |||||||||
| node4 | 2m 59.930s | 2025-10-19 05:48:02.143 | 4074 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+45+18.383543156Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 2m 59.930s | 2025-10-19 05:48:02.143 | 4075 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 328 File: data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+45+18.383543156Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 2m 59.930s | 2025-10-19 05:48:02.143 | 4076 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 2m 59.936s | 2025-10-19 05:48:02.149 | 4139 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+45+18.434400858Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 2m 59.936s | 2025-10-19 05:48:02.149 | 4140 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 328 File: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+45+18.434400858Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 2m 59.936s | 2025-10-19 05:48:02.149 | 4141 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 2m 59.939s | 2025-10-19 05:48:02.152 | 4077 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 2m 59.940s | 2025-10-19 05:48:02.153 | 4078 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 356 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/356 {"round":356,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/356/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 2m 59.946s | 2025-10-19 05:48:02.159 | 4142 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 2m 59.946s | 2025-10-19 05:48:02.159 | 4143 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 356 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/356 {"round":356,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/356/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 2m 59.962s | 2025-10-19 05:48:02.175 | 4094 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 356 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/356 | |
| node0 | 2m 59.963s | 2025-10-19 05:48:02.176 | 4095 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 356 | |
| node2 | 2m 59.970s | 2025-10-19 05:48:02.183 | 4090 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 356 | |
| node2 | 2m 59.972s | 2025-10-19 05:48:02.185 | 4091 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 356 Timestamp: 2025-10-19T05:48:00.306310Z Next consensus number: 12868 Legacy running event hash: 4c00330340e18e912d507ced0fca18d29bd5de8300b9d755f27a123cbaf23576b6eb60fb95f1911dc7f81b80f452ee7d Legacy running event mnemonic: length-gather-bring-combine Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 2071327893 Root hash: f50a13ba522c77ab657d77b102e72d8925e0118d3c3d0e1263f633ff6cc2ec3edcae75235acdf70e4ca974515519daf9 (root) VirtualMap state / gain-jar-easily-voice | |||||||||
| node2 | 2m 59.979s | 2025-10-19 05:48:02.192 | 4092 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+45+18.277572870Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 2m 59.979s | 2025-10-19 05:48:02.192 | 4093 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 328 File: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+45+18.277572870Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 2m 59.979s | 2025-10-19 05:48:02.192 | 4094 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 2m 59.980s | 2025-10-19 05:48:02.193 | 4148 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 356 | |
| node1 | 2m 59.983s | 2025-10-19 05:48:02.196 | 4149 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 356 Timestamp: 2025-10-19T05:48:00.306310Z Next consensus number: 12868 Legacy running event hash: 4c00330340e18e912d507ced0fca18d29bd5de8300b9d755f27a123cbaf23576b6eb60fb95f1911dc7f81b80f452ee7d Legacy running event mnemonic: length-gather-bring-combine Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 2071327893 Root hash: f50a13ba522c77ab657d77b102e72d8925e0118d3c3d0e1263f633ff6cc2ec3edcae75235acdf70e4ca974515519daf9 (root) VirtualMap state / gain-jar-easily-voice | |||||||||
| node2 | 2m 59.988s | 2025-10-19 05:48:02.201 | 4095 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 2m 59.989s | 2025-10-19 05:48:02.202 | 4096 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 356 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/356 {"round":356,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/356/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 2m 59.993s | 2025-10-19 05:48:02.206 | 4150 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+45+18.447974781Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 2m 59.994s | 2025-10-19 05:48:02.207 | 4151 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 328 File: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+45+18.447974781Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 2m 59.994s | 2025-10-19 05:48:02.207 | 4152 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 3.000m | 2025-10-19 05:48:02.216 | 4153 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 3.000m | 2025-10-19 05:48:02.217 | 4154 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 356 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/356 {"round":356,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/356/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 3.001m | 2025-10-19 05:48:02.249 | 4134 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 356 | |
| node0 | 3.001m | 2025-10-19 05:48:02.251 | 4135 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 356 Timestamp: 2025-10-19T05:48:00.306310Z Next consensus number: 12868 Legacy running event hash: 4c00330340e18e912d507ced0fca18d29bd5de8300b9d755f27a123cbaf23576b6eb60fb95f1911dc7f81b80f452ee7d Legacy running event mnemonic: length-gather-bring-combine Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 2071327893 Root hash: f50a13ba522c77ab657d77b102e72d8925e0118d3c3d0e1263f633ff6cc2ec3edcae75235acdf70e4ca974515519daf9 (root) VirtualMap state / gain-jar-easily-voice | |||||||||
| node0 | 3.001m | 2025-10-19 05:48:02.257 | 4136 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+45+18.443550920Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 3.001m | 2025-10-19 05:48:02.257 | 4137 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 328 File: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+45+18.443550920Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 3.001m | 2025-10-19 05:48:02.257 | 4138 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 3.001m | 2025-10-19 05:48:02.268 | 4142 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 3.001m | 2025-10-19 05:48:02.269 | 4143 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 356 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/356 {"round":356,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/356/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
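The periodic snapshots in this excerpt land roughly once a minute (rounds 92, 225, 356 and, below, 492 at 05:46, 05:47, 05:48 and 05:49), and within each round every node starts writing within a couple hundred milliseconds of the others. A rough way to measure that per-round spread from the same export, again assuming the pipe-delimited layout:

```python
# Sketch only: same export-layout assumption as the snippets above.
import re
from collections import defaultdict
from datetime import datetime

STARTED_RE = re.compile(r"Started writing round (\d+) state to disk")

def snapshot_starts(lines):
    """Map round -> {node: timestamp} from 'Started writing round N state to disk' rows."""
    starts = defaultdict(dict)
    for line in lines:
        cells = [c.strip() for c in line.split("|")]
        if len(cells) < 10:
            continue
        m = STARTED_RE.search(cells[9])
        if m:
            ts = datetime.strptime(cells[3], "%Y-%m-%d %H:%M:%S.%f")
            starts[int(m.group(1))][cells[1]] = ts
    return starts

def per_round_spread_ms(starts):
    """Milliseconds between the first and last node to start writing each round."""
    return {
        rnd: (max(ts.values()) - min(ts.values())).total_seconds() * 1000
        for rnd, ts in starts.items()
    }
```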
| node1 | 3m 11.179s | 2025-10-19 05:48:13.392 | 4420 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 1 to 4>> | NetworkUtils: | Connection broken: 1 -> 4 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-19T05:48:13.389752832Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:384) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 
8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:428) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node2 | 3m 11.181s | 2025-10-19 05:48:13.394 | 4357 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 2 to 4>> | NetworkUtils: | Connection broken: 2 -> 4 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-19T05:48:13.389040048Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:384) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 
8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:428) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node0 | 3m 11.182s | 2025-10-19 05:48:13.395 | 4392 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 0 to 4>> | NetworkUtils: | Connection broken: 0 -> 4 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-19T05:48:13.390010585Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:384) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 
8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:428) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node3 | 3m 11.184s | 2025-10-19 05:48:13.397 | 4403 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 3 to 4>> | NetworkUtils: | Connection broken: 3 -> 4 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-19T05:48:13.393562240Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:384) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 
8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:428) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
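At 05:48:13 nodes 0, 1, 2 and 3 each log a `SOCKET_EXCEPTIONS` warning for `Connection broken: X -> 4` within a few milliseconds of one another, with `java.net.SocketException: Connection reset` as the root cause. That burst is consistent with node 4's endpoint going away at that moment (for example, the node being stopped or restarted as part of the test), but this excerpt does not show node 4's side, so treat that reading as an inference rather than something the log states. A sketch for collecting these warnings from the same export:

```python
# Sketch only: same export-layout assumption as the snippets above.
import re

BROKEN_RE = re.compile(r"Connection broken: (\d+) -> (\d+)")

def broken_connections(lines):
    """Yield (timestamp, from_node, to_node) for each 'Connection broken' warning."""
    for line in lines:
        cells = [c.strip() for c in line.split("|")]
        if len(cells) > 9 and cells[6] == "SOCKET_EXCEPTIONS":
            m = BROKEN_RE.search(cells[9])
            if m:
                yield cells[3], int(m.group(1)), int(m.group(2))
```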
| node1 | 3m 58.732s | 2025-10-19 05:49:00.945 | 5798 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 492 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 3m 58.736s | 2025-10-19 05:49:00.949 | 5571 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 492 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 3m 58.753s | 2025-10-19 05:49:00.966 | 5638 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 492 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 3m 58.812s | 2025-10-19 05:49:01.025 | 5617 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 492 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 3m 58.938s | 2025-10-19 05:49:01.151 | 5574 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 492 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/492 | |
| node2 | 3m 58.939s | 2025-10-19 05:49:01.152 | 5575 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 492 | |
| node1 | 3m 58.944s | 2025-10-19 05:49:01.157 | 5801 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 492 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/492 | |
| node1 | 3m 58.945s | 2025-10-19 05:49:01.158 | 5802 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 492 | |
| node3 | 3m 58.946s | 2025-10-19 05:49:01.159 | 5620 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 492 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/492 | |
| node3 | 3m 58.946s | 2025-10-19 05:49:01.159 | 5621 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 492 | |
| node0 | 3m 59.014s | 2025-10-19 05:49:01.227 | 5641 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 492 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/492 | |
| node0 | 3m 59.015s | 2025-10-19 05:49:01.228 | 5642 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 492 | |
| node2 | 3m 59.016s | 2025-10-19 05:49:01.229 | 5606 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 492 | |
| node2 | 3m 59.018s | 2025-10-19 05:49:01.231 | 5607 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 492 Timestamp: 2025-10-19T05:49:00.077770Z Next consensus number: 16499 Legacy running event hash: 955fdd3eda221a00efb1e9f504d785fa8d27705a77c6c2eb564ceb4ef349094d41f3b587179a345746df1508b5234825 Legacy running event mnemonic: feature-flame-rookie-divorce Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1084990293 Root hash: 9e78ad083e66affe4034fdd728d27fafefd95020dac36a94b71668b0bb13c695b202e4635d365e73802ab0e76475aa70 (root) VirtualMap state / country-gallery-smoke-able | |||||||||
| node2 | 3m 59.024s | 2025-10-19 05:49:01.237 | 5608 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+45+18.277572870Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 3m 59.024s | 2025-10-19 05:49:01.237 | 5609 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 465 File: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+45+18.277572870Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 3m 59.024s | 2025-10-19 05:49:01.237 | 5610 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 3m 59.027s | 2025-10-19 05:49:01.240 | 5833 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 492 | |
| node3 | 3m 59.027s | 2025-10-19 05:49:01.240 | 5652 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 492 | |
| node1 | 3m 59.029s | 2025-10-19 05:49:01.242 | 5834 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 492 Timestamp: 2025-10-19T05:49:00.077770Z Next consensus number: 16499 Legacy running event hash: 955fdd3eda221a00efb1e9f504d785fa8d27705a77c6c2eb564ceb4ef349094d41f3b587179a345746df1508b5234825 Legacy running event mnemonic: feature-flame-rookie-divorce Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1084990293 Root hash: 9e78ad083e66affe4034fdd728d27fafefd95020dac36a94b71668b0bb13c695b202e4635d365e73802ab0e76475aa70 (root) VirtualMap state / country-gallery-smoke-able | |||||||||
| node3 | 3m 59.029s | 2025-10-19 05:49:01.242 | 5653 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 492 Timestamp: 2025-10-19T05:49:00.077770Z Next consensus number: 16499 Legacy running event hash: 955fdd3eda221a00efb1e9f504d785fa8d27705a77c6c2eb564ceb4ef349094d41f3b587179a345746df1508b5234825 Legacy running event mnemonic: feature-flame-rookie-divorce Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1084990293 Root hash: 9e78ad083e66affe4034fdd728d27fafefd95020dac36a94b71668b0bb13c695b202e4635d365e73802ab0e76475aa70 (root) VirtualMap state / country-gallery-smoke-able | |||||||||
| node1 | 3m 59.036s | 2025-10-19 05:49:01.249 | 5835 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+45+18.447974781Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 3m 59.036s | 2025-10-19 05:49:01.249 | 5836 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 465 File: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+45+18.447974781Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 3m 59.036s | 2025-10-19 05:49:01.249 | 5837 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 3m 59.036s | 2025-10-19 05:49:01.249 | 5611 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 3m 59.036s | 2025-10-19 05:49:01.249 | 5612 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 492 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/492 {"round":492,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/492/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 3m 59.037s | 2025-10-19 05:49:01.250 | 5654 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+45+18.434400858Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 3m 59.037s | 2025-10-19 05:49:01.250 | 5655 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 465 File: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+45+18.434400858Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 3m 59.037s | 2025-10-19 05:49:01.250 | 5656 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 3m 59.047s | 2025-10-19 05:49:01.260 | 5838 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 3m 59.048s | 2025-10-19 05:49:01.261 | 5839 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 492 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/492 {"round":492,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/492/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 3m 59.049s | 2025-10-19 05:49:01.262 | 5657 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 3m 59.050s | 2025-10-19 05:49:01.263 | 5658 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 492 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/492 {"round":492,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/492/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 3m 59.096s | 2025-10-19 05:49:01.309 | 5673 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 492 | |
| node0 | 3m 59.098s | 2025-10-19 05:49:01.311 | 5674 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 492 Timestamp: 2025-10-19T05:49:00.077770Z Next consensus number: 16499 Legacy running event hash: 955fdd3eda221a00efb1e9f504d785fa8d27705a77c6c2eb564ceb4ef349094d41f3b587179a345746df1508b5234825 Legacy running event mnemonic: feature-flame-rookie-divorce Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1084990293 Root hash: 9e78ad083e66affe4034fdd728d27fafefd95020dac36a94b71668b0bb13c695b202e4635d365e73802ab0e76475aa70 (root) VirtualMap state / country-gallery-smoke-able | |||||||||
| node0 | 3m 59.105s | 2025-10-19 05:49:01.318 | 5675 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+45+18.443550920Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 3m 59.106s | 2025-10-19 05:49:01.319 | 5676 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 465 File: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+45+18.443550920Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 3m 59.106s | 2025-10-19 05:49:01.319 | 5677 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 3m 59.117s | 2025-10-19 05:49:01.330 | 5678 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 3m 59.118s | 2025-10-19 05:49:01.331 | 5679 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 492 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/492 {"round":492,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/492/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 4m 59.052s | 2025-10-19 05:50:01.265 | 7449 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 631 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 4m 59.203s | 2025-10-19 05:50:01.416 | 7188 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 631 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 4m 59.253s | 2025-10-19 05:50:01.466 | 7166 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 631 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 4m 59.314s | 2025-10-19 05:50:01.527 | 7339 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 631 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 4m 59.330s | 2025-10-19 05:50:01.543 | 7169 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 631 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/631 | |
| node2 | 4m 59.331s | 2025-10-19 05:50:01.544 | 7170 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 631 | |
| node0 | 4m 59.360s | 2025-10-19 05:50:01.573 | 7342 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 631 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/631 | |
| node0 | 4m 59.361s | 2025-10-19 05:50:01.574 | 7343 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 631 | |
| node2 | 4m 59.408s | 2025-10-19 05:50:01.621 | 7201 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 631 | |
| node2 | 4m 59.409s | 2025-10-19 05:50:01.622 | 7202 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 631 Timestamp: 2025-10-19T05:50:00.384989225Z Next consensus number: 19838 Legacy running event hash: 8bd37cb8b95d6f901c469a8398306f8d829ddf88e598bfca45405ce72bda950ead1663e2f338beaf1439a7e64428c21c Legacy running event mnemonic: mad-novel-stamp-velvet Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1300932515 Root hash: f18981a0b414ab890b296dc1b1af43f4d197a449ada6c884c29f8a61f112a10fa399657718c011db7f5246d16e0b140b (root) VirtualMap state / solve-slot-laugh-moment | |||||||||
| node2 | 4m 59.417s | 2025-10-19 05:50:01.630 | 7203 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+49+04.918183583Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+45+18.277572870Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 4m 59.417s | 2025-10-19 05:50:01.630 | 7204 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 604 File: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+49+04.918183583Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node2 | 4m 59.417s | 2025-10-19 05:50:01.630 | 7205 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 4m 59.419s | 2025-10-19 05:50:01.632 | 7206 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 4m 59.420s | 2025-10-19 05:50:01.633 | 7207 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 631 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/631 {"round":631,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/631/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 4m 59.421s | 2025-10-19 05:50:01.634 | 7208 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1 | |
| node0 | 4m 59.436s | 2025-10-19 05:50:01.649 | 7382 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 631 | |
| node0 | 4m 59.438s | 2025-10-19 05:50:01.651 | 7383 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 631 Timestamp: 2025-10-19T05:50:00.384989225Z Next consensus number: 19838 Legacy running event hash: 8bd37cb8b95d6f901c469a8398306f8d829ddf88e598bfca45405ce72bda950ead1663e2f338beaf1439a7e64428c21c Legacy running event mnemonic: mad-novel-stamp-velvet Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1300932515 Root hash: f18981a0b414ab890b296dc1b1af43f4d197a449ada6c884c29f8a61f112a10fa399657718c011db7f5246d16e0b140b (root) VirtualMap state / solve-slot-laugh-moment | |||||||||
| node0 | 4m 59.444s | 2025-10-19 05:50:01.657 | 7384 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+45+18.443550920Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+49+04.923731026Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 4m 59.444s | 2025-10-19 05:50:01.657 | 7385 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 604 File: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+49+04.923731026Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 4m 59.445s | 2025-10-19 05:50:01.658 | 7386 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 4m 59.447s | 2025-10-19 05:50:01.660 | 7387 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 4m 59.447s | 2025-10-19 05:50:01.660 | 7388 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 631 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/631 {"round":631,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/631/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 4m 59.449s | 2025-10-19 05:50:01.662 | 7389 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1 | |
| node1 | 4m 59.470s | 2025-10-19 05:50:01.683 | 7452 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 631 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/631 | |
| node1 | 4m 59.470s | 2025-10-19 05:50:01.683 | 7453 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 631 | |
| node3 | 4m 59.547s | 2025-10-19 05:50:01.760 | 7191 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 631 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/631 | |
| node3 | 4m 59.548s | 2025-10-19 05:50:01.761 | 7192 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 631 | |
| node1 | 4m 59.555s | 2025-10-19 05:50:01.768 | 7497 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 631 | |
| node1 | 4m 59.557s | 2025-10-19 05:50:01.770 | 7498 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 631 Timestamp: 2025-10-19T05:50:00.384989225Z Next consensus number: 19838 Legacy running event hash: 8bd37cb8b95d6f901c469a8398306f8d829ddf88e598bfca45405ce72bda950ead1663e2f338beaf1439a7e64428c21c Legacy running event mnemonic: mad-novel-stamp-velvet Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1300932515 Root hash: f18981a0b414ab890b296dc1b1af43f4d197a449ada6c884c29f8a61f112a10fa399657718c011db7f5246d16e0b140b (root) VirtualMap state / solve-slot-laugh-moment | |||||||||
| node1 | 4m 59.564s | 2025-10-19 05:50:01.777 | 7499 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+45+18.447974781Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+49+04.922298107Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 4m 59.564s | 2025-10-19 05:50:01.777 | 7500 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 604 File: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+49+04.922298107Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 4m 59.565s | 2025-10-19 05:50:01.778 | 7501 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 4m 59.567s | 2025-10-19 05:50:01.780 | 7502 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 4m 59.568s | 2025-10-19 05:50:01.781 | 7503 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 631 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/631 {"round":631,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/631/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 4m 59.569s | 2025-10-19 05:50:01.782 | 7504 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1 | |
| node3 | 4m 59.633s | 2025-10-19 05:50:01.846 | 7223 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 631 | |
| node3 | 4m 59.635s | 2025-10-19 05:50:01.848 | 7224 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 631 Timestamp: 2025-10-19T05:50:00.384989225Z Next consensus number: 19838 Legacy running event hash: 8bd37cb8b95d6f901c469a8398306f8d829ddf88e598bfca45405ce72bda950ead1663e2f338beaf1439a7e64428c21c Legacy running event mnemonic: mad-novel-stamp-velvet Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1300932515 Root hash: f18981a0b414ab890b296dc1b1af43f4d197a449ada6c884c29f8a61f112a10fa399657718c011db7f5246d16e0b140b (root) VirtualMap state / solve-slot-laugh-moment | |||||||||
| node3 | 4m 59.642s | 2025-10-19 05:50:01.855 | 7228 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+49+04.982801688Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+45+18.434400858Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 4m 59.642s | 2025-10-19 05:50:01.855 | 7229 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 604 File: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+49+04.982801688Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 4m 59.643s | 2025-10-19 05:50:01.856 | 7230 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 4m 59.645s | 2025-10-19 05:50:01.858 | 7231 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 4m 59.646s | 2025-10-19 05:50:01.859 | 7232 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 631 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/631 {"round":631,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/631/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 4m 59.647s | 2025-10-19 05:50:01.860 | 7233 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1 | |
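The `BestEffortPcesFileCopy` decisions above can be read straight off the file names: each PCES file encodes its round span as `_minr<min>_maxr<max>_`, and at round 631 (lower bound 604) only the `maxr5474` file is copied while the `maxr501` file is skipped. Below is a hypothetical Java sketch of that selection rule, assuming the criterion really is `maxr >= lowerBound`; the class and method names are illustrative, not the platform's.

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical illustration of the copy criterion implied by the log: parse the
// round span out of a PCES file name and keep files whose upper bound reaches
// the snapshot's lower bound. Not the platform's BestEffortPcesFileCopy code.
public class PcesCopyFilter {

    // e.g. 2025-10-19T05+49+04.918183583Z_seq1_minr474_maxr5474_orgn0.pces
    private static final Pattern SPAN = Pattern.compile("_minr(\\d+)_maxr(\\d+)_");

    static boolean shouldCopy(String fileName, long lowerBound) {
        Matcher m = SPAN.matcher(fileName);
        if (!m.find()) {
            return false;                      // not a recognizable PCES file name
        }
        long maxRound = Long.parseLong(m.group(2));
        return maxRound >= lowerBound;         // assumed rule, consistent with the log above
    }

    public static void main(String[] args) {
        long lowerBound = 604;                 // lower bound logged for the round-631 snapshot
        List<String> files = List.of(
                "2025-10-19T05+45+18.277572870Z_seq0_minr1_maxr501_orgn0.pces",
                "2025-10-19T05+49+04.918183583Z_seq1_minr474_maxr5474_orgn0.pces");
        files.forEach(f -> System.out.println(f + " -> copy=" + shouldCopy(f, lowerBound)));
    }
}
```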
| node4 | 5m 49.668s | 2025-10-19 05:50:51.881 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node4 | 5m 49.764s | 2025-10-19 05:50:51.977 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node4 | 5m 49.781s | 2025-10-19 05:50:51.994 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 5m 49.905s | 2025-10-19 05:50:52.118 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [4] are set to run locally | |
| node4 | 5m 49.940s | 2025-10-19 05:50:52.153 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node4 | 5m 51.399s | 2025-10-19 05:50:53.612 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1457ms | |
| node4 | 5m 51.408s | 2025-10-19 05:50:53.621 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node4 | 5m 51.412s | 2025-10-19 05:50:53.625 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 5m 51.450s | 2025-10-19 05:50:53.663 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node4 | 5m 51.515s | 2025-10-19 05:50:53.728 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node4 | 5m 51.516s | 2025-10-19 05:50:53.729 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node4 | 5m 53.575s | 2025-10-19 05:50:55.788 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node4 | 5m 53.676s | 2025-10-19 05:50:55.889 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5m 53.683s | 2025-10-19 05:50:55.896 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | The following saved states were found on disk: | |
| - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/356/SignedState.swh
  - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/225/SignedState.swh
  - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/92/SignedState.swh
  - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1/SignedState.swh | |||||||||
| node4 | 5m 53.684s | 2025-10-19 05:50:55.897 | 17 | INFO | STARTUP | <main> | StartupStateUtils: | Loading latest state from disk. | |
| node4 | 5m 53.684s | 2025-10-19 05:50:55.897 | 18 | INFO | STARTUP | <main> | StartupStateUtils: | Loading signed state from disk: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/356/SignedState.swh | |
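Node 4's startup above lists four saved states (rounds 356, 225, 92 and 1) and then loads the highest-numbered one. A small, hypothetical sketch of that "load the latest state" step follows, assuming states are laid out as `<savedDir>/<round>/SignedState.swh` as the paths above suggest; this is not the platform's `StartupStateUtils` code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.Optional;
import java.util.stream.Stream;

// Hypothetical sketch: the newest saved state is simply the SignedState.swh in
// the highest-numbered round directory under the node's saved-state folder.
public class LatestSavedState {

    static Optional<Path> latestSignedState(Path savedDir) throws IOException {
        try (Stream<Path> rounds = Files.list(savedDir)) {
            return rounds
                    .filter(Files::isDirectory)
                    .filter(p -> p.getFileName().toString().matches("\\d+"))   // round directories only
                    .max(Comparator.comparingLong((Path p) -> Long.parseLong(p.getFileName().toString())))
                    .map(p -> p.resolve("SignedState.swh"))
                    .filter(Files::exists);
        }
    }

    public static void main(String[] args) throws IOException {
        // e.g. .../data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123
        Path savedDir = Path.of(args.length > 0 ? args[0] : ".");
        System.out.println(latestSignedState(savedDir).orElse(null));
    }
}
```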
| node4 | 5m 53.694s | 2025-10-19 05:50:55.907 | 19 | INFO | STATE_TO_DISK | <main> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp | |
| node4 | 5m 53.808s | 2025-10-19 05:50:56.021 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 5m 54.640s | 2025-10-19 05:50:56.853 | 31 | INFO | STARTUP | <main> | StartupStateUtils: | Loaded state's hash is the same as when it was saved. | |
| node4 | 5m 54.645s | 2025-10-19 05:50:56.858 | 32 | INFO | STARTUP | <main> | StartupStateUtils: | Platform has loaded a saved state {"round":356,"consensusTimestamp":"2025-10-19T05:48:00.306310Z"} [com.swirlds.logging.legacy.payload.SavedStateLoadedPayload] | |
| node4 | 5m 54.649s | 2025-10-19 05:50:56.862 | 35 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5m 54.650s | 2025-10-19 05:50:56.863 | 38 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node4 | 5m 54.655s | 2025-10-19 05:50:56.868 | 39 | INFO | STARTUP | <main> | AddressBookInitializer: | Using the loaded state's address book and weight values. | |
| node4 | 5m 54.663s | 2025-10-19 05:50:56.876 | 40 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5m 54.665s | 2025-10-19 05:50:56.878 | 41 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5m 55.762s | 2025-10-19 05:50:57.975 | 42 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26030989]
  PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=229240, randomLong=1158819027672502351, exception=null]
  PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=12491, randomLong=-3714263284397225870, exception=null]
  PASSED - File System Check Report[code=SUCCESS, readNanos=1488789, data=35, exception=null]
  OS Health Check Report - Complete (took 1031 ms) | |||||||||
| node4 | 5m 55.803s | 2025-10-19 05:50:58.016 | 43 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node4 | 5m 55.936s | 2025-10-19 05:50:58.149 | 44 | INFO | STARTUP | <main> | PcesUtilities: | Span compaction completed for data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+45+18.383543156Z_seq0_minr1_maxr501_orgn0.pces, new upper bound is 381 | |
| node4 | 5m 55.940s | 2025-10-19 05:50:58.153 | 45 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node4 | 5m 55.944s | 2025-10-19 05:50:58.157 | 46 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node4 | 5m 56.051s | 2025-10-19 05:50:58.264 | 47 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Ih1M7w==", "port": 30124 }, { "ipAddressV4": "CoAAcQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IhtuGA==", "port": 30125 }, { "ipAddressV4": "CoAAag==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Ijk8wQ==", "port": 30126 }, { "ipAddressV4": "CoAAaw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IixI7A==", "port": 30127 }, { "ipAddressV4": "CoAAdQ==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "I+G/4Q==", "port": 30128 }, { "ipAddressV4": "CoAAbA==", "port": 30128 }] }] } | |||||||||
| node4 | 5m 56.085s | 2025-10-19 05:50:58.298 | 48 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | State initialized with state long 8229584949575041558. | |
| node4 | 5m 56.087s | 2025-10-19 05:50:58.300 | 49 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | State initialized with 356 rounds handled. | |
| node4 | 5m 56.087s | 2025-10-19 05:50:58.300 | 50 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv | |
| node4 | 5m 56.088s | 2025-10-19 05:50:58.301 | 51 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Log file found. Parsing previous history | |
| node4 | 5m 56.138s | 2025-10-19 05:50:58.351 | 52 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 356 Timestamp: 2025-10-19T05:48:00.306310Z Next consensus number: 12868 Legacy running event hash: 4c00330340e18e912d507ced0fca18d29bd5de8300b9d755f27a123cbaf23576b6eb60fb95f1911dc7f81b80f452ee7d Legacy running event mnemonic: length-gather-bring-combine Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 2071327893 Root hash: f50a13ba522c77ab657d77b102e72d8925e0118d3c3d0e1263f633ff6cc2ec3edcae75235acdf70e4ca974515519daf9 (root) VirtualMap state / gain-jar-easily-voice | |||||||||
| node4 | 5m 56.387s | 2025-10-19 05:50:58.600 | 54 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 4c00330340e18e912d507ced0fca18d29bd5de8300b9d755f27a123cbaf23576b6eb60fb95f1911dc7f81b80f452ee7d | |
| node4 | 5m 56.402s | 2025-10-19 05:50:58.615 | 55 | INFO | STARTUP | <platformForkJoinThread-4> | Shadowgraph: | Shadowgraph starting from expiration threshold 328 | |
| node4 | 5m 56.409s | 2025-10-19 05:50:58.622 | 57 | INFO | STARTUP | <<start-node-4>> | ConsistencyTestingToolMain: | init called in Main for node 4. | |
| node4 | 5m 56.410s | 2025-10-19 05:50:58.623 | 58 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | Starting platform 4 | |
| node4 | 5m 56.411s | 2025-10-19 05:50:58.624 | 59 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node4 | 5m 56.416s | 2025-10-19 05:50:58.629 | 60 | INFO | STARTUP | <<start-node-4>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node4 | 5m 56.417s | 2025-10-19 05:50:58.630 | 61 | INFO | STARTUP | <<start-node-4>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node4 | 5m 56.418s | 2025-10-19 05:50:58.631 | 62 | INFO | STARTUP | <<start-node-4>> | InputWireChecks: | All input wires have been bound. | |
| node4 | 5m 56.421s | 2025-10-19 05:50:58.634 | 63 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | replaying preconsensus event stream starting at 328 | |
| node4 | 5m 56.426s | 2025-10-19 05:50:58.639 | 64 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | StatusStateMachine: | Platform spent 207.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node4 | 5m 56.696s | 2025-10-19 05:50:58.909 | 65 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:3 H:f638e1ab5049 BR:354), num remaining: 3 | |
| node4 | 5m 56.697s | 2025-10-19 05:50:58.910 | 66 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:1 H:8d81e5d710b6 BR:354), num remaining: 2 | |
| node4 | 5m 56.697s | 2025-10-19 05:50:58.910 | 67 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:0 H:ab752cdc622f BR:354), num remaining: 1 | |
| node4 | 5m 56.698s | 2025-10-19 05:50:58.911 | 68 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:2 H:40a189ddefb5 BR:354), num remaining: 0 | |
| node4 | 5m 56.901s | 2025-10-19 05:50:59.114 | 216 | INFO | STARTUP | <<start-node-4>> | PcesReplayer: | Replayed 1,962 preconsensus events with max birth round 381. These events contained 2,723 transactions. 24 rounds reached consensus spanning 11.1 seconds of consensus time. The latest round to reach consensus is round 380. Replay took 480.0 milliseconds. | |
| node4 | 5m 56.905s | 2025-10-19 05:50:59.118 | 217 | INFO | STARTUP | <<app: appMain 4>> | ConsistencyTestingToolMain: | run called in Main. | |
| node4 | 5m 56.906s | 2025-10-19 05:50:59.119 | 218 | INFO | PLATFORM_STATUS | <platformForkJoinThread-5> | StatusStateMachine: | Platform spent 478.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node0 | 5m 57.805s | 2025-10-19 05:51:00.018 | 8895 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 0 to 4>> | RpcPeerHandler: | OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=766,ancientThreshold=739,expiredThreshold=665] remote ev=EventWindow[latestConsensusRound=380,ancientThreshold=353,expiredThreshold=328] | |
| node1 | 5m 57.806s | 2025-10-19 05:51:00.019 | 9117 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | RpcPeerHandler: | OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=766,ancientThreshold=739,expiredThreshold=665] remote ev=EventWindow[latestConsensusRound=380,ancientThreshold=353,expiredThreshold=328] | |
| node3 | 5m 57.806s | 2025-10-19 05:51:00.019 | 8720 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 3 to 4>> | RpcPeerHandler: | OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=766,ancientThreshold=739,expiredThreshold=665] remote ev=EventWindow[latestConsensusRound=380,ancientThreshold=353,expiredThreshold=328] | |
| node2 | 5m 57.832s | 2025-10-19 05:51:00.045 | 8746 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | RpcPeerHandler: | OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=766,ancientThreshold=739,expiredThreshold=665] remote ev=EventWindow[latestConsensusRound=380,ancientThreshold=353,expiredThreshold=328] | |
| node4 | 5m 57.877s | 2025-10-19 05:51:00.090 | 290 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | StatusStateMachine: | Platform spent 969.0 ms in OBSERVING. Now in BEHIND | |
| node4 | 5m 57.877s | 2025-10-19 05:51:00.090 | 289 | INFO | RECONNECT | <<platform-core: SyncProtocolWith0 4 to 0>> | RpcPeerHandler: | SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=380,ancientThreshold=353,expiredThreshold=328] remote ev=EventWindow[latestConsensusRound=766,ancientThreshold=739,expiredThreshold=665] | |
| node4 | 5m 57.877s | 2025-10-19 05:51:00.090 | 291 | INFO | RECONNECT | <<platform-core: SyncProtocolWith3 4 to 3>> | RpcPeerHandler: | SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=380,ancientThreshold=353,expiredThreshold=328] remote ev=EventWindow[latestConsensusRound=766,ancientThreshold=739,expiredThreshold=665] | |
| node4 | 5m 57.877s | 2025-10-19 05:51:00.090 | 292 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | RpcPeerHandler: | SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=380,ancientThreshold=353,expiredThreshold=328] remote ev=EventWindow[latestConsensusRound=766,ancientThreshold=739,expiredThreshold=665] | |
| node4 | 5m 57.878s | 2025-10-19 05:51:00.091 | 293 | INFO | RECONNECT | <platformForkJoinThread-1> | ReconnectController: | Starting ReconnectController | |
| node4 | 5m 57.879s | 2025-10-19 05:51:00.092 | 294 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectPlatformHelperImpl: | Preparing for reconnect, stopping gossip | |
| node4 | 5m 57.901s | 2025-10-19 05:51:00.114 | 295 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | RpcPeerHandler: | SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=380,ancientThreshold=353,expiredThreshold=328] remote ev=EventWindow[latestConsensusRound=766,ancientThreshold=739,expiredThreshold=665] | |
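The RECONNECT lines above show both sides reaching the same verdict from the exchanged event windows: node 4's latest consensus round (380) is far below its peers' expired threshold (665), so node 4 is the one that has fallen behind and must reconnect. The Java sketch below mirrors that comparison; the rule used here (latest consensus round older than the peer's expired threshold) is an assumption that matches the logged numbers, not the platform's actual `RpcPeerHandler` logic.

```java
// Hypothetical sketch of the fallen-behind comparison implied by the RECONNECT
// lines above. The record mirrors the fields printed in the log; the threshold
// rule is assumed, not taken from the platform's code.
public class FallenBehindCheck {

    record EventWindow(long latestConsensusRound, long ancientThreshold, long expiredThreshold) {}

    enum Outcome { NONE, SELF_FALLEN_BEHIND, OTHER_FALLEN_BEHIND }

    static Outcome compare(EventWindow self, EventWindow remote) {
        if (self.latestConsensusRound() < remote.expiredThreshold()) {
            return Outcome.SELF_FALLEN_BEHIND;   // everything we could gossip is already expired for the peer
        }
        if (remote.latestConsensusRound() < self.expiredThreshold()) {
            return Outcome.OTHER_FALLEN_BEHIND;  // the peer needs a reconnect, not a sync
        }
        return Outcome.NONE;
    }

    public static void main(String[] args) {
        EventWindow node4 = new EventWindow(380, 353, 328);   // values from the log above
        EventWindow peers = new EventWindow(766, 739, 665);
        System.out.println("node4 vs peer: " + compare(node4, peers));   // SELF_FALLEN_BEHIND
        System.out.println("peer vs node4: " + compare(peers, node4));   // OTHER_FALLEN_BEHIND
    }
}
```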
| node4 | 5m 58.031s | 2025-10-19 05:51:00.244 | 296 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectPlatformHelperImpl: | Preparing for reconnect, start clearing queues | |
| node4 | 5m 58.032s | 2025-10-19 05:51:00.245 | 297 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectPlatformHelperImpl: | Queues have been cleared | |
| node4 | 5m 58.034s | 2025-10-19 05:51:00.247 | 298 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectController: | waiting for reconnect connection | |
| node4 | 5m 58.034s | 2025-10-19 05:51:00.247 | 299 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectController: | acquired reconnect connection | |
| node2 | 5m 58.127s | 2025-10-19 05:51:00.340 | 8758 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectTeacher: | Starting reconnect in the role of the sender {"receiving":false,"nodeId":2,"otherNodeId":4,"round":766} [com.swirlds.logging.legacy.payload.ReconnectStartPayload] | |
| node2 | 5m 58.128s | 2025-10-19 05:51:00.341 | 8759 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectTeacher: | The following state will be sent to the learner: | |
| Round: 766 Timestamp: 2025-10-19T05:50:58.927746Z Next consensus number: 23075 Legacy running event hash: d796a0df7e154b1432c6e1097d37aff054c7c04e7fe89648b83f72561dc18f646bd072eb07fbef7d25fd5bfa6431518d Legacy running event mnemonic: ask-tomorrow-section-glad Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1617987221 Root hash: 6ca1caf019762757ef0b646b8ffe46aca6e4d24901f303cbaa90ed341f45443fc79a8a7c276ba81dfe6888b04e9b118f (root) VirtualMap state / neither-pear-conduct-turtle | |||||||||
| node2 | 5m 58.128s | 2025-10-19 05:51:00.341 | 8760 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectTeacher: | Sending signatures from nodes 1, 2, 3 (signing weight = 37500000000/50000000000) for state hash 6ca1caf019762757ef0b646b8ffe46aca6e4d24901f303cbaa90ed341f45443fc79a8a7c276ba81dfe6888b04e9b118f | |
| node2 | 5m 58.128s | 2025-10-19 05:51:00.341 | 8761 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectTeacher: | Starting synchronization in the role of the sender. | |
| node4 | 5m 58.194s | 2025-10-19 05:51:00.407 | 300 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectSyncHelper: | Starting reconnect in role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":2,"round":380} [com.swirlds.logging.legacy.payload.ReconnectStartPayload] | |
| node4 | 5m 58.196s | 2025-10-19 05:51:00.409 | 301 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectLearner: | Receiving signed state signatures | |
| node4 | 5m 58.199s | 2025-10-19 05:51:00.412 | 302 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectLearner: | Received signatures from nodes 1, 2, 3 | |
| node2 | 5m 58.250s | 2025-10-19 05:51:00.463 | 8777 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | TeachingSynchronizer: | sending tree rooted at com.swirlds.virtualmap.VirtualMap with route [] | |
| node2 | 5m 58.259s | 2025-10-19 05:51:00.472 | 8778 | INFO | RECONNECT | <<work group teaching-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@4bad92a5 start run() | |
| node4 | 5m 58.406s | 2025-10-19 05:51:00.619 | 329 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | learner calls receiveTree() | |
| node4 | 5m 58.406s | 2025-10-19 05:51:00.619 | 330 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | synchronizing tree | |
| node4 | 5m 58.407s | 2025-10-19 05:51:00.620 | 331 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | receiving tree rooted at com.swirlds.virtualmap.VirtualMap with route [] | |
| node4 | 5m 58.415s | 2025-10-19 05:51:00.628 | 332 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@5351467f start run() | |
| node4 | 5m 58.476s | 2025-10-19 05:51:00.689 | 333 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | ReconnectNodeRemover: | setPathInformation(): firstLeafPath: 4 -> 4, lastLeafPath: 8 -> 8 | |
| node4 | 5m 58.476s | 2025-10-19 05:51:00.689 | 334 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | ReconnectNodeRemover: | setPathInformation(): done | |
| node4 | 5m 58.630s | 2025-10-19 05:51:00.843 | 335 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushTask: | learner thread finished the learning loop for the current subtree | |
| node4 | 5m 58.631s | 2025-10-19 05:51:00.844 | 336 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushVirtualTreeView: | call nodeRemover.allNodesReceived() | |
| node4 | 5m 58.631s | 2025-10-19 05:51:00.844 | 337 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | ReconnectNodeRemover: | allNodesReceived() | |
| node4 | 5m 58.632s | 2025-10-19 05:51:00.845 | 338 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | ReconnectNodeRemover: | allNodesReceived(): done | |
| node4 | 5m 58.632s | 2025-10-19 05:51:00.845 | 339 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushVirtualTreeView: | call root.endLearnerReconnect() | |
| node4 | 5m 58.632s | 2025-10-19 05:51:00.845 | 340 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | call reconnectIterator.close() | |
| node4 | 5m 58.632s | 2025-10-19 05:51:00.845 | 341 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | call setHashPrivate() | |
| node4 | 5m 58.656s | 2025-10-19 05:51:00.869 | 351 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | call postInit() | |
| node4 | 5m 58.656s | 2025-10-19 05:51:00.869 | 353 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | endLearnerReconnect() complete | |
| node4 | 5m 58.656s | 2025-10-19 05:51:00.869 | 354 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushVirtualTreeView: | close() complete | |
| node4 | 5m 58.657s | 2025-10-19 05:51:00.870 | 355 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushTask: | learner thread closed input, output, and view for the current subtree | |
| node4 | 5m 58.657s | 2025-10-19 05:51:00.870 | 356 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@5351467f finish run() | |
| node4 | 5m 58.658s | 2025-10-19 05:51:00.871 | 357 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | received tree rooted at com.swirlds.virtualmap.VirtualMap with route [] | |
| node4 | 5m 58.659s | 2025-10-19 05:51:00.872 | 358 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | synchronization complete | |
| node4 | 5m 58.660s | 2025-10-19 05:51:00.873 | 359 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | learner calls initialize() | |
| node4 | 5m 58.660s | 2025-10-19 05:51:00.873 | 360 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | initializing tree | |
| node4 | 5m 58.660s | 2025-10-19 05:51:00.873 | 361 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | initialization complete | |
| node4 | 5m 58.661s | 2025-10-19 05:51:00.874 | 362 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | learner calls hash() | |
| node4 | 5m 58.661s | 2025-10-19 05:51:00.874 | 363 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | hashing tree | |
| node4 | 5m 58.661s | 2025-10-19 05:51:00.874 | 364 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | hashing complete | |
| node4 | 5m 58.662s | 2025-10-19 05:51:00.875 | 365 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | learner calls logStatistics() | |
| node4 | 5m 58.665s | 2025-10-19 05:51:00.878 | 366 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | Finished synchronization {"timeInSeconds":0.252,"hashTimeInSeconds":0.0,"initializationTimeInSeconds":0.0,"totalNodes":9,"leafNodes":5,"redundantLeafNodes":2,"internalNodes":4,"redundantInternalNodes":0} [com.swirlds.logging.legacy.payload.SynchronizationCompletePayload] | |
| node4 | 5m 58.666s | 2025-10-19 05:51:00.879 | 367 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | ReconnectMapMetrics: transfersFromTeacher=9; transfersFromLearner=8; internalHashes=3; internalCleanHashes=0; internalData=0; internalCleanData=0; leafHashes=5; leafCleanHashes=2; leafData=5; leafCleanData=2 | |
| node4 | 5m 58.666s | 2025-10-19 05:51:00.879 | 368 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | learner is done synchronizing | |
| node4 | 5m 58.667s | 2025-10-19 05:51:00.880 | 369 | INFO | STARTUP | <<reconnect: reconnect-controller>> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 5m 58.673s | 2025-10-19 05:51:00.886 | 370 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectLearner: | Reconnect data usage report {"dataMegabytes":0.0058650970458984375} [com.swirlds.logging.legacy.payload.ReconnectDataUsagePayload] | |
| node2 | 5m 58.691s | 2025-10-19 05:51:00.904 | 8782 | INFO | RECONNECT | <<work group teaching-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@4bad92a5 finish run() | |
| node2 | 5m 58.692s | 2025-10-19 05:51:00.905 | 8783 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | TeachingSynchronizer: | finished sending tree | |
| node2 | 5m 58.694s | 2025-10-19 05:51:00.907 | 8786 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectTeacher: | Finished synchronization in the role of the sender. | |
| node2 | 5m 58.751s | 2025-10-19 05:51:00.964 | 8787 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectTeacher: | Finished reconnect in the role of the sender. {"receiving":false,"nodeId":2,"otherNodeId":4,"round":766,"success":false} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload] | |
| node4 | 5m 58.769s | 2025-10-19 05:51:00.982 | 371 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectSyncHelper: | Finished reconnect in the role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":2,"round":766,"success":false} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload] | |
| node4 | 5m 58.770s | 2025-10-19 05:51:00.983 | 372 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectSyncHelper: | Information for state received during reconnect: | |
| Round: 766 Timestamp: 2025-10-19T05:50:58.927746Z Next consensus number: 23075 Legacy running event hash: d796a0df7e154b1432c6e1097d37aff054c7c04e7fe89648b83f72561dc18f646bd072eb07fbef7d25fd5bfa6431518d Legacy running event mnemonic: ask-tomorrow-section-glad Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1617987221 Root hash: 6ca1caf019762757ef0b646b8ffe46aca6e4d24901f303cbaa90ed341f45443fc79a8a7c276ba81dfe6888b04e9b118f (root) VirtualMap state / neither-pear-conduct-turtle | |||||||||
| node4 | 5m 58.772s | 2025-10-19 05:51:00.985 | 374 | DEBUG | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectStateLoader: | `loadReconnectState` : reloading state | |
| node4 | 5m 58.773s | 2025-10-19 05:51:00.986 | 375 | INFO | STARTUP | <<reconnect: reconnect-controller>> | ConsistencyTestingToolState: | State initialized with state long -1675769155663753583. | |
| node4 | 5m 58.774s | 2025-10-19 05:51:00.987 | 376 | INFO | STARTUP | <<reconnect: reconnect-controller>> | ConsistencyTestingToolState: | State initialized with 766 rounds handled. | |
| node4 | 5m 58.774s | 2025-10-19 05:51:00.987 | 377 | INFO | STARTUP | <<reconnect: reconnect-controller>> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv | |
| node4 | 5m 58.774s | 2025-10-19 05:51:00.987 | 378 | INFO | STARTUP | <<reconnect: reconnect-controller>> | TransactionHandlingHistory: | Log file found. Parsing previous history | |
| node4 | 5m 58.794s | 2025-10-19 05:51:01.007 | 383 | INFO | STATE_TO_DISK | <<reconnect: reconnect-controller>> | DefaultSavedStateController: | Signed state from round 766 created, will eventually be written to disk, for reason: RECONNECT | |
| node4 | 5m 58.795s | 2025-10-19 05:51:01.008 | 384 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | StatusStateMachine: | Platform spent 916.0 ms in BEHIND. Now in RECONNECT_COMPLETE | |
| node4 | 5m 58.797s | 2025-10-19 05:51:01.010 | 386 | INFO | STARTUP | <platformForkJoinThread-1> | Shadowgraph: | Shadowgraph starting from expiration threshold 739 | |
| node4 | 5m 58.799s | 2025-10-19 05:51:01.012 | 388 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 766 state to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/766 | |
| node4 | 5m 58.801s | 2025-10-19 05:51:01.014 | 389 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for round 766 | |
| node4 | 5m 58.808s | 2025-10-19 05:51:01.021 | 395 | INFO | EVENT_STREAM | <<reconnect: reconnect-controller>> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: d796a0df7e154b1432c6e1097d37aff054c7c04e7fe89648b83f72561dc18f646bd072eb07fbef7d25fd5bfa6431518d | |
| node4 | 5m 58.808s | 2025-10-19 05:51:01.021 | 397 | INFO | STARTUP | <platformForkJoinThread-5> | PcesFileManager: | Due to recent operations on this node, the local preconsensus event stream will have a discontinuity. The last file with the old origin round is 2025-10-19T05+45+18.383543156Z_seq0_minr1_maxr381_orgn0.pces. All future files will have an origin round of 766. | |
| node2 | 5m 58.897s | 2025-10-19 05:51:01.110 | 8789 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 769 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 5m 58.963s | 2025-10-19 05:51:01.176 | 8752 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 769 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 5m 58.973s | 2025-10-19 05:51:01.186 | 8927 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 769 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 5m 58.973s | 2025-10-19 05:51:01.186 | 9149 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 769 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 5m 58.979s | 2025-10-19 05:51:01.192 | 426 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for round 766 | |
| node4 | 5m 58.983s | 2025-10-19 05:51:01.196 | 427 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 766 Timestamp: 2025-10-19T05:50:58.927746Z Next consensus number: 23075 Legacy running event hash: d796a0df7e154b1432c6e1097d37aff054c7c04e7fe89648b83f72561dc18f646bd072eb07fbef7d25fd5bfa6431518d Legacy running event mnemonic: ask-tomorrow-section-glad Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1617987221 Root hash: 6ca1caf019762757ef0b646b8ffe46aca6e4d24901f303cbaa90ed341f45443fc79a8a7c276ba81dfe6888b04e9b118f (root) VirtualMap state / neither-pear-conduct-turtle | |||||||||
| node4 | 5m 59.022s | 2025-10-19 05:51:01.235 | 428 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+45+18.383543156Z_seq0_minr1_maxr381_orgn0.pces | |||||||||
| node4 | 5m 59.023s | 2025-10-19 05:51:01.236 | 429 | WARN | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | No preconsensus event files meeting specified criteria found to copy. Lower bound: 739 | |
| node4 | 5m 59.031s | 2025-10-19 05:51:01.244 | 430 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 766 to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/766 {"round":766,"freezeState":false,"reason":"RECONNECT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/766/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 5m 59.035s | 2025-10-19 05:51:01.248 | 431 | INFO | PLATFORM_STATUS | <platformForkJoinThread-5> | StatusStateMachine: | Platform spent 238.0 ms in RECONNECT_COMPLETE. Now in CHECKING | |
| node1 | 5m 59.108s | 2025-10-19 05:51:01.321 | 9152 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 769 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/769 | |
| node1 | 5m 59.109s | 2025-10-19 05:51:01.322 | 9153 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 769 | |
| node3 | 5m 59.114s | 2025-10-19 05:51:01.327 | 8755 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 769 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/769 | |
| node3 | 5m 59.115s | 2025-10-19 05:51:01.328 | 8756 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 769 | |
| node0 | 5m 59.171s | 2025-10-19 05:51:01.384 | 8930 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 769 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/769 | |
| node2 | 5m 59.171s | 2025-10-19 05:51:01.384 | 8808 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 769 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/769 | |
| node2 | 5m 59.171s | 2025-10-19 05:51:01.384 | 8809 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/45 for round 769 | |
| node0 | 5m 59.172s | 2025-10-19 05:51:01.385 | 8931 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 769 | |
| node1 | 5m 59.194s | 2025-10-19 05:51:01.407 | 9192 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 769 | |
| node1 | 5m 59.196s | 2025-10-19 05:51:01.409 | 9193 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 769 Timestamp: 2025-10-19T05:51:00.240252Z Next consensus number: 23146 Legacy running event hash: dac073a4ef8e67efbb64ee27f1034bf0502861fbdceea3c36209c08aca48b32a903b933c204caf7ee7ebd00e3fc7f185 Legacy running event mnemonic: pool-innocent-atom-worth Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1136646770 Root hash: d7416a27d4d1525a6dbea3eafaede93e69342744a1d8795d17af0df3a7b80ee8303451a817acd093c0baccc518c05510 (root) VirtualMap state / blood-eager-grass-ensure | |||||||||
| node3 | 5m 59.196s | 2025-10-19 05:51:01.409 | 8791 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 769 | |
| node3 | 5m 59.199s | 2025-10-19 05:51:01.412 | 8792 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 769 Timestamp: 2025-10-19T05:51:00.240252Z Next consensus number: 23146 Legacy running event hash: dac073a4ef8e67efbb64ee27f1034bf0502861fbdceea3c36209c08aca48b32a903b933c204caf7ee7ebd00e3fc7f185 Legacy running event mnemonic: pool-innocent-atom-worth Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1136646770 Root hash: d7416a27d4d1525a6dbea3eafaede93e69342744a1d8795d17af0df3a7b80ee8303451a817acd093c0baccc518c05510 (root) VirtualMap state / blood-eager-grass-ensure | |||||||||
| node1 | 5m 59.203s | 2025-10-19 05:51:01.416 | 9194 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+45+18.447974781Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+49+04.922298107Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 5m 59.203s | 2025-10-19 05:51:01.416 | 9195 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 742 File: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+49+04.922298107Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 5m 59.203s | 2025-10-19 05:51:01.416 | 9196 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 5m 59.207s | 2025-10-19 05:51:01.420 | 8793 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+49+04.982801688Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+45+18.434400858Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 5m 59.208s | 2025-10-19 05:51:01.421 | 9197 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 5m 59.208s | 2025-10-19 05:51:01.421 | 8794 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 742 File: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+49+04.982801688Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 5m 59.208s | 2025-10-19 05:51:01.421 | 8795 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 5m 59.209s | 2025-10-19 05:51:01.422 | 9198 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 769 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/769 {"round":769,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/769/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 5m 59.210s | 2025-10-19 05:51:01.423 | 9199 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/92 | |
| node3 | 5m 59.214s | 2025-10-19 05:51:01.427 | 8796 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 5m 59.215s | 2025-10-19 05:51:01.428 | 8797 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 769 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/769 {"round":769,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/769/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 5m 59.217s | 2025-10-19 05:51:01.430 | 8798 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/92 | |
| node2 | 5m 59.249s | 2025-10-19 05:51:01.462 | 8840 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/45 for round 769 | |
| node0 | 5m 59.250s | 2025-10-19 05:51:01.463 | 8970 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 769 | |
| node2 | 5m 59.251s | 2025-10-19 05:51:01.464 | 8841 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 769 Timestamp: 2025-10-19T05:51:00.240252Z Next consensus number: 23146 Legacy running event hash: dac073a4ef8e67efbb64ee27f1034bf0502861fbdceea3c36209c08aca48b32a903b933c204caf7ee7ebd00e3fc7f185 Legacy running event mnemonic: pool-innocent-atom-worth Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1136646770 Root hash: d7416a27d4d1525a6dbea3eafaede93e69342744a1d8795d17af0df3a7b80ee8303451a817acd093c0baccc518c05510 (root) VirtualMap state / blood-eager-grass-ensure | |||||||||
| node0 | 5m 59.252s | 2025-10-19 05:51:01.465 | 8971 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 769 Timestamp: 2025-10-19T05:51:00.240252Z Next consensus number: 23146 Legacy running event hash: dac073a4ef8e67efbb64ee27f1034bf0502861fbdceea3c36209c08aca48b32a903b933c204caf7ee7ebd00e3fc7f185 Legacy running event mnemonic: pool-innocent-atom-worth Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1136646770 Root hash: d7416a27d4d1525a6dbea3eafaede93e69342744a1d8795d17af0df3a7b80ee8303451a817acd093c0baccc518c05510 (root) VirtualMap state / blood-eager-grass-ensure | |||||||||
| node2 | 5m 59.258s | 2025-10-19 05:51:01.471 | 8842 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+49+04.918183583Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+45+18.277572870Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 5m 59.258s | 2025-10-19 05:51:01.471 | 8843 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 742 File: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+49+04.918183583Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node2 | 5m 59.258s | 2025-10-19 05:51:01.471 | 8844 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 5m 59.259s | 2025-10-19 05:51:01.472 | 8972 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+45+18.443550920Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+49+04.923731026Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 5m 59.259s | 2025-10-19 05:51:01.472 | 8973 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 742 File: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+49+04.923731026Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 5m 59.259s | 2025-10-19 05:51:01.472 | 8974 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 5m 59.263s | 2025-10-19 05:51:01.476 | 8845 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 5m 59.264s | 2025-10-19 05:51:01.477 | 8975 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 5m 59.264s | 2025-10-19 05:51:01.477 | 8976 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 769 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/769 {"round":769,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/769/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 5m 59.265s | 2025-10-19 05:51:01.478 | 8846 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 769 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/769 {"round":769,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/769/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 5m 59.266s | 2025-10-19 05:51:01.479 | 8977 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/92 | |
| node2 | 5m 59.267s | 2025-10-19 05:51:01.480 | 8847 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/92 | |
| node4 | 5m 59.426s | 2025-10-19 05:51:01.639 | 432 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ] | |
| node4 | 5m 59.430s | 2025-10-19 05:51:01.643 | 433 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node4 | 5m 59.695s | 2025-10-19 05:51:01.908 | 434 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:0 H:80f245232cbf BR:764), num remaining: 3 | |
| node4 | 5m 59.697s | 2025-10-19 05:51:01.910 | 435 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:3 H:09f4ee8da0f5 BR:764), num remaining: 2 | |
| node4 | 5m 59.697s | 2025-10-19 05:51:01.910 | 436 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:2 H:1b652efdb43c BR:764), num remaining: 1 | |
| node4 | 5m 59.697s | 2025-10-19 05:51:01.910 | 437 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:1 H:163503aa72f7 BR:764), num remaining: 0 | |
| node4 | 5m 59.831s | 2025-10-19 05:51:02.044 | 450 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 769 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 5m 59.977s | 2025-10-19 05:51:02.190 | 453 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 769 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/769 | |
| node4 | 5m 59.979s | 2025-10-19 05:51:02.192 | 455 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for round 769 | |
| node4 | 6.002m | 2025-10-19 05:51:02.308 | 525 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for round 769 | |
| node4 | 6.002m | 2025-10-19 05:51:02.312 | 526 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 769 Timestamp: 2025-10-19T05:51:00.240252Z Next consensus number: 23146 Legacy running event hash: dac073a4ef8e67efbb64ee27f1034bf0502861fbdceea3c36209c08aca48b32a903b933c204caf7ee7ebd00e3fc7f185 Legacy running event mnemonic: pool-innocent-atom-worth Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1136646770 Root hash: d7416a27d4d1525a6dbea3eafaede93e69342744a1d8795d17af0df3a7b80ee8303451a817acd093c0baccc518c05510 (root) VirtualMap state / blood-eager-grass-ensure | |||||||||
| node4 | 6.002m | 2025-10-19 05:51:02.321 | 527 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+45+18.383543156Z_seq0_minr1_maxr381_orgn0.pces Last file: data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+51+01.338466709Z_seq1_minr739_maxr1239_orgn766.pces | |||||||||
| node4 | 6.002m | 2025-10-19 05:51:02.321 | 528 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 742 File: data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+51+01.338466709Z_seq1_minr739_maxr1239_orgn766.pces | |||||||||
| node4 | 6.002m | 2025-10-19 05:51:02.321 | 529 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 6.002m | 2025-10-19 05:51:02.323 | 530 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 6.002m | 2025-10-19 05:51:02.324 | 531 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 769 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/769 {"round":769,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/769/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 6.002m | 2025-10-19 05:51:02.326 | 532 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1 | |
| node4 | 6m 3.369s | 2025-10-19 05:51:05.582 | 622 | INFO | PLATFORM_STATUS | <platformForkJoinThread-7> | StatusStateMachine: | Platform spent 4.3 s in CHECKING. Now in ACTIVE | |
| node2 | 6m 59.307s | 2025-10-19 05:52:01.520 | 10247 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 896 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 6m 59.316s | 2025-10-19 05:52:01.529 | 1901 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 896 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 6m 59.322s | 2025-10-19 05:52:01.535 | 10341 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 896 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 6m 59.333s | 2025-10-19 05:52:01.546 | 10170 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 896 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 6m 59.353s | 2025-10-19 05:52:01.566 | 10571 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 896 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 6m 59.491s | 2025-10-19 05:52:01.704 | 10250 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 896 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/896 | |
| node2 | 6m 59.491s | 2025-10-19 05:52:01.704 | 10251 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for round 896 | |
| node0 | 6m 59.531s | 2025-10-19 05:52:01.744 | 10344 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 896 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/896 | |
| node0 | 6m 59.532s | 2025-10-19 05:52:01.745 | 10345 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 896 | |
| node2 | 6m 59.571s | 2025-10-19 05:52:01.784 | 10290 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for round 896 | |
| node2 | 6m 59.572s | 2025-10-19 05:52:01.785 | 10291 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 896 Timestamp: 2025-10-19T05:52:00.521849386Z Next consensus number: 27898 Legacy running event hash: 918c11dc79f1e157b9f3fa134c07955a2f62dbdb4f33caee9fdffead263a030e8523341ac4a799ede77417894391b021 Legacy running event mnemonic: soldier-describe-dilemma-steel Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 773084052 Root hash: 9ff5403e2350ffa8b112cc494bc9b3b9575076efea5333bd421c4e7de5e0996f74c39935d3bfdf037f2546bea6044e39 (root) VirtualMap state / accident-amazing-head-arch | |||||||||
| node2 | 6m 59.579s | 2025-10-19 05:52:01.792 | 10292 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+49+04.918183583Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+45+18.277572870Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 6m 59.579s | 2025-10-19 05:52:01.792 | 10293 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 869 File: data/saved/preconsensus-events/2/2025/10/19/2025-10-19T05+49+04.918183583Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node2 | 6m 59.579s | 2025-10-19 05:52:01.792 | 10294 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 6m 59.582s | 2025-10-19 05:52:01.795 | 1904 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 896 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/896 | |
| node4 | 6m 59.583s | 2025-10-19 05:52:01.796 | 1905 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/17 for round 896 | |
| node2 | 6m 59.587s | 2025-10-19 05:52:01.800 | 10295 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 6m 59.587s | 2025-10-19 05:52:01.800 | 10296 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 896 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/896 {"round":896,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/896/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 6m 59.589s | 2025-10-19 05:52:01.802 | 10297 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/225 | |
| node0 | 6m 59.609s | 2025-10-19 05:52:01.822 | 10384 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 896 | |
| node0 | 6m 59.611s | 2025-10-19 05:52:01.824 | 10385 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 896 Timestamp: 2025-10-19T05:52:00.521849386Z Next consensus number: 27898 Legacy running event hash: 918c11dc79f1e157b9f3fa134c07955a2f62dbdb4f33caee9fdffead263a030e8523341ac4a799ede77417894391b021 Legacy running event mnemonic: soldier-describe-dilemma-steel Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 773084052 Root hash: 9ff5403e2350ffa8b112cc494bc9b3b9575076efea5333bd421c4e7de5e0996f74c39935d3bfdf037f2546bea6044e39 (root) VirtualMap state / accident-amazing-head-arch | |||||||||
| node0 | 6m 59.619s | 2025-10-19 05:52:01.832 | 10386 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+45+18.443550920Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+49+04.923731026Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 6m 59.620s | 2025-10-19 05:52:01.833 | 10387 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 869 File: data/saved/preconsensus-events/0/2025/10/19/2025-10-19T05+49+04.923731026Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 6m 59.620s | 2025-10-19 05:52:01.833 | 10388 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 6m 59.631s | 2025-10-19 05:52:01.844 | 10389 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 6m 59.632s | 2025-10-19 05:52:01.845 | 10390 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 896 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/896 {"round":896,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/896/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 6m 59.634s | 2025-10-19 05:52:01.847 | 10391 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/225 | |
| node3 | 6m 59.647s | 2025-10-19 05:52:01.860 | 10173 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 896 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/896 | |
| node3 | 6m 59.648s | 2025-10-19 05:52:01.861 | 10174 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 896 | |
| node4 | 6m 59.698s | 2025-10-19 05:52:01.911 | 1950 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/17 for round 896 | |
| node4 | 6m 59.700s | 2025-10-19 05:52:01.913 | 1951 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 896 Timestamp: 2025-10-19T05:52:00.521849386Z Next consensus number: 27898 Legacy running event hash: 918c11dc79f1e157b9f3fa134c07955a2f62dbdb4f33caee9fdffead263a030e8523341ac4a799ede77417894391b021 Legacy running event mnemonic: soldier-describe-dilemma-steel Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 773084052 Root hash: 9ff5403e2350ffa8b112cc494bc9b3b9575076efea5333bd421c4e7de5e0996f74c39935d3bfdf037f2546bea6044e39 (root) VirtualMap state / accident-amazing-head-arch | |||||||||
| node4 | 6m 59.710s | 2025-10-19 05:52:01.923 | 1952 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+45+18.383543156Z_seq0_minr1_maxr381_orgn0.pces Last file: data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+51+01.338466709Z_seq1_minr739_maxr1239_orgn766.pces | |||||||||
| node4 | 6m 59.710s | 2025-10-19 05:52:01.923 | 1953 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 869 File: data/saved/preconsensus-events/4/2025/10/19/2025-10-19T05+51+01.338466709Z_seq1_minr739_maxr1239_orgn766.pces | |||||||||
| node4 | 6m 59.710s | 2025-10-19 05:52:01.923 | 1954 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 6m 59.714s | 2025-10-19 05:52:01.927 | 1955 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 6m 59.714s | 2025-10-19 05:52:01.927 | 1956 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 896 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/896 {"round":896,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/896/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 6m 59.716s | 2025-10-19 05:52:01.929 | 1957 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/92 | |
| node1 | 6m 59.717s | 2025-10-19 05:52:01.930 | 10574 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 896 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/896 | |
| node1 | 6m 59.718s | 2025-10-19 05:52:01.931 | 10575 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 896 | |
| node3 | 6m 59.730s | 2025-10-19 05:52:01.943 | 10213 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 896 | |
| node3 | 6m 59.733s | 2025-10-19 05:52:01.946 | 10214 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 896 Timestamp: 2025-10-19T05:52:00.521849386Z Next consensus number: 27898 Legacy running event hash: 918c11dc79f1e157b9f3fa134c07955a2f62dbdb4f33caee9fdffead263a030e8523341ac4a799ede77417894391b021 Legacy running event mnemonic: soldier-describe-dilemma-steel Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 773084052 Root hash: 9ff5403e2350ffa8b112cc494bc9b3b9575076efea5333bd421c4e7de5e0996f74c39935d3bfdf037f2546bea6044e39 (root) VirtualMap state / accident-amazing-head-arch | |||||||||
| node3 | 6m 59.740s | 2025-10-19 05:52:01.953 | 10215 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+49+04.982801688Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+45+18.434400858Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 6m 59.740s | 2025-10-19 05:52:01.953 | 10216 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 869 File: data/saved/preconsensus-events/3/2025/10/19/2025-10-19T05+49+04.982801688Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 6m 59.741s | 2025-10-19 05:52:01.954 | 10217 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 6m 59.751s | 2025-10-19 05:52:01.964 | 10218 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 6m 59.752s | 2025-10-19 05:52:01.965 | 10219 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 896 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/896 {"round":896,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/896/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 6m 59.753s | 2025-10-19 05:52:01.966 | 10220 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/225 | |
| node1 | 6m 59.800s | 2025-10-19 05:52:02.013 | 10606 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 896 | |
| node1 | 6m 59.802s | 2025-10-19 05:52:02.015 | 10607 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 896 Timestamp: 2025-10-19T05:52:00.521849386Z Next consensus number: 27898 Legacy running event hash: 918c11dc79f1e157b9f3fa134c07955a2f62dbdb4f33caee9fdffead263a030e8523341ac4a799ede77417894391b021 Legacy running event mnemonic: soldier-describe-dilemma-steel Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 773084052 Root hash: 9ff5403e2350ffa8b112cc494bc9b3b9575076efea5333bd421c4e7de5e0996f74c39935d3bfdf037f2546bea6044e39 (root) VirtualMap state / accident-amazing-head-arch | |||||||||
| node1 | 6m 59.810s | 2025-10-19 05:52:02.023 | 10608 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+45+18.447974781Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+49+04.922298107Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 6m 59.810s | 2025-10-19 05:52:02.023 | 10609 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 869 File: data/saved/preconsensus-events/1/2025/10/19/2025-10-19T05+49+04.922298107Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 6m 59.810s | 2025-10-19 05:52:02.023 | 10610 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 6m 59.821s | 2025-10-19 05:52:02.034 | 10611 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 6m 59.822s | 2025-10-19 05:52:02.035 | 10612 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 896 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/896 {"round":896,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/896/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 6m 59.823s | 2025-10-19 05:52:02.036 | 10613 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/225 | |
| node0 | 7m 53.883s | 2025-10-19 05:52:56.096 | 11696 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith2 0 to 2>> | NetworkUtils: | Connection broken: 0 -> 2 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-19T05:52:56.095553117Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:384) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:428) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node0 | 7m 53.901s | 2025-10-19 05:52:56.114 | 11697 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 0 to 4>> | NetworkUtils: | Connection broken: 0 -> 4 | |
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-10-19T05:52:56.111646588Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:384)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:428)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more