| node1 | 0.000ns | 2025-09-24 15:16:58.083 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node1 | 90.000ms | 2025-09-24 15:16:58.173 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node1 | 106.000ms | 2025-09-24 15:16:58.189 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node1 | 222.000ms | 2025-09-24 15:16:58.305 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [1] are set to run locally | |
| node1 | 229.000ms | 2025-09-24 15:16:58.312 | 5 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | Registering ConsistencyTestingToolState with ConstructableRegistry | |
| node1 | 241.000ms | 2025-09-24 15:16:58.324 | 6 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node3 | 427.000ms | 2025-09-24 15:16:58.510 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node3 | 529.000ms | 2025-09-24 15:16:58.612 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node3 | 547.000ms | 2025-09-24 15:16:58.630 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 599.000ms | 2025-09-24 15:16:58.682 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node1 | 662.000ms | 2025-09-24 15:16:58.745 | 9 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | ConsistencyTestingToolState is registered with ConstructableRegistry | |
| node1 | 663.000ms | 2025-09-24 15:16:58.746 | 10 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node3 | 672.000ms | 2025-09-24 15:16:58.755 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [3] are set to run locally | |
| node3 | 680.000ms | 2025-09-24 15:16:58.763 | 5 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | Registering ConsistencyTestingToolState with ConstructableRegistry | |
| node4 | 687.000ms | 2025-09-24 15:16:58.770 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node3 | 693.000ms | 2025-09-24 15:16:58.776 | 6 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 703.000ms | 2025-09-24 15:16:58.786 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 823.000ms | 2025-09-24 15:16:58.906 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [4] are set to run locally | |
| node2 | 828.000ms | 2025-09-24 15:16:58.911 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node4 | 831.000ms | 2025-09-24 15:16:58.914 | 5 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | Registering ConsistencyTestingToolState with ConstructableRegistry | |
| node4 | 845.000ms | 2025-09-24 15:16:58.928 | 6 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node2 | 927.000ms | 2025-09-24 15:16:59.010 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node2 | 945.000ms | 2025-09-24 15:16:59.028 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node2 | 1.072s | 2025-09-24 15:16:59.155 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [2] are set to run locally | |
| node2 | 1.079s | 2025-09-24 15:16:59.162 | 5 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | Registering ConsistencyTestingToolState with ConstructableRegistry | |
| node2 | 1.092s | 2025-09-24 15:16:59.175 | 6 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node3 | 1.154s | 2025-09-24 15:16:59.237 | 9 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | ConsistencyTestingToolState is registered with ConstructableRegistry | |
| node3 | 1.155s | 2025-09-24 15:16:59.238 | 10 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node4 | 1.309s | 2025-09-24 15:16:59.392 | 9 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | ConsistencyTestingToolState is registered with ConstructableRegistry | |
| node4 | 1.309s | 2025-09-24 15:16:59.392 | 10 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node2 | 1.538s | 2025-09-24 15:16:59.621 | 9 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | ConsistencyTestingToolState is registered with ConstructableRegistry | |
| node2 | 1.539s | 2025-09-24 15:16:59.622 | 10 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node1 | 1.595s | 2025-09-24 15:16:59.678 | 11 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 932ms | |
| node1 | 1.603s | 2025-09-24 15:16:59.686 | 12 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node1 | 1.606s | 2025-09-24 15:16:59.689 | 13 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node1 | 1.647s | 2025-09-24 15:16:59.730 | 14 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node1 | 1.707s | 2025-09-24 15:16:59.790 | 15 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node1 | 1.708s | 2025-09-24 15:16:59.791 | 16 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node3 | 2.095s | 2025-09-24 15:17:00.178 | 11 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 939ms | |
| node3 | 2.108s | 2025-09-24 15:17:00.191 | 12 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node3 | 2.112s | 2025-09-24 15:17:00.195 | 13 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node3 | 2.157s | 2025-09-24 15:17:00.240 | 14 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node3 | 2.219s | 2025-09-24 15:17:00.302 | 15 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node3 | 2.220s | 2025-09-24 15:17:00.303 | 16 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node4 | 2.382s | 2025-09-24 15:17:00.465 | 11 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1071ms | |
| node4 | 2.391s | 2025-09-24 15:17:00.474 | 12 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node4 | 2.394s | 2025-09-24 15:17:00.477 | 13 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 2.434s | 2025-09-24 15:17:00.517 | 14 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node4 | 2.496s | 2025-09-24 15:17:00.579 | 15 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node4 | 2.496s | 2025-09-24 15:17:00.579 | 16 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node0 | 2.626s | 2025-09-24 15:17:00.709 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node2 | 2.663s | 2025-09-24 15:17:00.746 | 11 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1124ms | |
| node2 | 2.675s | 2025-09-24 15:17:00.758 | 12 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node2 | 2.678s | 2025-09-24 15:17:00.761 | 13 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node2 | 2.718s | 2025-09-24 15:17:00.801 | 14 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node0 | 2.728s | 2025-09-24 15:17:00.811 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node0 | 2.745s | 2025-09-24 15:17:00.828 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node2 | 2.781s | 2025-09-24 15:17:00.864 | 15 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node2 | 2.782s | 2025-09-24 15:17:00.865 | 16 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node0 | 2.876s | 2025-09-24 15:17:00.959 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [0] are set to run locally | |
| node0 | 2.885s | 2025-09-24 15:17:00.968 | 5 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | Registering ConsistencyTestingToolState with ConstructableRegistry | |
| node0 | 2.899s | 2025-09-24 15:17:00.982 | 6 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node0 | 3.424s | 2025-09-24 15:17:01.507 | 9 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | ConsistencyTestingToolState is registered with ConstructableRegistry | |
| node0 | 3.425s | 2025-09-24 15:17:01.508 | 10 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node1 | 3.718s | 2025-09-24 15:17:01.801 | 17 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node1 | 3.802s | 2025-09-24 15:17:01.885 | 20 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 3.805s | 2025-09-24 15:17:01.888 | 21 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node1 | 3.806s | 2025-09-24 15:17:01.889 | 22 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node3 | 4.305s | 2025-09-24 15:17:02.388 | 17 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node3 | 4.393s | 2025-09-24 15:17:02.476 | 20 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 4.395s | 2025-09-24 15:17:02.478 | 21 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node3 | 4.396s | 2025-09-24 15:17:02.479 | 22 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 4.524s | 2025-09-24 15:17:02.607 | 17 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node0 | 4.545s | 2025-09-24 15:17:02.628 | 11 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1119ms | |
| node0 | 4.557s | 2025-09-24 15:17:02.640 | 12 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node0 | 4.561s | 2025-09-24 15:17:02.644 | 13 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node1 | 4.580s | 2025-09-24 15:17:02.663 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 4.583s | 2025-09-24 15:17:02.666 | 32 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node1 | 4.588s | 2025-09-24 15:17:02.671 | 33 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node1 | 4.599s | 2025-09-24 15:17:02.682 | 34 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 4.602s | 2025-09-24 15:17:02.685 | 35 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 4.609s | 2025-09-24 15:17:02.692 | 14 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node4 | 4.613s | 2025-09-24 15:17:02.696 | 20 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 4.615s | 2025-09-24 15:17:02.698 | 21 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node4 | 4.616s | 2025-09-24 15:17:02.699 | 22 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node0 | 4.683s | 2025-09-24 15:17:02.766 | 15 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node0 | 4.684s | 2025-09-24 15:17:02.767 | 16 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node2 | 4.962s | 2025-09-24 15:17:03.045 | 17 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node2 | 5.057s | 2025-09-24 15:17:03.140 | 20 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 5.060s | 2025-09-24 15:17:03.143 | 21 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node2 | 5.061s | 2025-09-24 15:17:03.144 | 22 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node3 | 5.241s | 2025-09-24 15:17:03.324 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 5.244s | 2025-09-24 15:17:03.327 | 32 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node3 | 5.251s | 2025-09-24 15:17:03.334 | 33 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node3 | 5.262s | 2025-09-24 15:17:03.345 | 34 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 5.264s | 2025-09-24 15:17:03.347 | 35 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5.445s | 2025-09-24 15:17:03.528 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5.448s | 2025-09-24 15:17:03.531 | 32 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node4 | 5.456s | 2025-09-24 15:17:03.539 | 33 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node4 | 5.469s | 2025-09-24 15:17:03.552 | 34 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5.472s | 2025-09-24 15:17:03.555 | 35 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 5.710s | 2025-09-24 15:17:03.793 | 36 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26214882] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=246090, randomLong=6325065182182681180, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=12131, randomLong=-3939866847143942541, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1172999, data=35, exception=null] OS Health Check Report - Complete (took 1021 ms) | |||||||||
| node1 | 5.742s | 2025-09-24 15:17:03.825 | 37 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node1 | 5.751s | 2025-09-24 15:17:03.834 | 38 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node1 | 5.756s | 2025-09-24 15:17:03.839 | 39 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node1 | 5.834s | 2025-09-24 15:17:03.917 | 40 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IjqFwA==", "port": 30124 }, { "ipAddressV4": "CoAANA==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IjksRQ==", "port": 30125 }, { "ipAddressV4": "CoAAMw==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Iodj+Q==", "port": 30126 }, { "ipAddressV4": "CoAAMQ==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IkIKcw==", "port": 30127 }, { "ipAddressV4": "CoAAMg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IimGKA==", "port": 30128 }, { "ipAddressV4": "CoAANQ==", "port": 30128 }] }] } | |||||||||
| node1 | 5.854s | 2025-09-24 15:17:03.937 | 41 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/1/ConsistencyTestLog.csv | |
| node1 | 5.854s | 2025-09-24 15:17:03.937 | 42 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node1 | 5.868s | 2025-09-24 15:17:03.951 | 43 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 06db5e8df89d8b5fb2275ff57c4243fca3ffbf2fcf4d068fbccfe056959778ce6d40eda9a9327b0b19f780c02bdc025f (root) ConsistencyTestingToolState / swarm-spider-unaware-neither 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise | |||||||||
| node2 | 5.931s | 2025-09-24 15:17:04.014 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 5.934s | 2025-09-24 15:17:04.017 | 32 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node2 | 5.942s | 2025-09-24 15:17:04.025 | 33 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node2 | 5.956s | 2025-09-24 15:17:04.039 | 34 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 5.958s | 2025-09-24 15:17:04.041 | 35 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 6.084s | 2025-09-24 15:17:04.167 | 45 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node1 | 6.088s | 2025-09-24 15:17:04.171 | 46 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node1 | 6.093s | 2025-09-24 15:17:04.176 | 47 | INFO | STARTUP | <<start-node-1>> | ConsistencyTestingToolMain: | init called in Main for node 1. | |
| node1 | 6.093s | 2025-09-24 15:17:04.176 | 48 | INFO | STARTUP | <<start-node-1>> | SwirldsPlatform: | Starting platform 1 | |
| node1 | 6.095s | 2025-09-24 15:17:04.178 | 49 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node1 | 6.098s | 2025-09-24 15:17:04.181 | 50 | INFO | STARTUP | <<start-node-1>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node1 | 6.099s | 2025-09-24 15:17:04.182 | 51 | INFO | STARTUP | <<start-node-1>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node1 | 6.100s | 2025-09-24 15:17:04.183 | 52 | INFO | STARTUP | <<start-node-1>> | InputWireChecks: | All input wires have been bound. | |
| node1 | 6.101s | 2025-09-24 15:17:04.184 | 53 | WARN | STARTUP | <<start-node-1>> | PcesFileTracker: | No preconsensus event files available | |
| node1 | 6.101s | 2025-09-24 15:17:04.184 | 54 | INFO | STARTUP | <<start-node-1>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node1 | 6.103s | 2025-09-24 15:17:04.186 | 55 | INFO | STARTUP | <<start-node-1>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node1 | 6.104s | 2025-09-24 15:17:04.187 | 56 | INFO | STARTUP | <<app: appMain 1>> | ConsistencyTestingToolMain: | run called in Main. | |
| node1 | 6.105s | 2025-09-24 15:17:04.188 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | DefaultStatusStateMachine: | Platform spent 182.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node1 | 6.110s | 2025-09-24 15:17:04.193 | 58 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | DefaultStatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node3 | 6.376s | 2025-09-24 15:17:04.459 | 36 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26286531] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=225820, randomLong=5989717832050948180, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=8710, randomLong=3246904183020734790, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1510349, data=35, exception=null] OS Health Check Report - Complete (took 1024 ms) | |||||||||
| node3 | 6.410s | 2025-09-24 15:17:04.493 | 37 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node3 | 6.419s | 2025-09-24 15:17:04.502 | 38 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node3 | 6.425s | 2025-09-24 15:17:04.508 | 39 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node3 | 6.516s | 2025-09-24 15:17:04.599 | 40 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IjqFwA==", "port": 30124 }, { "ipAddressV4": "CoAANA==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IjksRQ==", "port": 30125 }, { "ipAddressV4": "CoAAMw==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Iodj+Q==", "port": 30126 }, { "ipAddressV4": "CoAAMQ==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IkIKcw==", "port": 30127 }, { "ipAddressV4": "CoAAMg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IimGKA==", "port": 30128 }, { "ipAddressV4": "CoAANQ==", "port": 30128 }] }] } | |||||||||
| node3 | 6.540s | 2025-09-24 15:17:04.623 | 41 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/3/ConsistencyTestLog.csv | |
| node3 | 6.541s | 2025-09-24 15:17:04.624 | 42 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node3 | 6.558s | 2025-09-24 15:17:04.641 | 43 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 06db5e8df89d8b5fb2275ff57c4243fca3ffbf2fcf4d068fbccfe056959778ce6d40eda9a9327b0b19f780c02bdc025f (root) ConsistencyTestingToolState / swarm-spider-unaware-neither 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise | |||||||||
| node4 | 6.593s | 2025-09-24 15:17:04.676 | 36 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26248099] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=208240, randomLong=-5395224680014756144, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=13240, randomLong=-4316197240935539911, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1210130, data=35, exception=null] OS Health Check Report - Complete (took 1023 ms) | |||||||||
| node4 | 6.625s | 2025-09-24 15:17:04.708 | 37 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node4 | 6.634s | 2025-09-24 15:17:04.717 | 38 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node4 | 6.640s | 2025-09-24 15:17:04.723 | 39 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node4 | 6.724s | 2025-09-24 15:17:04.807 | 40 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IjqFwA==", "port": 30124 }, { "ipAddressV4": "CoAANA==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IjksRQ==", "port": 30125 }, { "ipAddressV4": "CoAAMw==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Iodj+Q==", "port": 30126 }, { "ipAddressV4": "CoAAMQ==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IkIKcw==", "port": 30127 }, { "ipAddressV4": "CoAAMg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IimGKA==", "port": 30128 }, { "ipAddressV4": "CoAANQ==", "port": 30128 }] }] } | |||||||||
| node4 | 6.746s | 2025-09-24 15:17:04.829 | 41 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv | |
| node4 | 6.747s | 2025-09-24 15:17:04.830 | 42 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node4 | 6.762s | 2025-09-24 15:17:04.845 | 43 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 06db5e8df89d8b5fb2275ff57c4243fca3ffbf2fcf4d068fbccfe056959778ce6d40eda9a9327b0b19f780c02bdc025f (root) ConsistencyTestingToolState / swarm-spider-unaware-neither 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise | |||||||||
| node3 | 6.783s | 2025-09-24 15:17:04.866 | 45 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node3 | 6.789s | 2025-09-24 15:17:04.872 | 46 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node3 | 6.795s | 2025-09-24 15:17:04.878 | 47 | INFO | STARTUP | <<start-node-3>> | ConsistencyTestingToolMain: | init called in Main for node 3. | |
| node3 | 6.796s | 2025-09-24 15:17:04.879 | 48 | INFO | STARTUP | <<start-node-3>> | SwirldsPlatform: | Starting platform 3 | |
| node3 | 6.797s | 2025-09-24 15:17:04.880 | 49 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node3 | 6.801s | 2025-09-24 15:17:04.884 | 50 | INFO | STARTUP | <<start-node-3>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node3 | 6.802s | 2025-09-24 15:17:04.885 | 51 | INFO | STARTUP | <<start-node-3>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node3 | 6.803s | 2025-09-24 15:17:04.886 | 52 | INFO | STARTUP | <<start-node-3>> | InputWireChecks: | All input wires have been bound. | |
| node3 | 6.805s | 2025-09-24 15:17:04.888 | 53 | WARN | STARTUP | <<start-node-3>> | PcesFileTracker: | No preconsensus event files available | |
| node3 | 6.805s | 2025-09-24 15:17:04.888 | 54 | INFO | STARTUP | <<start-node-3>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node3 | 6.806s | 2025-09-24 15:17:04.889 | 55 | INFO | STARTUP | <<start-node-3>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node3 | 6.808s | 2025-09-24 15:17:04.891 | 56 | INFO | STARTUP | <<app: appMain 3>> | ConsistencyTestingToolMain: | run called in Main. | |
| node3 | 6.809s | 2025-09-24 15:17:04.892 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | DefaultStatusStateMachine: | Platform spent 185.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node3 | 6.814s | 2025-09-24 15:17:04.897 | 58 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | DefaultStatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node0 | 6.885s | 2025-09-24 15:17:04.968 | 17 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node4 | 6.971s | 2025-09-24 15:17:05.054 | 45 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node0 | 6.976s | 2025-09-24 15:17:05.059 | 20 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 6.977s | 2025-09-24 15:17:05.060 | 46 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node0 | 6.979s | 2025-09-24 15:17:05.062 | 21 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node0 | 6.980s | 2025-09-24 15:17:05.063 | 22 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 6.983s | 2025-09-24 15:17:05.066 | 47 | INFO | STARTUP | <<start-node-4>> | ConsistencyTestingToolMain: | init called in Main for node 4. | |
| node4 | 6.984s | 2025-09-24 15:17:05.067 | 48 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | Starting platform 4 | |
| node4 | 6.985s | 2025-09-24 15:17:05.068 | 49 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node4 | 6.989s | 2025-09-24 15:17:05.072 | 50 | INFO | STARTUP | <<start-node-4>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node4 | 6.990s | 2025-09-24 15:17:05.073 | 51 | INFO | STARTUP | <<start-node-4>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node4 | 6.991s | 2025-09-24 15:17:05.074 | 52 | INFO | STARTUP | <<start-node-4>> | InputWireChecks: | All input wires have been bound. | |
| node4 | 6.993s | 2025-09-24 15:17:05.076 | 53 | WARN | STARTUP | <<start-node-4>> | PcesFileTracker: | No preconsensus event files available | |
| node4 | 6.993s | 2025-09-24 15:17:05.076 | 54 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node4 | 6.994s | 2025-09-24 15:17:05.077 | 55 | INFO | STARTUP | <<start-node-4>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node4 | 6.995s | 2025-09-24 15:17:05.078 | 56 | INFO | STARTUP | <<app: appMain 4>> | ConsistencyTestingToolMain: | run called in Main. | |
| node4 | 6.997s | 2025-09-24 15:17:05.080 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | DefaultStatusStateMachine: | Platform spent 177.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node4 | 7.003s | 2025-09-24 15:17:05.086 | 58 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | DefaultStatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node2 | 7.080s | 2025-09-24 15:17:05.163 | 36 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26238900] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=221260, randomLong=2247484104997746047, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=28020, randomLong=2249827863274467436, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1775661, data=35, exception=null] OS Health Check Report - Complete (took 1027 ms) | |||||||||
| node2 | 7.114s | 2025-09-24 15:17:05.197 | 37 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node2 | 7.123s | 2025-09-24 15:17:05.206 | 38 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node2 | 7.129s | 2025-09-24 15:17:05.212 | 39 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node2 | 7.217s | 2025-09-24 15:17:05.300 | 40 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IjqFwA==", "port": 30124 }, { "ipAddressV4": "CoAANA==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IjksRQ==", "port": 30125 }, { "ipAddressV4": "CoAAMw==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Iodj+Q==", "port": 30126 }, { "ipAddressV4": "CoAAMQ==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IkIKcw==", "port": 30127 }, { "ipAddressV4": "CoAAMg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IimGKA==", "port": 30128 }, { "ipAddressV4": "CoAANQ==", "port": 30128 }] }] } | |||||||||
| node2 | 7.240s | 2025-09-24 15:17:05.323 | 41 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/2/ConsistencyTestLog.csv | |
| node2 | 7.241s | 2025-09-24 15:17:05.324 | 42 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node2 | 7.257s | 2025-09-24 15:17:05.340 | 43 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 06db5e8df89d8b5fb2275ff57c4243fca3ffbf2fcf4d068fbccfe056959778ce6d40eda9a9327b0b19f780c02bdc025f (root) ConsistencyTestingToolState / swarm-spider-unaware-neither 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise | |||||||||
| node2 | 7.489s | 2025-09-24 15:17:05.572 | 45 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node2 | 7.496s | 2025-09-24 15:17:05.579 | 46 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node2 | 7.502s | 2025-09-24 15:17:05.585 | 47 | INFO | STARTUP | <<start-node-2>> | ConsistencyTestingToolMain: | init called in Main for node 2. | |
| node2 | 7.503s | 2025-09-24 15:17:05.586 | 48 | INFO | STARTUP | <<start-node-2>> | SwirldsPlatform: | Starting platform 2 | |
| node2 | 7.504s | 2025-09-24 15:17:05.587 | 49 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node2 | 7.508s | 2025-09-24 15:17:05.591 | 50 | INFO | STARTUP | <<start-node-2>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node2 | 7.510s | 2025-09-24 15:17:05.593 | 51 | INFO | STARTUP | <<start-node-2>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node2 | 7.510s | 2025-09-24 15:17:05.593 | 52 | INFO | STARTUP | <<start-node-2>> | InputWireChecks: | All input wires have been bound. | |
| node2 | 7.512s | 2025-09-24 15:17:05.595 | 53 | WARN | STARTUP | <<start-node-2>> | PcesFileTracker: | No preconsensus event files available | |
| node2 | 7.512s | 2025-09-24 15:17:05.595 | 54 | INFO | STARTUP | <<start-node-2>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node2 | 7.514s | 2025-09-24 15:17:05.597 | 55 | INFO | STARTUP | <<start-node-2>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node2 | 7.516s | 2025-09-24 15:17:05.599 | 56 | INFO | STARTUP | <<app: appMain 2>> | ConsistencyTestingToolMain: | run called in Main. | |
| node2 | 7.517s | 2025-09-24 15:17:05.600 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | DefaultStatusStateMachine: | Platform spent 199.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node2 | 7.524s | 2025-09-24 15:17:05.607 | 58 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | DefaultStatusStateMachine: | Platform spent 5.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node0 | 7.947s | 2025-09-24 15:17:06.030 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 7.952s | 2025-09-24 15:17:06.035 | 32 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node0 | 7.960s | 2025-09-24 15:17:06.043 | 33 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node0 | 7.977s | 2025-09-24 15:17:06.060 | 34 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 7.980s | 2025-09-24 15:17:06.063 | 35 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 9.107s | 2025-09-24 15:17:07.190 | 59 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting1.csv' ] | |
| node1 | 9.109s | 2025-09-24 15:17:07.192 | 60 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node0 | 9.127s | 2025-09-24 15:17:07.210 | 36 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26017849] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=362080, randomLong=-3314450088465430382, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=25310, randomLong=4817567997691358675, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1860430, data=35, exception=null] OS Health Check Report - Complete (took 1035 ms) | |||||||||
| node0 | 9.169s | 2025-09-24 15:17:07.252 | 37 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node0 | 9.181s | 2025-09-24 15:17:07.264 | 38 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node0 | 9.189s | 2025-09-24 15:17:07.272 | 39 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node0 | 9.306s | 2025-09-24 15:17:07.389 | 40 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IjqFwA==", "port": 30124 }, { "ipAddressV4": "CoAANA==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IjksRQ==", "port": 30125 }, { "ipAddressV4": "CoAAMw==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Iodj+Q==", "port": 30126 }, { "ipAddressV4": "CoAAMQ==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IkIKcw==", "port": 30127 }, { "ipAddressV4": "CoAAMg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IimGKA==", "port": 30128 }, { "ipAddressV4": "CoAANQ==", "port": 30128 }] }] } | |||||||||
| node0 | 9.338s | 2025-09-24 15:17:07.421 | 41 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/0/ConsistencyTestLog.csv | |
| node0 | 9.339s | 2025-09-24 15:17:07.422 | 42 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node0 | 9.360s | 2025-09-24 15:17:07.443 | 43 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 06db5e8df89d8b5fb2275ff57c4243fca3ffbf2fcf4d068fbccfe056959778ce6d40eda9a9327b0b19f780c02bdc025f (root) ConsistencyTestingToolState / swarm-spider-unaware-neither 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise | |||||||||
| node0 | 9.644s | 2025-09-24 15:17:07.727 | 45 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node0 | 9.650s | 2025-09-24 15:17:07.733 | 46 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node0 | 9.656s | 2025-09-24 15:17:07.739 | 47 | INFO | STARTUP | <<start-node-0>> | ConsistencyTestingToolMain: | init called in Main for node 0. | |
| node0 | 9.657s | 2025-09-24 15:17:07.740 | 48 | INFO | STARTUP | <<start-node-0>> | SwirldsPlatform: | Starting platform 0 | |
| node0 | 9.658s | 2025-09-24 15:17:07.741 | 49 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node0 | 9.662s | 2025-09-24 15:17:07.745 | 50 | INFO | STARTUP | <<start-node-0>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node0 | 9.664s | 2025-09-24 15:17:07.747 | 51 | INFO | STARTUP | <<start-node-0>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node0 | 9.664s | 2025-09-24 15:17:07.747 | 52 | INFO | STARTUP | <<start-node-0>> | InputWireChecks: | All input wires have been bound. | |
| node0 | 9.666s | 2025-09-24 15:17:07.749 | 53 | WARN | STARTUP | <<start-node-0>> | PcesFileTracker: | No preconsensus event files available | |
| node0 | 9.666s | 2025-09-24 15:17:07.749 | 54 | INFO | STARTUP | <<start-node-0>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node0 | 9.668s | 2025-09-24 15:17:07.751 | 55 | INFO | STARTUP | <<start-node-0>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node0 | 9.669s | 2025-09-24 15:17:07.752 | 56 | INFO | STARTUP | <<app: appMain 0>> | ConsistencyTestingToolMain: | run called in Main. | |
| node0 | 9.673s | 2025-09-24 15:17:07.756 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | DefaultStatusStateMachine: | Platform spent 237.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node0 | 9.692s | 2025-09-24 15:17:07.775 | 58 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | DefaultStatusStateMachine: | Platform spent 15.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node3 | 9.809s | 2025-09-24 15:17:07.892 | 59 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting3.csv' ] | |
| node3 | 9.811s | 2025-09-24 15:17:07.894 | 60 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node4 | 9.996s | 2025-09-24 15:17:08.079 | 59 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ] | |
| node4 | 9.999s | 2025-09-24 15:17:08.082 | 60 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node2 | 10.518s | 2025-09-24 15:17:08.601 | 59 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting2.csv' ] | |
| node2 | 10.522s | 2025-09-24 15:17:08.605 | 60 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node0 | 12.671s | 2025-09-24 15:17:10.754 | 59 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting0.csv' ] | |
| node0 | 12.674s | 2025-09-24 15:17:10.757 | 60 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node1 | 16.202s | 2025-09-24 15:17:14.285 | 61 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | DefaultStatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node3 | 16.904s | 2025-09-24 15:17:14.987 | 61 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | DefaultStatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node4 | 17.092s | 2025-09-24 15:17:15.175 | 61 | INFO | PLATFORM_STATUS | <platformForkJoinThread-7> | DefaultStatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node2 | 17.612s | 2025-09-24 15:17:15.695 | 61 | INFO | PLATFORM_STATUS | <platformForkJoinThread-8> | DefaultStatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node3 | 19.443s | 2025-09-24 15:17:17.526 | 63 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node4 | 19.481s | 2025-09-24 15:17:17.564 | 63 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node1 | 19.496s | 2025-09-24 15:17:17.579 | 62 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | DefaultStatusStateMachine: | Platform spent 3.3 s in CHECKING. Now in ACTIVE | |
| node1 | 19.498s | 2025-09-24 15:17:17.581 | 64 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node2 | 19.626s | 2025-09-24 15:17:17.709 | 63 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node0 | 19.767s | 2025-09-24 15:17:17.850 | 61 | INFO | PLATFORM_STATUS | <platformForkJoinThread-7> | DefaultStatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node0 | 19.807s | 2025-09-24 15:17:17.890 | 63 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node4 | 19.996s | 2025-09-24 15:17:18.079 | 78 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1 | |
| node3 | 19.997s | 2025-09-24 15:17:18.080 | 78 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1 | |
| node4 | 19.998s | 2025-09-24 15:17:18.081 | 79 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node3 | 19.999s | 2025-09-24 15:17:18.082 | 79 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node4 | 20.011s | 2025-09-24 15:17:18.094 | 90 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | DefaultStatusStateMachine: | Platform spent 2.9 s in CHECKING. Now in ACTIVE | |
| node3 | 20.012s | 2025-09-24 15:17:18.095 | 91 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | DefaultStatusStateMachine: | Platform spent 3.1 s in CHECKING. Now in ACTIVE | |
| node0 | 20.067s | 2025-09-24 15:17:18.150 | 78 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1 | |
| node0 | 20.070s | 2025-09-24 15:17:18.153 | 79 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node1 | 20.179s | 2025-09-24 15:17:18.262 | 79 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1 | |
| node2 | 20.180s | 2025-09-24 15:17:18.263 | 78 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1 | |
| node1 | 20.182s | 2025-09-24 15:17:18.265 | 81 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node2 | 20.182s | 2025-09-24 15:17:18.265 | 80 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node2 | 20.182s | 2025-09-24 15:17:18.265 | 81 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | DefaultStatusStateMachine: | Platform spent 2.6 s in CHECKING. Now in ACTIVE | |
| node4 | 20.251s | 2025-09-24 15:17:18.334 | 117 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node3 | 20.252s | 2025-09-24 15:17:18.335 | 117 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node3 | 20.254s | 2025-09-24 15:17:18.337 | 118 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-09-24T15:17:15.185455115Z Next consensus number: 1 Legacy running event hash: 9a62b56f732afbe42e12a058d91987314f539e377354851f04635ec947c846e280327643b5e26b4153d8f08761c97e91 Legacy running event mnemonic: record-fall-grab-logic Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: 5de8fbaf91c4e7642693bb826bd6caee5fbeaf061cce4e95a58206600b3d033809244b811cfd4aaf40f40d92d180dc15 (root) ConsistencyTestingToolState / rug-mechanic-shy-bean 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 auto-moon-limit-hood 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 1931016930446315563 /3 almost-atom-novel-view 4 StringLeaf 1 /4 wreck-whale-old-bottom | |||||||||
| node4 | 20.254s | 2025-09-24 15:17:18.337 | 118 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-09-24T15:17:15.185455115Z Next consensus number: 1 Legacy running event hash: 9a62b56f732afbe42e12a058d91987314f539e377354851f04635ec947c846e280327643b5e26b4153d8f08761c97e91 Legacy running event mnemonic: record-fall-grab-logic Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: 5de8fbaf91c4e7642693bb826bd6caee5fbeaf061cce4e95a58206600b3d033809244b811cfd4aaf40f40d92d180dc15 (root) ConsistencyTestingToolState / rug-mechanic-shy-bean 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 auto-moon-limit-hood 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 1931016930446315563 /3 almost-atom-novel-view 4 StringLeaf 1 /4 wreck-whale-old-bottom | |||||||||
| node3 | 20.287s | 2025-09-24 15:17:18.370 | 119 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+17+14.965433115Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 20.288s | 2025-09-24 15:17:18.371 | 120 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+17+14.965433115Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 20.288s | 2025-09-24 15:17:18.371 | 121 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 20.289s | 2025-09-24 15:17:18.372 | 122 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 20.292s | 2025-09-24 15:17:18.375 | 119 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+17+14.441776136Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 20.293s | 2025-09-24 15:17:18.376 | 120 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+17+14.441776136Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 20.293s | 2025-09-24 15:17:18.376 | 121 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 20.294s | 2025-09-24 15:17:18.377 | 123 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 20.294s | 2025-09-24 15:17:18.377 | 122 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 20.299s | 2025-09-24 15:17:18.382 | 123 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 20.375s | 2025-09-24 15:17:18.458 | 116 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node0 | 20.379s | 2025-09-24 15:17:18.462 | 117 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-09-24T15:17:15.185455115Z Next consensus number: 1 Legacy running event hash: 9a62b56f732afbe42e12a058d91987314f539e377354851f04635ec947c846e280327643b5e26b4153d8f08761c97e91 Legacy running event mnemonic: record-fall-grab-logic Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: 5de8fbaf91c4e7642693bb826bd6caee5fbeaf061cce4e95a58206600b3d033809244b811cfd4aaf40f40d92d180dc15 (root) ConsistencyTestingToolState / rug-mechanic-shy-bean 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 auto-moon-limit-hood 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 1931016930446315563 /3 almost-atom-novel-view 4 StringLeaf 1 /4 wreck-whale-old-bottom | |||||||||
| node1 | 20.427s | 2025-09-24 15:17:18.510 | 117 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node0 | 20.428s | 2025-09-24 15:17:18.511 | 118 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+17+14.781569826Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 20.429s | 2025-09-24 15:17:18.512 | 119 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+17+14.781569826Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 20.429s | 2025-09-24 15:17:18.512 | 120 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 20.430s | 2025-09-24 15:17:18.513 | 118 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-09-24T15:17:15.185455115Z Next consensus number: 1 Legacy running event hash: 9a62b56f732afbe42e12a058d91987314f539e377354851f04635ec947c846e280327643b5e26b4153d8f08761c97e91 Legacy running event mnemonic: record-fall-grab-logic Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: 5de8fbaf91c4e7642693bb826bd6caee5fbeaf061cce4e95a58206600b3d033809244b811cfd4aaf40f40d92d180dc15 (root) ConsistencyTestingToolState / rug-mechanic-shy-bean 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 auto-moon-limit-hood 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 1931016930446315563 /3 almost-atom-novel-view 4 StringLeaf 1 /4 wreck-whale-old-bottom | |||||||||
| node0 | 20.431s | 2025-09-24 15:17:18.514 | 121 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 20.438s | 2025-09-24 15:17:18.521 | 122 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 20.462s | 2025-09-24 15:17:18.545 | 119 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+17+14.326215315Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 20.463s | 2025-09-24 15:17:18.546 | 120 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+17+14.326215315Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 20.463s | 2025-09-24 15:17:18.546 | 121 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 20.464s | 2025-09-24 15:17:18.547 | 122 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 20.470s | 2025-09-24 15:17:18.553 | 123 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 20.477s | 2025-09-24 15:17:18.560 | 117 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node2 | 20.480s | 2025-09-24 15:17:18.563 | 118 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-09-24T15:17:15.185455115Z Next consensus number: 1 Legacy running event hash: 9a62b56f732afbe42e12a058d91987314f539e377354851f04635ec947c846e280327643b5e26b4153d8f08761c97e91 Legacy running event mnemonic: record-fall-grab-logic Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: 5de8fbaf91c4e7642693bb826bd6caee5fbeaf061cce4e95a58206600b3d033809244b811cfd4aaf40f40d92d180dc15 (root) ConsistencyTestingToolState / rug-mechanic-shy-bean 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 auto-moon-limit-hood 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 1931016930446315563 /3 almost-atom-novel-view 4 StringLeaf 1 /4 wreck-whale-old-bottom | |||||||||
| node2 | 20.520s | 2025-09-24 15:17:18.603 | 119 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+17+14.652643610Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 20.521s | 2025-09-24 15:17:18.604 | 120 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+17+14.652643610Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 20.521s | 2025-09-24 15:17:18.604 | 121 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 20.523s | 2025-09-24 15:17:18.606 | 122 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 20.529s | 2025-09-24 15:17:18.612 | 123 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 21.487s | 2025-09-24 15:17:19.570 | 137 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | DefaultStatusStateMachine: | Platform spent 1.7 s in CHECKING. Now in ACTIVE | |
| node3 | 1m 3.231s | 2025-09-24 15:18:01.314 | 813 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 66 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 1m 3.354s | 2025-09-24 15:18:01.437 | 819 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 66 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 1m 3.366s | 2025-09-24 15:18:01.449 | 827 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 66 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 1m 3.455s | 2025-09-24 15:18:01.538 | 833 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 66 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 1m 3.508s | 2025-09-24 15:18:01.591 | 809 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 66 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 1m 3.789s | 2025-09-24 15:18:01.872 | 840 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 66 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/66 | |
| node1 | 1m 3.791s | 2025-09-24 15:18:01.874 | 841 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 66 | |
| node0 | 1m 3.802s | 2025-09-24 15:18:01.885 | 836 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 66 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/66 | |
| node0 | 1m 3.803s | 2025-09-24 15:18:01.886 | 837 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 66 | |
| node2 | 1m 3.872s | 2025-09-24 15:18:01.955 | 812 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 66 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/66 | |
| node2 | 1m 3.873s | 2025-09-24 15:18:01.956 | 813 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 66 | |
| node1 | 1m 3.874s | 2025-09-24 15:18:01.957 | 872 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 66 | |
| node1 | 1m 3.877s | 2025-09-24 15:18:01.960 | 873 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 66 Timestamp: 2025-09-24T15:18:00.024109524Z Next consensus number: 1801 Legacy running event hash: dae048f3e62bd5f474c3d3d86670f3645b74bd26570b46577fa6d4ecd4200eae47f8bc52f4b095d180a4058f9500fd84 Legacy running event mnemonic: piano-bunker-deny-dose Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1566185131 Root hash: 6f5bc04d17f1fdc049a96fb573e77293cdf5b53f9fd764f57e3342e8645d3ab570db1f3eedf19aedd29b3fda5e7e293a (root) ConsistencyTestingToolState / twin-wait-wreck-elite 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 twin-ability-space-hamster 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 1087636984456271537 /3 ritual-van-ensure-auto 4 StringLeaf 66 /4 control-draft-tuition-odor | |||||||||
| node1 | 1m 3.886s | 2025-09-24 15:18:01.969 | 874 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+17+14.326215315Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 1m 3.887s | 2025-09-24 15:18:01.970 | 875 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 39 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+17+14.326215315Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 1m 3.887s | 2025-09-24 15:18:01.970 | 876 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 1m 3.888s | 2025-09-24 15:18:01.971 | 877 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 1m 3.889s | 2025-09-24 15:18:01.972 | 878 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 66 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/66 {"round":66,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/66/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 1m 3.908s | 2025-09-24 15:18:01.991 | 872 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 66 | |
| node0 | 1m 3.911s | 2025-09-24 15:18:01.994 | 873 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 66 Timestamp: 2025-09-24T15:18:00.024109524Z Next consensus number: 1801 Legacy running event hash: dae048f3e62bd5f474c3d3d86670f3645b74bd26570b46577fa6d4ecd4200eae47f8bc52f4b095d180a4058f9500fd84 Legacy running event mnemonic: piano-bunker-deny-dose Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1566185131 Root hash: 6f5bc04d17f1fdc049a96fb573e77293cdf5b53f9fd764f57e3342e8645d3ab570db1f3eedf19aedd29b3fda5e7e293a (root) ConsistencyTestingToolState / twin-wait-wreck-elite 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 twin-ability-space-hamster 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 1087636984456271537 /3 ritual-van-ensure-auto 4 StringLeaf 66 /4 control-draft-tuition-odor | |||||||||
| node0 | 1m 3.921s | 2025-09-24 15:18:02.004 | 874 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+17+14.781569826Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 1m 3.922s | 2025-09-24 15:18:02.005 | 875 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 39 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+17+14.781569826Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 1m 3.922s | 2025-09-24 15:18:02.005 | 876 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 1m 3.924s | 2025-09-24 15:18:02.007 | 877 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 1m 3.924s | 2025-09-24 15:18:02.007 | 878 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 66 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/66 {"round":66,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/66/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 1m 3.963s | 2025-09-24 15:18:02.046 | 848 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 66 | |
| node2 | 1m 3.965s | 2025-09-24 15:18:02.048 | 849 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 66 Timestamp: 2025-09-24T15:18:00.024109524Z Next consensus number: 1801 Legacy running event hash: dae048f3e62bd5f474c3d3d86670f3645b74bd26570b46577fa6d4ecd4200eae47f8bc52f4b095d180a4058f9500fd84 Legacy running event mnemonic: piano-bunker-deny-dose Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1566185131 Root hash: 6f5bc04d17f1fdc049a96fb573e77293cdf5b53f9fd764f57e3342e8645d3ab570db1f3eedf19aedd29b3fda5e7e293a (root) ConsistencyTestingToolState / twin-wait-wreck-elite 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 twin-ability-space-hamster 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 1087636984456271537 /3 ritual-van-ensure-auto 4 StringLeaf 66 /4 control-draft-tuition-odor | |||||||||
| node2 | 1m 3.975s | 2025-09-24 15:18:02.058 | 850 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+17+14.652643610Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 1m 3.975s | 2025-09-24 15:18:02.058 | 851 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 39 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+17+14.652643610Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 1m 3.976s | 2025-09-24 15:18:02.059 | 852 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 1m 3.977s | 2025-09-24 15:18:02.060 | 853 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 1m 3.978s | 2025-09-24 15:18:02.061 | 854 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 66 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/66 {"round":66,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/66/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 1m 4.014s | 2025-09-24 15:18:02.097 | 832 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 66 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/66 | |
| node4 | 1m 4.014s | 2025-09-24 15:18:02.097 | 833 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 66 | |
| node4 | 1m 4.098s | 2025-09-24 15:18:02.181 | 864 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 66 | |
| node4 | 1m 4.100s | 2025-09-24 15:18:02.183 | 865 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 66 Timestamp: 2025-09-24T15:18:00.024109524Z Next consensus number: 1801 Legacy running event hash: dae048f3e62bd5f474c3d3d86670f3645b74bd26570b46577fa6d4ecd4200eae47f8bc52f4b095d180a4058f9500fd84 Legacy running event mnemonic: piano-bunker-deny-dose Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1566185131 Root hash: 6f5bc04d17f1fdc049a96fb573e77293cdf5b53f9fd764f57e3342e8645d3ab570db1f3eedf19aedd29b3fda5e7e293a (root) ConsistencyTestingToolState / twin-wait-wreck-elite 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 twin-ability-space-hamster 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 1087636984456271537 /3 ritual-van-ensure-auto 4 StringLeaf 66 /4 control-draft-tuition-odor | |||||||||
| node4 | 1m 4.109s | 2025-09-24 15:18:02.192 | 866 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+17+14.441776136Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 1m 4.109s | 2025-09-24 15:18:02.192 | 867 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 39 File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+17+14.441776136Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 1m 4.109s | 2025-09-24 15:18:02.192 | 868 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 1m 4.111s | 2025-09-24 15:18:02.194 | 869 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 1m 4.112s | 2025-09-24 15:18:02.195 | 870 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 66 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/66 {"round":66,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/66/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 1m 4.130s | 2025-09-24 15:18:02.213 | 816 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 66 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/66 | |
| node3 | 1m 4.131s | 2025-09-24 15:18:02.214 | 817 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 66 | |
| node3 | 1m 4.214s | 2025-09-24 15:18:02.297 | 860 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/7 for round 66 | |
| node3 | 1m 4.216s | 2025-09-24 15:18:02.299 | 861 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 66 Timestamp: 2025-09-24T15:18:00.024109524Z Next consensus number: 1801 Legacy running event hash: dae048f3e62bd5f474c3d3d86670f3645b74bd26570b46577fa6d4ecd4200eae47f8bc52f4b095d180a4058f9500fd84 Legacy running event mnemonic: piano-bunker-deny-dose Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1566185131 Root hash: 6f5bc04d17f1fdc049a96fb573e77293cdf5b53f9fd764f57e3342e8645d3ab570db1f3eedf19aedd29b3fda5e7e293a (root) ConsistencyTestingToolState / twin-wait-wreck-elite 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 twin-ability-space-hamster 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 1087636984456271537 /3 ritual-van-ensure-auto 4 StringLeaf 66 /4 control-draft-tuition-odor | |||||||||
| node3 | 1m 4.229s | 2025-09-24 15:18:02.312 | 862 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+17+14.965433115Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 1m 4.229s | 2025-09-24 15:18:02.312 | 863 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 39 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+17+14.965433115Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 1m 4.229s | 2025-09-24 15:18:02.312 | 864 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 1m 4.231s | 2025-09-24 15:18:02.314 | 865 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 1m 4.231s | 2025-09-24 15:18:02.314 | 866 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 66 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/66 {"round":66,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/66/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 2m 3.233s | 2025-09-24 15:19:01.316 | 1988 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 163 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 2m 3.278s | 2025-09-24 15:19:01.361 | 1924 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 163 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 2m 3.289s | 2025-09-24 15:19:01.372 | 1974 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 163 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 2m 3.348s | 2025-09-24 15:19:01.431 | 1982 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 163 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 2m 3.380s | 2025-09-24 15:19:01.463 | 1942 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 163 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 2m 3.643s | 2025-09-24 15:19:01.726 | 1955 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 163 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/163 | |
| node3 | 2m 3.644s | 2025-09-24 15:19:01.727 | 1956 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 163 | |
| node2 | 2m 3.685s | 2025-09-24 15:19:01.768 | 1937 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 163 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/163 | |
| node2 | 2m 3.685s | 2025-09-24 15:19:01.768 | 1938 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 163 | |
| node3 | 2m 3.728s | 2025-09-24 15:19:01.811 | 1987 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 163 | |
| node3 | 2m 3.730s | 2025-09-24 15:19:01.813 | 1988 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 163 Timestamp: 2025-09-24T15:19:00.243515Z Next consensus number: 4307 Legacy running event hash: acdeb8ee2d881a649ee2821c74f32b187fa15b44db301b881f295b737604feedf1d0576353a3829727a47914b291886c Legacy running event mnemonic: gold-time-rain-erode Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1690611168 Root hash: 136a0f8ed117187e102aa96eea48eaf65141e5cae509b8b560e4ff0b1b349ef1a008762aa426fe87a8cffd9c9b996001 (root) ConsistencyTestingToolState / hood-vault-tag-average 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lottery-report-fall-fragile 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf -7634847136242266068 /3 pumpkin-monitor-exchange-nurse 4 StringLeaf 163 /4 brother-riot-pass-tiger | |||||||||
| node3 | 2m 3.737s | 2025-09-24 15:19:01.820 | 1989 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+17+14.965433115Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 2m 3.737s | 2025-09-24 15:19:01.820 | 1990 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 136 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+17+14.965433115Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 2m 3.737s | 2025-09-24 15:19:01.820 | 1991 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 2m 3.740s | 2025-09-24 15:19:01.823 | 1992 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 2m 3.741s | 2025-09-24 15:19:01.824 | 1993 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 163 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/163 {"round":163,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/163/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 2m 3.743s | 2025-09-24 15:19:01.826 | 1987 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 163 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/163 | |
| node4 | 2m 3.744s | 2025-09-24 15:19:01.827 | 1988 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 163 | |
| node1 | 2m 3.755s | 2025-09-24 15:19:01.838 | 1985 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 163 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/163 | |
| node1 | 2m 3.756s | 2025-09-24 15:19:01.839 | 1986 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 163 | |
| node2 | 2m 3.783s | 2025-09-24 15:19:01.866 | 1973 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 163 | |
| node2 | 2m 3.785s | 2025-09-24 15:19:01.868 | 1974 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 163 Timestamp: 2025-09-24T15:19:00.243515Z Next consensus number: 4307 Legacy running event hash: acdeb8ee2d881a649ee2821c74f32b187fa15b44db301b881f295b737604feedf1d0576353a3829727a47914b291886c Legacy running event mnemonic: gold-time-rain-erode Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1690611168 Root hash: 136a0f8ed117187e102aa96eea48eaf65141e5cae509b8b560e4ff0b1b349ef1a008762aa426fe87a8cffd9c9b996001 (root) ConsistencyTestingToolState / hood-vault-tag-average 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lottery-report-fall-fragile 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf -7634847136242266068 /3 pumpkin-monitor-exchange-nurse 4 StringLeaf 163 /4 brother-riot-pass-tiger | |||||||||
| node2 | 2m 3.794s | 2025-09-24 15:19:01.877 | 1975 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+17+14.652643610Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 2m 3.794s | 2025-09-24 15:19:01.877 | 1976 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 136 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+17+14.652643610Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 2m 3.794s | 2025-09-24 15:19:01.877 | 1977 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 2m 3.798s | 2025-09-24 15:19:01.881 | 1978 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 2m 3.798s | 2025-09-24 15:19:01.881 | 1979 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 163 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/163 {"round":163,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/163/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 2m 3.832s | 2025-09-24 15:19:01.915 | 1991 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 163 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/163 | |
| node0 | 2m 3.833s | 2025-09-24 15:19:01.916 | 1992 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 163 | |
| node4 | 2m 3.841s | 2025-09-24 15:19:01.924 | 2023 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 163 | |
| node4 | 2m 3.843s | 2025-09-24 15:19:01.926 | 2024 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 163 Timestamp: 2025-09-24T15:19:00.243515Z Next consensus number: 4307 Legacy running event hash: acdeb8ee2d881a649ee2821c74f32b187fa15b44db301b881f295b737604feedf1d0576353a3829727a47914b291886c Legacy running event mnemonic: gold-time-rain-erode Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1690611168 Root hash: 136a0f8ed117187e102aa96eea48eaf65141e5cae509b8b560e4ff0b1b349ef1a008762aa426fe87a8cffd9c9b996001 (root) ConsistencyTestingToolState / hood-vault-tag-average 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lottery-report-fall-fragile 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf -7634847136242266068 /3 pumpkin-monitor-exchange-nurse 4 StringLeaf 163 /4 brother-riot-pass-tiger | |||||||||
| node1 | 2m 3.849s | 2025-09-24 15:19:01.932 | 2017 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 163 | |
| node1 | 2m 3.851s | 2025-09-24 15:19:01.934 | 2018 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 163 Timestamp: 2025-09-24T15:19:00.243515Z Next consensus number: 4307 Legacy running event hash: acdeb8ee2d881a649ee2821c74f32b187fa15b44db301b881f295b737604feedf1d0576353a3829727a47914b291886c Legacy running event mnemonic: gold-time-rain-erode Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1690611168 Root hash: 136a0f8ed117187e102aa96eea48eaf65141e5cae509b8b560e4ff0b1b349ef1a008762aa426fe87a8cffd9c9b996001 (root) ConsistencyTestingToolState / hood-vault-tag-average 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lottery-report-fall-fragile 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf -7634847136242266068 /3 pumpkin-monitor-exchange-nurse 4 StringLeaf 163 /4 brother-riot-pass-tiger | |||||||||
| node4 | 2m 3.852s | 2025-09-24 15:19:01.935 | 2025 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+17+14.441776136Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 2m 3.852s | 2025-09-24 15:19:01.935 | 2026 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 136 File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+17+14.441776136Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 2m 3.852s | 2025-09-24 15:19:01.935 | 2027 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 2m 3.855s | 2025-09-24 15:19:01.938 | 2028 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 2m 3.856s | 2025-09-24 15:19:01.939 | 2029 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 163 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/163 {"round":163,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/163/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 2m 3.861s | 2025-09-24 15:19:01.944 | 2019 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+17+14.326215315Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 2m 3.861s | 2025-09-24 15:19:01.944 | 2020 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 136 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+17+14.326215315Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 2m 3.861s | 2025-09-24 15:19:01.944 | 2021 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 2m 3.865s | 2025-09-24 15:19:01.948 | 2022 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 2m 3.866s | 2025-09-24 15:19:01.949 | 2023 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 163 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/163 {"round":163,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/163/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 2m 3.942s | 2025-09-24 15:19:02.025 | 2035 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/13 for round 163 | |
| node0 | 2m 3.944s | 2025-09-24 15:19:02.027 | 2036 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 163 Timestamp: 2025-09-24T15:19:00.243515Z Next consensus number: 4307 Legacy running event hash: acdeb8ee2d881a649ee2821c74f32b187fa15b44db301b881f295b737604feedf1d0576353a3829727a47914b291886c Legacy running event mnemonic: gold-time-rain-erode Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1690611168 Root hash: 136a0f8ed117187e102aa96eea48eaf65141e5cae509b8b560e4ff0b1b349ef1a008762aa426fe87a8cffd9c9b996001 (root) ConsistencyTestingToolState / hood-vault-tag-average 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lottery-report-fall-fragile 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf -7634847136242266068 /3 pumpkin-monitor-exchange-nurse 4 StringLeaf 163 /4 brother-riot-pass-tiger | |||||||||
| node0 | 2m 3.954s | 2025-09-24 15:19:02.037 | 2037 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+17+14.781569826Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 2m 3.955s | 2025-09-24 15:19:02.038 | 2038 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 136 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+17+14.781569826Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 2m 3.955s | 2025-09-24 15:19:02.038 | 2039 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 2m 3.958s | 2025-09-24 15:19:02.041 | 2040 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 2m 3.959s | 2025-09-24 15:19:02.042 | 2041 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 163 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/163 {"round":163,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/163/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 3m 4.057s | 2025-09-24 15:20:02.140 | 3242 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 269 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 3m 4.124s | 2025-09-24 15:20:02.207 | 3210 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 269 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 3m 4.189s | 2025-09-24 15:20:02.272 | 3158 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 269 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 3m 4.255s | 2025-09-24 15:20:02.338 | 3198 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 269 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 3m 4.259s | 2025-09-24 15:20:02.342 | 3214 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 269 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 3m 4.422s | 2025-09-24 15:20:02.505 | 3204 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 269 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/269 | |
| node3 | 3m 4.423s | 2025-09-24 15:20:02.506 | 3205 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 269 | |
| node2 | 3m 4.467s | 2025-09-24 15:20:02.550 | 3164 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 269 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/269 | |
| node2 | 3m 4.468s | 2025-09-24 15:20:02.551 | 3165 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 269 | |
| node3 | 3m 4.510s | 2025-09-24 15:20:02.593 | 3241 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 269 | |
| node3 | 3m 4.512s | 2025-09-24 15:20:02.595 | 3242 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 269 Timestamp: 2025-09-24T15:20:00.434072Z Next consensus number: 6839 Legacy running event hash: 3e3bd3d33277e1faee5a2fb56e9c35ecf878f28a3d3ac30ebc32936aa719fc9b37a352c8917feff1dfd1b42e317d6c63 Legacy running event mnemonic: shuffle-mad-exclude-school Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1116739942 Root hash: 230bf847f0e20c395357844fdc241be2bcf29e0a113afe44dea2e74ee16739af107340746e152f20aa1569e6602572ed (root) ConsistencyTestingToolState / gesture-warm-toilet-mango 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 shop-snow-drum-ginger 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 7005339615738190496 /3 buddy-tornado-hockey-quick 4 StringLeaf 269 /4 convince-example-flush-bicycle | |||||||||
| node3 | 3m 4.519s | 2025-09-24 15:20:02.602 | 3243 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+17+14.965433115Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 3m 4.519s | 2025-09-24 15:20:02.602 | 3244 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 242 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+17+14.965433115Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 3m 4.519s | 2025-09-24 15:20:02.602 | 3245 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 3m 4.524s | 2025-09-24 15:20:02.607 | 3246 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 3m 4.525s | 2025-09-24 15:20:02.608 | 3247 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 269 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/269 {"round":269,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/269/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 3m 4.538s | 2025-09-24 15:20:02.621 | 3216 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 269 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/269 | |
| node0 | 3m 4.539s | 2025-09-24 15:20:02.622 | 3217 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 269 | |
| node2 | 3m 4.558s | 2025-09-24 15:20:02.641 | 3205 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 269 | |
| node2 | 3m 4.560s | 2025-09-24 15:20:02.643 | 3206 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 269 Timestamp: 2025-09-24T15:20:00.434072Z Next consensus number: 6839 Legacy running event hash: 3e3bd3d33277e1faee5a2fb56e9c35ecf878f28a3d3ac30ebc32936aa719fc9b37a352c8917feff1dfd1b42e317d6c63 Legacy running event mnemonic: shuffle-mad-exclude-school Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1116739942 Root hash: 230bf847f0e20c395357844fdc241be2bcf29e0a113afe44dea2e74ee16739af107340746e152f20aa1569e6602572ed (root) ConsistencyTestingToolState / gesture-warm-toilet-mango 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 shop-snow-drum-ginger 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 7005339615738190496 /3 buddy-tornado-hockey-quick 4 StringLeaf 269 /4 convince-example-flush-bicycle | |||||||||
| node2 | 3m 4.566s | 2025-09-24 15:20:02.649 | 3207 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+17+14.652643610Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 3m 4.567s | 2025-09-24 15:20:02.650 | 3208 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 242 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+17+14.652643610Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 3m 4.567s | 2025-09-24 15:20:02.650 | 3209 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 3m 4.572s | 2025-09-24 15:20:02.655 | 3210 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 3m 4.572s | 2025-09-24 15:20:02.655 | 3211 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 269 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/269 {"round":269,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/269/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 3m 4.650s | 2025-09-24 15:20:02.733 | 3253 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 269 | |
| node0 | 3m 4.654s | 2025-09-24 15:20:02.737 | 3254 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 269 Timestamp: 2025-09-24T15:20:00.434072Z Next consensus number: 6839 Legacy running event hash: 3e3bd3d33277e1faee5a2fb56e9c35ecf878f28a3d3ac30ebc32936aa719fc9b37a352c8917feff1dfd1b42e317d6c63 Legacy running event mnemonic: shuffle-mad-exclude-school Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1116739942 Root hash: 230bf847f0e20c395357844fdc241be2bcf29e0a113afe44dea2e74ee16739af107340746e152f20aa1569e6602572ed (root) ConsistencyTestingToolState / gesture-warm-toilet-mango 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 shop-snow-drum-ginger 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 7005339615738190496 /3 buddy-tornado-hockey-quick 4 StringLeaf 269 /4 convince-example-flush-bicycle | |||||||||
| node0 | 3m 4.665s | 2025-09-24 15:20:02.748 | 3255 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+17+14.781569826Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 3m 4.665s | 2025-09-24 15:20:02.748 | 3264 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 242 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+17+14.781569826Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 3m 4.666s | 2025-09-24 15:20:02.749 | 3265 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 3m 4.672s | 2025-09-24 15:20:02.755 | 3266 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 3m 4.673s | 2025-09-24 15:20:02.756 | 3267 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 269 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/269 {"round":269,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/269/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 3m 4.753s | 2025-09-24 15:20:02.836 | 3248 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 269 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/269 | |
| node4 | 3m 4.753s | 2025-09-24 15:20:02.836 | 3220 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 269 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/269 | |
| node1 | 3m 4.754s | 2025-09-24 15:20:02.837 | 3249 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 269 | |
| node4 | 3m 4.754s | 2025-09-24 15:20:02.837 | 3221 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 269 | |
| node1 | 3m 4.843s | 2025-09-24 15:20:02.926 | 3285 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 269 | |
| node1 | 3m 4.845s | 2025-09-24 15:20:02.928 | 3286 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 269 Timestamp: 2025-09-24T15:20:00.434072Z Next consensus number: 6839 Legacy running event hash: 3e3bd3d33277e1faee5a2fb56e9c35ecf878f28a3d3ac30ebc32936aa719fc9b37a352c8917feff1dfd1b42e317d6c63 Legacy running event mnemonic: shuffle-mad-exclude-school Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1116739942 Root hash: 230bf847f0e20c395357844fdc241be2bcf29e0a113afe44dea2e74ee16739af107340746e152f20aa1569e6602572ed (root) ConsistencyTestingToolState / gesture-warm-toilet-mango 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 shop-snow-drum-ginger 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 7005339615738190496 /3 buddy-tornado-hockey-quick 4 StringLeaf 269 /4 convince-example-flush-bicycle | |||||||||
| node4 | 3m 4.848s | 2025-09-24 15:20:02.931 | 3261 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/19 for round 269 | |
| node4 | 3m 4.850s | 2025-09-24 15:20:02.933 | 3262 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 269 Timestamp: 2025-09-24T15:20:00.434072Z Next consensus number: 6839 Legacy running event hash: 3e3bd3d33277e1faee5a2fb56e9c35ecf878f28a3d3ac30ebc32936aa719fc9b37a352c8917feff1dfd1b42e317d6c63 Legacy running event mnemonic: shuffle-mad-exclude-school Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1116739942 Root hash: 230bf847f0e20c395357844fdc241be2bcf29e0a113afe44dea2e74ee16739af107340746e152f20aa1569e6602572ed (root) ConsistencyTestingToolState / gesture-warm-toilet-mango 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 shop-snow-drum-ginger 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 7005339615738190496 /3 buddy-tornado-hockey-quick 4 StringLeaf 269 /4 convince-example-flush-bicycle | |||||||||
| node1 | 3m 4.851s | 2025-09-24 15:20:02.934 | 3287 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+17+14.326215315Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 3m 4.852s | 2025-09-24 15:20:02.935 | 3288 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 242 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+17+14.326215315Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 3m 4.852s | 2025-09-24 15:20:02.935 | 3289 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 3m 4.857s | 2025-09-24 15:20:02.940 | 3290 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 3m 4.857s | 2025-09-24 15:20:02.940 | 3291 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 269 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/269 {"round":269,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/269/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 3m 4.858s | 2025-09-24 15:20:02.941 | 3263 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+17+14.441776136Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 3m 4.858s | 2025-09-24 15:20:02.941 | 3264 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 242 File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+17+14.441776136Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 3m 4.858s | 2025-09-24 15:20:02.941 | 3265 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 3m 4.863s | 2025-09-24 15:20:02.946 | 3266 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 3m 4.864s | 2025-09-24 15:20:02.947 | 3267 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 269 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/269 {"round":269,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/269/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 3m 12.253s | 2025-09-24 15:20:10.336 | 3328 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 2 to 4>> | NetworkUtils: | Connection broken: 2 -> 4 | |
| java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T15:20:10.334522787Z at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:258) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T15:20:10.334522787Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:148) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:304) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.sendAndReceiveEvents(ShadowgraphSynchronizer.java:241) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:201) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:113) at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:254) ... 7 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295) at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275) at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readEventsINeed$9(SyncUtils.java:278) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:146) ... 12 more | |||||||||
| node1 | 3m 12.254s | 2025-09-24 15:20:10.337 | 3414 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 1 to 4>> | NetworkUtils: | Connection broken: 1 -> 4 | |
| java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295) at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275) at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144) at com.swirlds.platform.heartbeats.HeartbeatPeerProtocol.initiateHeartbeat(HeartbeatPeerProtocol.java:112) at com.swirlds.platform.heartbeats.HeartbeatPeerProtocol.runProtocol(HeartbeatPeerProtocol.java:156) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) | |||||||||
| node3 | 3m 12.254s | 2025-09-24 15:20:10.337 | 3368 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 3 to 4>> | NetworkUtils: | Connection broken: 3 -> 4 | |
| java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T15:20:10.335010208Z at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:258) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T15:20:10.335010208Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:148) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:304) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:180) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:113) at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:254) ... 7 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295) at java.base/java.io.DataInputStream.readBoolean(DataInputStream.java:255) at org.hiero.base.io.streams.AugmentedDataInputStream.readBoolean(AugmentedDataInputStream.java:137) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readMyTipsTheyHave$7(SyncUtils.java:163) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:146) ... 11 more | |||||||||
| node0 | 3m 12.257s | 2025-09-24 15:20:10.340 | 3382 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 0 to 4>> | NetworkUtils: | Connection broken: 0 -> 4 | |
| java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T15:20:10.335703218Z at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:258) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T15:20:10.335703218Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:148) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:304) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:148) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:113) at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:254) ... 7 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readLong(DataInputStream.java:407) at org.hiero.base.io.streams.AugmentedDataInputStream.readLong(AugmentedDataInputStream.java:186) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.deserializeEventWindow(SyncUtils.java:640) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readTheirTipsAndEventWindow$3(SyncUtils.java:104) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:146) ... 11 more | |||||||||
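The two SOCKET_EXCEPTIONS warnings above show peers 3 and 0 both losing their sync connections to node 4 within a few milliseconds of each other; node 4 restarts and reconnects later in this log. As a quick triage step, the hypothetical helper below tallies the NetworkUtils "Connection broken: X -> Y" warnings per peer pair from a plain-text export of this log. It is only a sketch and is not part of the platform.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Hypothetical helper: counts "Connection broken: X -> Y" warnings per peer pair in a log export. */
public class BrokenConnectionTally {
    // Matches the NetworkUtils message seen above, e.g. "Connection broken: 3 -> 4".
    private static final Pattern BROKEN = Pattern.compile("Connection broken: (\\d+) -> (\\d+)");

    public static void main(String[] args) throws IOException {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : Files.readAllLines(Path.of(args[0]))) {
            Matcher m = BROKEN.matcher(line);
            if (m.find()) {
                counts.merge(m.group(1) + " -> " + m.group(2), 1, Integer::sum);
            }
        }
        // For this section we would expect at least "0 -> 4" and "3 -> 4" to appear once each.
        counts.forEach((pair, n) -> System.out.println(pair + " : " + n));
    }
}
```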
| node3 | 4m 3.870s | 2025-09-24 15:21:01.953 | 4318 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 364 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 4m 3.925s | 2025-09-24 15:21:02.008 | 4316 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 364 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 4m 3.927s | 2025-09-24 15:21:02.010 | 4360 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 364 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 4m 3.999s | 2025-09-24 15:21:02.082 | 4264 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 364 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 4m 4.334s | 2025-09-24 15:21:02.417 | 4363 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 364 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/364 | |
| node1 | 4m 4.335s | 2025-09-24 15:21:02.418 | 4364 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 364 | |
| node0 | 4m 4.355s | 2025-09-24 15:21:02.438 | 4319 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 364 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/364 | |
| node0 | 4m 4.356s | 2025-09-24 15:21:02.439 | 4320 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 364 | |
| node2 | 4m 4.405s | 2025-09-24 15:21:02.488 | 4267 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 364 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/364 | |
| node2 | 4m 4.405s | 2025-09-24 15:21:02.488 | 4268 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 364 | |
| node1 | 4m 4.424s | 2025-09-24 15:21:02.507 | 4395 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 364 | |
| node1 | 4m 4.425s | 2025-09-24 15:21:02.508 | 4396 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 364 Timestamp: 2025-09-24T15:21:00.528638241Z Next consensus number: 8549 Legacy running event hash: f5e7bb2381ed47451f5623a5991332925c9cef3de94ede75427dee70da76cdbfa2681708ce08fd97a3b865190a7478a1 Legacy running event mnemonic: drip-option-second-trim Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -2072500243 Root hash: 80830674e785f90c3b9d1e0bec4d1f50db7af1cd1f0e8c8012cd992df527e0703e32d27773d79ce6088a1560f4878e6e (root) ConsistencyTestingToolState / use-soldier-author-teach 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lonely-hip-latin-mass 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 7593879030459194058 /3 cram-warrior-kitten-popular 4 StringLeaf 364 /4 wheat-video-effort-venture | |||||||||
| node1 | 4m 4.432s | 2025-09-24 15:21:02.515 | 4397 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+17+14.326215315Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 4m 4.433s | 2025-09-24 15:21:02.516 | 4398 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 337 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+17+14.326215315Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 4m 4.433s | 2025-09-24 15:21:02.516 | 4399 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 4m 4.439s | 2025-09-24 15:21:02.522 | 4400 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 4m 4.439s | 2025-09-24 15:21:02.522 | 4401 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 364 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/364 {"round":364,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/364/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 4m 4.465s | 2025-09-24 15:21:02.548 | 4351 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 364 | |
| node0 | 4m 4.467s | 2025-09-24 15:21:02.550 | 4360 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 364 Timestamp: 2025-09-24T15:21:00.528638241Z Next consensus number: 8549 Legacy running event hash: f5e7bb2381ed47451f5623a5991332925c9cef3de94ede75427dee70da76cdbfa2681708ce08fd97a3b865190a7478a1 Legacy running event mnemonic: drip-option-second-trim Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -2072500243 Root hash: 80830674e785f90c3b9d1e0bec4d1f50db7af1cd1f0e8c8012cd992df527e0703e32d27773d79ce6088a1560f4878e6e (root) ConsistencyTestingToolState / use-soldier-author-teach 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lonely-hip-latin-mass 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 7593879030459194058 /3 cram-warrior-kitten-popular 4 StringLeaf 364 /4 wheat-video-effort-venture | |||||||||
| node0 | 4m 4.476s | 2025-09-24 15:21:02.559 | 4361 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+17+14.781569826Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 4m 4.476s | 2025-09-24 15:21:02.559 | 4362 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 337 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+17+14.781569826Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 4m 4.476s | 2025-09-24 15:21:02.559 | 4363 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 4m 4.483s | 2025-09-24 15:21:02.566 | 4364 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 4m 4.484s | 2025-09-24 15:21:02.567 | 4365 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 364 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/364 {"round":364,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/364/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 4m 4.497s | 2025-09-24 15:21:02.580 | 4299 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 364 | |
| node2 | 4m 4.499s | 2025-09-24 15:21:02.582 | 4300 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 364 Timestamp: 2025-09-24T15:21:00.528638241Z Next consensus number: 8549 Legacy running event hash: f5e7bb2381ed47451f5623a5991332925c9cef3de94ede75427dee70da76cdbfa2681708ce08fd97a3b865190a7478a1 Legacy running event mnemonic: drip-option-second-trim Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -2072500243 Root hash: 80830674e785f90c3b9d1e0bec4d1f50db7af1cd1f0e8c8012cd992df527e0703e32d27773d79ce6088a1560f4878e6e (root) ConsistencyTestingToolState / use-soldier-author-teach 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lonely-hip-latin-mass 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 7593879030459194058 /3 cram-warrior-kitten-popular 4 StringLeaf 364 /4 wheat-video-effort-venture | |||||||||
| node2 | 4m 4.509s | 2025-09-24 15:21:02.592 | 4301 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+17+14.652643610Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 4m 4.509s | 2025-09-24 15:21:02.592 | 4302 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 337 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+17+14.652643610Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 4m 4.509s | 2025-09-24 15:21:02.592 | 4303 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 4m 4.516s | 2025-09-24 15:21:02.599 | 4312 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 4m 4.516s | 2025-09-24 15:21:02.599 | 4313 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 364 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/364 {"round":364,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/364/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 4m 4.706s | 2025-09-24 15:21:02.789 | 4331 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 364 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/364 | |
| node3 | 4m 4.707s | 2025-09-24 15:21:02.790 | 4332 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 364 | |
| node3 | 4m 4.810s | 2025-09-24 15:21:02.893 | 4375 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/25 for round 364 | |
| node3 | 4m 4.812s | 2025-09-24 15:21:02.895 | 4376 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 364 Timestamp: 2025-09-24T15:21:00.528638241Z Next consensus number: 8549 Legacy running event hash: f5e7bb2381ed47451f5623a5991332925c9cef3de94ede75427dee70da76cdbfa2681708ce08fd97a3b865190a7478a1 Legacy running event mnemonic: drip-option-second-trim Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -2072500243 Root hash: 80830674e785f90c3b9d1e0bec4d1f50db7af1cd1f0e8c8012cd992df527e0703e32d27773d79ce6088a1560f4878e6e (root) ConsistencyTestingToolState / use-soldier-author-teach 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 lonely-hip-latin-mass 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 7593879030459194058 /3 cram-warrior-kitten-popular 4 StringLeaf 364 /4 wheat-video-effort-venture | |||||||||
| node3 | 4m 4.818s | 2025-09-24 15:21:02.901 | 4377 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+17+14.965433115Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 4m 4.819s | 2025-09-24 15:21:02.902 | 4378 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 337 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+17+14.965433115Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 4m 4.819s | 2025-09-24 15:21:02.902 | 4379 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 4m 4.825s | 2025-09-24 15:21:02.908 | 4380 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 4m 4.826s | 2025-09-24 15:21:02.909 | 4381 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 364 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/364 {"round":364,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/364/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
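Each node finishes the round 364 PERIODIC_SNAPSHOT by logging a "Finished writing state" line that ends in a StateSavedToDiskPayload JSON trailer naming the round, freeze flag, reason, and directory. A minimal sketch for indexing those trailers from a plain-text export is below; the field names mirror the trailers above, while the class name and file handling are assumptions.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Hypothetical helper: lists saved-state snapshots from StateSavedToDiskPayload trailers. */
public class SavedStateIndex {
    // The JSON trailer seen on "Finished writing state" lines, e.g.
    // {"round":364,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///..."}
    private static final Pattern PAYLOAD = Pattern.compile(
            "\\{\"round\":(\\d+),\"freezeState\":(true|false),\"reason\":\"([A-Z_]+)\",\"directory\":\"([^\"]+)\"\\}");

    public static void main(String[] args) throws IOException {
        for (String line : Files.readAllLines(Path.of(args[0]))) {
            Matcher m = PAYLOAD.matcher(line);
            if (m.find()) {
                System.out.printf("round=%s freeze=%s reason=%s dir=%s%n",
                        m.group(1), m.group(2), m.group(3), m.group(4));
            }
        }
    }
}
```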
| node0 | 5m 3.883s | 2025-09-24 15:22:01.966 | 5340 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 448 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 5m 3.956s | 2025-09-24 15:22:02.039 | 5398 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 448 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 5m 3.990s | 2025-09-24 15:22:02.073 | 5342 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 448 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 5m 4.012s | 2025-09-24 15:22:02.095 | 5238 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 448 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 5m 4.367s | 2025-09-24 15:22:02.450 | 5343 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 448 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/448 | |
| node0 | 5m 4.368s | 2025-09-24 15:22:02.451 | 5344 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 448 | |
| node3 | 5m 4.397s | 2025-09-24 15:22:02.480 | 5345 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 448 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/448 | |
| node3 | 5m 4.398s | 2025-09-24 15:22:02.481 | 5346 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 448 | |
| node2 | 5m 4.435s | 2025-09-24 15:22:02.518 | 5241 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 448 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/448 | |
| node2 | 5m 4.436s | 2025-09-24 15:22:02.519 | 5242 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 448 | |
| node0 | 5m 4.464s | 2025-09-24 15:22:02.547 | 5375 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 448 | |
| node0 | 5m 4.467s | 2025-09-24 15:22:02.550 | 5384 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 448 Timestamp: 2025-09-24T15:22:00.656363Z Next consensus number: 10070 Legacy running event hash: 7f0e192d83dd9a92ccd9c7ad37a4952aad077572c6763218f7174d7c130b438c5fd747c6c99683a794987942736a9996 Legacy running event mnemonic: tide-certain-love-punch Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1257187961 Root hash: be50d205876cfe7e557f0917f8dfe0bf72d00de3e716f3345cdf5584879fa83b712dfed317239a22e7cd809b59e4034b (root) ConsistencyTestingToolState / current-satoshi-barely-core 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 caution-nature-mirror-kangaroo 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf -1995101235906102677 /3 ride-borrow-anchor-elephant 4 StringLeaf 448 /4 surround-save-reveal-satoshi | |||||||||
| node1 | 5m 4.467s | 2025-09-24 15:22:02.550 | 5411 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 448 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/448 | |
| node1 | 5m 4.468s | 2025-09-24 15:22:02.551 | 5412 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 448 | |
| node0 | 5m 4.474s | 2025-09-24 15:22:02.557 | 5385 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+17+14.781569826Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 5m 4.474s | 2025-09-24 15:22:02.557 | 5386 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 421 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+17+14.781569826Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 5m 4.474s | 2025-09-24 15:22:02.557 | 5387 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 5m 4.482s | 2025-09-24 15:22:02.565 | 5388 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 5m 4.482s | 2025-09-24 15:22:02.565 | 5389 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 448 | |
| node0 | 5m 4.483s | 2025-09-24 15:22:02.566 | 5389 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 448 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/448 {"round":448,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/448/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 5m 4.484s | 2025-09-24 15:22:02.567 | 5390 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1 | |
| node3 | 5m 4.484s | 2025-09-24 15:22:02.567 | 5390 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 448 Timestamp: 2025-09-24T15:22:00.656363Z Next consensus number: 10070 Legacy running event hash: 7f0e192d83dd9a92ccd9c7ad37a4952aad077572c6763218f7174d7c130b438c5fd747c6c99683a794987942736a9996 Legacy running event mnemonic: tide-certain-love-punch Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1257187961 Root hash: be50d205876cfe7e557f0917f8dfe0bf72d00de3e716f3345cdf5584879fa83b712dfed317239a22e7cd809b59e4034b (root) ConsistencyTestingToolState / current-satoshi-barely-core 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 caution-nature-mirror-kangaroo 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf -1995101235906102677 /3 ride-borrow-anchor-elephant 4 StringLeaf 448 /4 surround-save-reveal-satoshi | |||||||||
| node3 | 5m 4.490s | 2025-09-24 15:22:02.573 | 5391 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+17+14.965433115Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 5m 4.490s | 2025-09-24 15:22:02.573 | 5392 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 421 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+17+14.965433115Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 5m 4.490s | 2025-09-24 15:22:02.573 | 5393 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 5m 4.498s | 2025-09-24 15:22:02.581 | 5394 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 5m 4.498s | 2025-09-24 15:22:02.581 | 5395 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 448 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/448 {"round":448,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/448/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 5m 4.500s | 2025-09-24 15:22:02.583 | 5396 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1 | |
| node2 | 5m 4.533s | 2025-09-24 15:22:02.616 | 5277 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 448 | |
| node2 | 5m 4.535s | 2025-09-24 15:22:02.618 | 5278 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 448 Timestamp: 2025-09-24T15:22:00.656363Z Next consensus number: 10070 Legacy running event hash: 7f0e192d83dd9a92ccd9c7ad37a4952aad077572c6763218f7174d7c130b438c5fd747c6c99683a794987942736a9996 Legacy running event mnemonic: tide-certain-love-punch Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1257187961 Root hash: be50d205876cfe7e557f0917f8dfe0bf72d00de3e716f3345cdf5584879fa83b712dfed317239a22e7cd809b59e4034b (root) ConsistencyTestingToolState / current-satoshi-barely-core 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 caution-nature-mirror-kangaroo 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf -1995101235906102677 /3 ride-borrow-anchor-elephant 4 StringLeaf 448 /4 surround-save-reveal-satoshi | |||||||||
| node2 | 5m 4.543s | 2025-09-24 15:22:02.626 | 5279 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+17+14.652643610Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 5m 4.543s | 2025-09-24 15:22:02.626 | 5280 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 421 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+17+14.652643610Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 5m 4.543s | 2025-09-24 15:22:02.626 | 5281 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 5m 4.551s | 2025-09-24 15:22:02.634 | 5455 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/31 for round 448 | |
| node2 | 5m 4.551s | 2025-09-24 15:22:02.634 | 5282 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 5m 4.552s | 2025-09-24 15:22:02.635 | 5283 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 448 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/448 {"round":448,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/448/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 5m 4.553s | 2025-09-24 15:22:02.636 | 5456 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 448 Timestamp: 2025-09-24T15:22:00.656363Z Next consensus number: 10070 Legacy running event hash: 7f0e192d83dd9a92ccd9c7ad37a4952aad077572c6763218f7174d7c130b438c5fd747c6c99683a794987942736a9996 Legacy running event mnemonic: tide-certain-love-punch Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1257187961 Root hash: be50d205876cfe7e557f0917f8dfe0bf72d00de3e716f3345cdf5584879fa83b712dfed317239a22e7cd809b59e4034b (root) ConsistencyTestingToolState / current-satoshi-barely-core 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 caution-nature-mirror-kangaroo 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf -1995101235906102677 /3 ride-borrow-anchor-elephant 4 StringLeaf 448 /4 surround-save-reveal-satoshi | |||||||||
| node2 | 5m 4.554s | 2025-09-24 15:22:02.637 | 5284 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1 | |
| node1 | 5m 4.559s | 2025-09-24 15:22:02.642 | 5457 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+17+14.326215315Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 5m 4.560s | 2025-09-24 15:22:02.643 | 5458 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 421 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+17+14.326215315Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 5m 4.560s | 2025-09-24 15:22:02.643 | 5459 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 5m 4.567s | 2025-09-24 15:22:02.650 | 5460 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 5m 4.568s | 2025-09-24 15:22:02.651 | 5461 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 448 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/448 {"round":448,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/448/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 5m 4.570s | 2025-09-24 15:22:02.653 | 5462 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1 | |
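For the round 448 snapshot the same Started/Finished pairs appear again, followed by each node deleting its oldest saved round (the .../123/1 directories). One way to gauge how long a node spends writing a snapshot is to diff the timestamp column between the matching "Started writing" and "Finished writing" lines; the sketch below assumes the pipe-delimited layout shown in this export and is only illustrative.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Hypothetical helper: wall-clock time from "Started writing" to "Finished writing" per node and round. */
public class SnapshotWriteLatency {
    private static final DateTimeFormatter TS = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");
    private static final Pattern STARTED = Pattern.compile("Started writing round (\\d+) state to disk");
    private static final Pattern FINISHED = Pattern.compile("Finished writing state for round (\\d+) to disk");

    public static void main(String[] args) throws IOException {
        Map<String, LocalDateTime> startTimes = new HashMap<>();
        for (String line : Files.readAllLines(Path.of(args[0]))) {
            String[] cols = line.split("\\|");
            if (cols.length < 10) continue;                          // skip continuation rows
            if (!cols[3].trim().matches("\\d{4}-.*")) continue;      // skip rows without a timestamp column
            String node = cols[1].trim();                            // e.g. "node1"
            LocalDateTime ts = LocalDateTime.parse(cols[3].trim(), TS);
            Matcher s = STARTED.matcher(line);
            Matcher f = FINISHED.matcher(line);
            if (s.find()) {
                startTimes.put(node + "/" + s.group(1), ts);
            } else if (f.find()) {
                LocalDateTime start = startTimes.remove(node + "/" + f.group(1));
                if (start != null) {
                    System.out.println(node + " round " + f.group(1) + ": "
                            + Duration.between(start, ts).toMillis() + " ms");
                }
            }
        }
    }
}
```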
| node4 | 5m 51.814s | 2025-09-24 15:22:49.897 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node4 | 5m 51.914s | 2025-09-24 15:22:49.997 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node4 | 5m 51.933s | 2025-09-24 15:22:50.016 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 5m 52.065s | 2025-09-24 15:22:50.148 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [4] are set to run locally | |
| node4 | 5m 52.072s | 2025-09-24 15:22:50.155 | 5 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | Registering ConsistencyTestingToolState with ConstructableRegistry | |
| node4 | 5m 52.086s | 2025-09-24 15:22:50.169 | 6 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 5m 52.537s | 2025-09-24 15:22:50.620 | 9 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | ConsistencyTestingToolState is registered with ConstructableRegistry | |
| node4 | 5m 52.537s | 2025-09-24 15:22:50.620 | 10 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node4 | 5m 53.601s | 2025-09-24 15:22:51.684 | 11 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1063ms | |
| node4 | 5m 53.609s | 2025-09-24 15:22:51.692 | 12 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node4 | 5m 53.612s | 2025-09-24 15:22:51.695 | 13 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 5m 53.654s | 2025-09-24 15:22:51.737 | 14 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node4 | 5m 53.720s | 2025-09-24 15:22:51.803 | 15 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node4 | 5m 53.721s | 2025-09-24 15:22:51.804 | 16 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node4 | 5m 55.815s | 2025-09-24 15:22:53.898 | 17 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node4 | 5m 55.905s | 2025-09-24 15:22:53.988 | 20 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5m 55.912s | 2025-09-24 15:22:53.995 | 21 | INFO | STARTUP | <main> | StartupStateUtils: | The following saved states were found on disk: | |
| - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/269/SignedState.swh - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/163/SignedState.swh - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/66/SignedState.swh - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1/SignedState.swh | |||||||||
| node4 | 5m 55.913s | 2025-09-24 15:22:53.996 | 22 | INFO | STARTUP | <main> | StartupStateUtils: | Loading latest state from disk. | |
| node4 | 5m 55.913s | 2025-09-24 15:22:53.996 | 23 | INFO | STARTUP | <main> | StartupStateUtils: | Loading signed state from disk: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/269/SignedState.swh | |
| node4 | 5m 55.918s | 2025-09-24 15:22:54.001 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 5m 55.922s | 2025-09-24 15:22:54.005 | 25 | INFO | STATE_TO_DISK | <main> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp | |
| node4 | 5m 56.049s | 2025-09-24 15:22:54.132 | 36 | INFO | STARTUP | <main> | StartupStateUtils: | Loaded state's hash is the same as when it was saved. | |
| node4 | 5m 56.052s | 2025-09-24 15:22:54.135 | 37 | INFO | STARTUP | <main> | StartupStateUtils: | Platform has loaded a saved state {"round":269,"consensusTimestamp":"2025-09-24T15:20:00.434072Z"} [com.swirlds.logging.legacy.payload.SavedStateLoadedPayload] | |
| node4 | 5m 56.054s | 2025-09-24 15:22:54.137 | 40 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5m 56.056s | 2025-09-24 15:22:54.139 | 43 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node4 | 5m 56.058s | 2025-09-24 15:22:54.141 | 44 | INFO | STARTUP | <main> | AddressBookInitializer: | Using the loaded state's address book and weight values. | |
| node4 | 5m 56.064s | 2025-09-24 15:22:54.147 | 45 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5m 56.066s | 2025-09-24 15:22:54.149 | 46 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5m 57.095s | 2025-09-24 15:22:55.178 | 47 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26365084] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=75500, randomLong=8812426293917010976, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=7270, randomLong=7121686447365774372, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1000850, data=35, exception=null] OS Health Check Report - Complete (took 1017 ms) | |||||||||
| node4 | 5m 57.122s | 2025-09-24 15:22:55.205 | 48 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node4 | 5m 57.211s | 2025-09-24 15:22:55.294 | 49 | INFO | STARTUP | <main> | PcesUtilities: | Span compaction completed for data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+17+14.441776136Z_seq0_minr1_maxr501_orgn0.pces, new upper bound is 281 | |
| node4 | 5m 57.213s | 2025-09-24 15:22:55.296 | 50 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node4 | 5m 57.218s | 2025-09-24 15:22:55.301 | 51 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node4 | 5m 57.290s | 2025-09-24 15:22:55.373 | 52 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IjqFwA==", "port": 30124 }, { "ipAddressV4": "CoAANA==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "IjksRQ==", "port": 30125 }, { "ipAddressV4": "CoAAMw==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Iodj+Q==", "port": 30126 }, { "ipAddressV4": "CoAAMQ==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IkIKcw==", "port": 30127 }, { "ipAddressV4": "CoAAMg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IimGKA==", "port": 30128 }, { "ipAddressV4": "CoAANQ==", "port": 30128 }] }] } | |||||||||
| node4 | 5m 57.309s | 2025-09-24 15:22:55.392 | 53 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | State initialized with state long 7005339615738190496. | |
| node4 | 5m 57.310s | 2025-09-24 15:22:55.393 | 54 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | State initialized with 269 rounds handled. | |
| node4 | 5m 57.310s | 2025-09-24 15:22:55.393 | 55 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv | |
| node4 | 5m 57.310s | 2025-09-24 15:22:55.393 | 56 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Log file found. Parsing previous history | |
| node4 | 5m 58.075s | 2025-09-24 15:22:56.158 | 57 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 269 Timestamp: 2025-09-24T15:20:00.434072Z Next consensus number: 6839 Legacy running event hash: 3e3bd3d33277e1faee5a2fb56e9c35ecf878f28a3d3ac30ebc32936aa719fc9b37a352c8917feff1dfd1b42e317d6c63 Legacy running event mnemonic: shuffle-mad-exclude-school Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1116739942 Root hash: 230bf847f0e20c395357844fdc241be2bcf29e0a113afe44dea2e74ee16739af107340746e152f20aa1569e6602572ed (root) ConsistencyTestingToolState / gesture-warm-toilet-mango 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 shop-snow-drum-ginger 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 7005339615738190496 /3 buddy-tornado-hockey-quick 4 StringLeaf 269 /4 convince-example-flush-bicycle | |||||||||
| node4 | 5m 58.331s | 2025-09-24 15:22:56.414 | 59 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 3e3bd3d33277e1faee5a2fb56e9c35ecf878f28a3d3ac30ebc32936aa719fc9b37a352c8917feff1dfd1b42e317d6c63 | |
| node4 | 5m 58.344s | 2025-09-24 15:22:56.427 | 60 | INFO | STARTUP | <platformForkJoinThread-4> | Shadowgraph: | Shadowgraph starting from expiration threshold 242 | |
| node4 | 5m 58.351s | 2025-09-24 15:22:56.434 | 62 | INFO | STARTUP | <<start-node-4>> | ConsistencyTestingToolMain: | init called in Main for node 4. | |
| node4 | 5m 58.353s | 2025-09-24 15:22:56.436 | 63 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | Starting platform 4 | |
| node4 | 5m 58.354s | 2025-09-24 15:22:56.437 | 64 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node4 | 5m 58.358s | 2025-09-24 15:22:56.441 | 65 | INFO | STARTUP | <<start-node-4>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node4 | 5m 58.359s | 2025-09-24 15:22:56.442 | 66 | INFO | STARTUP | <<start-node-4>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node4 | 5m 58.360s | 2025-09-24 15:22:56.443 | 67 | INFO | STARTUP | <<start-node-4>> | InputWireChecks: | All input wires have been bound. | |
| node4 | 5m 58.362s | 2025-09-24 15:22:56.445 | 68 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | replaying preconsensus event stream starting at 242 | |
| node4 | 5m 58.367s | 2025-09-24 15:22:56.450 | 69 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | DefaultStatusStateMachine: | Platform spent 194.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node4 | 5m 58.579s | 2025-09-24 15:22:56.662 | 70 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:4 H:6d0d1edee512 BR:267), num remaining: 4 | |
| node4 | 5m 58.581s | 2025-09-24 15:22:56.664 | 71 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:2 H:4e43ac8312eb BR:267), num remaining: 3 | |
| node4 | 5m 58.581s | 2025-09-24 15:22:56.664 | 72 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:1 H:2c519d6c54e0 BR:267), num remaining: 2 | |
| node4 | 5m 58.582s | 2025-09-24 15:22:56.665 | 73 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:0 H:772352f4b430 BR:267), num remaining: 1 | |
| node4 | 5m 58.583s | 2025-09-24 15:22:56.666 | 74 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:3 H:4dca11cf0a1c BR:267), num remaining: 0 | |
| node4 | 5m 58.664s | 2025-09-24 15:22:56.747 | 109 | INFO | STARTUP | <<start-node-4>> | PcesReplayer: | Replayed 1,082 preconsensus events with max birth round 281. These events contained 2,758 transactions. 11 rounds reached consensus spanning 7.5 seconds of consensus time. The latest round to reach consensus is round 280. Replay took 301.0 milliseconds. | |
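The PcesReplayer summary above (1,082 events, 2,758 transactions, 301.0 ms of replay) allows a quick back-of-envelope throughput check; the snippet below is just arithmetic on that one line, not an additional measurement.

```java
/** Back-of-envelope check on the PcesReplayer summary above (1,082 events, 2,758 transactions, 301 ms). */
public class ReplayThroughput {
    public static void main(String[] args) {
        double seconds = 301.0 / 1000.0;
        System.out.printf("events/sec: %.0f%n", 1_082 / seconds);   // roughly 3,595 events per second
        System.out.printf("txns/sec:   %.0f%n", 2_758 / seconds);   // roughly 9,163 transactions per second
    }
}
```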
| node4 | 5m 58.668s | 2025-09-24 15:22:56.751 | 111 | INFO | STARTUP | <<app: appMain 4>> | ConsistencyTestingToolMain: | run called in Main. | |
| node4 | 5m 58.669s | 2025-09-24 15:22:56.752 | 112 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | DefaultStatusStateMachine: | Platform spent 298.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node4 | 5m 59.490s | 2025-09-24 15:22:57.573 | 177 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | DefaultStatusStateMachine: | Platform spent 820.0 ms in OBSERVING. Now in BEHIND | |
| node4 | 5m 59.491s | 2025-09-24 15:22:57.574 | 178 | INFO | RECONNECT | <platformForkJoinThread-6> | ReconnectController: | Starting ReconnectController | |
| node4 | 5m 59.492s | 2025-09-24 15:22:57.575 | 179 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectPlatformHelperImpl: | Preparing for reconnect, stopping gossip | |
| node4 | 5m 59.543s | 2025-09-24 15:22:57.626 | 180 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectPlatformHelperImpl: | Preparing for reconnect, start clearing queues | |
| node4 | 5m 59.544s | 2025-09-24 15:22:57.627 | 181 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectPlatformHelperImpl: | Queues have been cleared | |
| node4 | 5m 59.545s | 2025-09-24 15:22:57.628 | 182 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectController: | waiting for reconnect connection | |
| node4 | 5m 59.545s | 2025-09-24 15:22:57.628 | 183 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectController: | acquired reconnect connection | |
| node2 | 5m 59.731s | 2025-09-24 15:22:57.814 | 6239 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectTeacher: | Starting reconnect in the role of the sender {"receiving":false,"nodeId":2,"otherNodeId":4,"round":531} [com.swirlds.logging.legacy.payload.ReconnectStartPayload] | |
| node2 | 5m 59.732s | 2025-09-24 15:22:57.815 | 6240 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectTeacher: | The following state will be sent to the learner: | |
| Round: 531 Timestamp: 2025-09-24T15:22:56.005029488Z Next consensus number: 11467 Legacy running event hash: 2f31a60b7f99dc1eeada6300bbfa205d8cb7de969e2ffbb16ceb308b93a289eef59119f20fcc3b064fbb12ebaf10eb1b Legacy running event mnemonic: tower-dirt-veteran-labor Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1796007678 Root hash: 7ecb7cb10a6a5d5d2cafc8c5d145dd09eaf2664b57f0825df63c0a9786b1369f1fef6ef94f64879d37b4267757441b2c (root) ConsistencyTestingToolState / sunny-normal-essence-produce 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 quiz-egg-game-puppy 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 5714296999644095256 /3 enhance-know-improve-fuel 4 StringLeaf 531 /4 inner-box-movie-dash | |||||||||
| node2 | 5m 59.732s | 2025-09-24 15:22:57.815 | 6241 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectTeacher: | Sending signatures from nodes 0, 2, 3 (signing weight = 37500000000/50000000000) for state hash 7ecb7cb10a6a5d5d2cafc8c5d145dd09eaf2664b57f0825df63c0a9786b1369f1fef6ef94f64879d37b4267757441b2c | |
| node2 | 5m 59.732s | 2025-09-24 15:22:57.815 | 6242 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectTeacher: | Starting synchronization in the role of the sender. | |
| node2 | 5m 59.737s | 2025-09-24 15:22:57.820 | 6243 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | TeachingSynchronizer: | sending tree rooted at com.swirlds.demo.consistency.ConsistencyTestingToolState with route [] | |
| node2 | 5m 59.745s | 2025-09-24 15:22:57.828 | 6244 | INFO | RECONNECT | <<work group teaching-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@4c50ec38 start run() | |
| node4 | 5m 59.795s | 2025-09-24 15:22:57.878 | 184 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectSyncHelper: | Starting reconnect in role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":2,"round":280} [com.swirlds.logging.legacy.payload.ReconnectStartPayload] | |
| node4 | 5m 59.797s | 2025-09-24 15:22:57.880 | 185 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectLearner: | Receiving signed state signatures | |
| node4 | 5m 59.803s | 2025-09-24 15:22:57.886 | 186 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectLearner: | Received signatures from nodes 0, 2, 3 | |
| node4 | 5m 59.806s | 2025-09-24 15:22:57.889 | 187 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | learner calls receiveTree() | |
| node4 | 5m 59.806s | 2025-09-24 15:22:57.889 | 188 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | synchronizing tree | |
| node4 | 5m 59.807s | 2025-09-24 15:22:57.890 | 189 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | receiving tree rooted at com.swirlds.demo.consistency.ConsistencyTestingToolState with route [] | |
| node4 | 5m 59.813s | 2025-09-24 15:22:57.896 | 190 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@3fe9c972 start run() | |
| node4 | 5m 59.817s | 2025-09-24 15:22:57.900 | 191 | INFO | STARTUP | <<work group learning-synchronizer: async-input-stream #0>> | ConsistencyTestingToolState: | New State Constructed. | |
| node2 | 5m 59.898s | 2025-09-24 15:22:57.981 | 6263 | INFO | RECONNECT | <<work group teaching-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@4c50ec38 finish run() | |
| node2 | 5m 59.899s | 2025-09-24 15:22:57.982 | 6264 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | TeachingSynchronizer: | finished sending tree | |
| node2 | 5m 59.899s | 2025-09-24 15:22:57.982 | 6265 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | TeachingSynchronizer: | sending tree rooted at com.swirlds.virtualmap.VirtualMap with route [2] | |
| node2 | 5m 59.900s | 2025-09-24 15:22:57.983 | 6266 | INFO | RECONNECT | <<work group teaching-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@150dba43 start run() | |
| node4 | 6.000m | 2025-09-24 15:22:58.093 | 213 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushTask: | learner thread finished the learning loop for the current subtree | |
| node4 | 6.000m | 2025-09-24 15:22:58.093 | 214 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushTask: | learner thread closed input, output, and view for the current subtree | |
| node4 | 6.000m | 2025-09-24 15:22:58.094 | 215 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@3fe9c972 finish run() | |
| node4 | 6.000m | 2025-09-24 15:22:58.095 | 216 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | received tree rooted at com.swirlds.demo.consistency.ConsistencyTestingToolState with route [] | |
| node4 | 6.000m | 2025-09-24 15:22:58.096 | 217 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | receiving tree rooted at com.swirlds.virtualmap.VirtualMap with route [2] | |
| node4 | 6.000m | 2025-09-24 15:22:58.102 | 218 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@1634e51 start run() | |
| node4 | 6.001m | 2025-09-24 15:22:58.161 | 219 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | ReconnectNodeRemover: | setPathInformation(): firstLeafPath: 1 -> 1, lastLeafPath: 1 -> 1 | |
| node4 | 6.001m | 2025-09-24 15:22:58.162 | 220 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | ReconnectNodeRemover: | setPathInformation(): done | |
| node4 | 6.001m | 2025-09-24 15:22:58.164 | 221 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushTask: | learner thread finished the learning loop for the current subtree | |
| node4 | 6.001m | 2025-09-24 15:22:58.164 | 222 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushVirtualTreeView: | call nodeRemover.allNodesReceived() | |
| node4 | 6.001m | 2025-09-24 15:22:58.165 | 223 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | ReconnectNodeRemover: | allNodesReceived() | |
| node4 | 6.001m | 2025-09-24 15:22:58.165 | 224 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | ReconnectNodeRemover: | allNodesReceived(): done | |
| node4 | 6.001m | 2025-09-24 15:22:58.166 | 225 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushVirtualTreeView: | call root.endLearnerReconnect() | |
| node4 | 6.001m | 2025-09-24 15:22:58.166 | 226 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | call reconnectIterator.close() | |
| node4 | 6.001m | 2025-09-24 15:22:58.166 | 227 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | call setHashPrivate() | |
| node2 | 6.003m | 2025-09-24 15:22:58.234 | 6270 | INFO | RECONNECT | <<work group teaching-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@150dba43 finish run() | |
| node2 | 6.003m | 2025-09-24 15:22:58.235 | 6271 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | TeachingSynchronizer: | finished sending tree | |
| node2 | 6.003m | 2025-09-24 15:22:58.237 | 6274 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectTeacher: | Finished synchronization in the role of the sender. | |
| node4 | 6.004m | 2025-09-24 15:22:58.327 | 237 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | call postInit() | |
| node4 | 6.004m | 2025-09-24 15:22:58.328 | 239 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | endLearnerReconnect() complete | |
| node4 | 6.004m | 2025-09-24 15:22:58.328 | 240 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushVirtualTreeView: | close() complete | |
| node4 | 6.004m | 2025-09-24 15:22:58.328 | 241 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushTask: | learner thread closed input, output, and view for the current subtree | |
| node4 | 6.004m | 2025-09-24 15:22:58.329 | 242 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@1634e51 finish run() | |
| node4 | 6.004m | 2025-09-24 15:22:58.331 | 243 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | received tree rooted at com.swirlds.virtualmap.VirtualMap with route [2] | |
| node4 | 6.004m | 2025-09-24 15:22:58.331 | 244 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | synchronization complete | |
| node4 | 6.004m | 2025-09-24 15:22:58.331 | 245 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | learner calls initialize() | |
| node4 | 6.004m | 2025-09-24 15:22:58.332 | 246 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | initializing tree | |
| node4 | 6.004m | 2025-09-24 15:22:58.332 | 247 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | initialization complete | |
| node4 | 6.004m | 2025-09-24 15:22:58.332 | 248 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | learner calls hash() | |
| node4 | 6.004m | 2025-09-24 15:22:58.333 | 249 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | hashing tree | |
| node4 | 6.004m | 2025-09-24 15:22:58.334 | 250 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | hashing complete | |
| node4 | 6.004m | 2025-09-24 15:22:58.334 | 251 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | learner calls logStatistics() | |
| node4 | 6.004m | 2025-09-24 15:22:58.339 | 252 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | Finished synchronization {"timeInSeconds":0.442,"hashTimeInSeconds":0.001,"initializationTimeInSeconds":0.0,"totalNodes":12,"leafNodes":7,"redundantLeafNodes":4,"internalNodes":5,"redundantInternalNodes":2} [com.swirlds.logging.legacy.payload.SynchronizationCompletePayload] | |
| node4 | 6.004m | 2025-09-24 15:22:58.340 | 253 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | ReconnectMapMetrics: transfersFromTeacher=12; transfersFromLearner=10; internalHashes=0; internalCleanHashes=0; internalData=0; internalCleanData=0; leafHashes=3; leafCleanHashes=3; leafData=7; leafCleanData=4 | |
| node4 | 6.004m | 2025-09-24 15:22:58.340 | 254 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | LearningSynchronizer: | learner is done synchronizing | |
| node4 | 6.004m | 2025-09-24 15:22:58.344 | 255 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectLearner: | Reconnect data usage report {"dataMegabytes":0.006053924560546875} [com.swirlds.logging.legacy.payload.ReconnectDataUsagePayload] | |
| node4 | 6.004m | 2025-09-24 15:22:58.348 | 256 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectSyncHelper: | Finished reconnect in the role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":2,"round":531,"success":false} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload] | |
| node4 | 6.004m | 2025-09-24 15:22:58.350 | 257 | INFO | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectSyncHelper: | Information for state received during reconnect: | |
| Round: 531 Timestamp: 2025-09-24T15:22:56.005029488Z Next consensus number: 11467 Legacy running event hash: 2f31a60b7f99dc1eeada6300bbfa205d8cb7de969e2ffbb16ceb308b93a289eef59119f20fcc3b064fbb12ebaf10eb1b Legacy running event mnemonic: tower-dirt-veteran-labor Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1796007678 Root hash: 7ecb7cb10a6a5d5d2cafc8c5d145dd09eaf2664b57f0825df63c0a9786b1369f1fef6ef94f64879d37b4267757441b2c (root) ConsistencyTestingToolState / sunny-normal-essence-produce 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 quiz-egg-game-puppy 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 5714296999644095256 /3 enhance-know-improve-fuel 4 StringLeaf 531 /4 inner-box-movie-dash | |||||||||
| node4 | 6.004m | 2025-09-24 15:22:58.352 | 259 | DEBUG | RECONNECT | <<reconnect: reconnect-controller>> | ReconnectStateLoader: | `loadReconnectState` : reloading state | |
| node4 | 6.004m | 2025-09-24 15:22:58.352 | 260 | INFO | STARTUP | <<reconnect: reconnect-controller>> | ConsistencyTestingToolState: | State initialized with state long 5714296999644095256. | |
| node4 | 6.005m | 2025-09-24 15:22:58.353 | 261 | INFO | STARTUP | <<reconnect: reconnect-controller>> | ConsistencyTestingToolState: | State initialized with 531 rounds handled. | |
| node4 | 6.005m | 2025-09-24 15:22:58.353 | 262 | INFO | STARTUP | <<reconnect: reconnect-controller>> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv | |
| node4 | 6.005m | 2025-09-24 15:22:58.353 | 263 | INFO | STARTUP | <<reconnect: reconnect-controller>> | TransactionHandlingHistory: | Log file found. Parsing previous history | |
| node4 | 6.005m | 2025-09-24 15:22:58.378 | 268 | INFO | STATE_TO_DISK | <<reconnect: reconnect-controller>> | DefaultSavedStateController: | Signed state from round 531 created, will eventually be written to disk, for reason: RECONNECT | |
| node4 | 6.005m | 2025-09-24 15:22:58.379 | 269 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | DefaultStatusStateMachine: | Platform spent 805.0 ms in BEHIND. Now in RECONNECT_COMPLETE | |
| node4 | 6.005m | 2025-09-24 15:22:58.380 | 271 | INFO | STARTUP | <platformForkJoinThread-4> | Shadowgraph: | Shadowgraph starting from expiration threshold 504 | |
| node4 | 6.005m | 2025-09-24 15:22:58.382 | 273 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 531 state to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/531 | |
| node4 | 6.005m | 2025-09-24 15:22:58.383 | 274 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/2 for round 531 | |
| node4 | 6.005m | 2025-09-24 15:22:58.384 | 275 | INFO | EVENT_STREAM | <<reconnect: reconnect-controller>> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 2f31a60b7f99dc1eeada6300bbfa205d8cb7de969e2ffbb16ceb308b93a289eef59119f20fcc3b064fbb12ebaf10eb1b | |
| node4 | 6.005m | 2025-09-24 15:22:58.385 | 276 | INFO | STARTUP | <platformForkJoinThread-3> | PcesFileManager: | Due to recent operations on this node, the local preconsensus event stream will have a discontinuity. The last file with the old origin round is 2025-09-24T15+17+14.441776136Z_seq0_minr1_maxr281_orgn0.pces. All future files will have an origin round of 531. | |
| node2 | 6.006m | 2025-09-24 15:22:58.417 | 6275 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectTeacher: | Finished reconnect in the role of the sender. {"receiving":false,"nodeId":2,"otherNodeId":4,"round":531,"success":false} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload] | |
| node4 | 6.007m | 2025-09-24 15:22:58.532 | 311 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/2 for round 531 | |
| node4 | 6.008m | 2025-09-24 15:22:58.536 | 312 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 531 Timestamp: 2025-09-24T15:22:56.005029488Z Next consensus number: 11467 Legacy running event hash: 2f31a60b7f99dc1eeada6300bbfa205d8cb7de969e2ffbb16ceb308b93a289eef59119f20fcc3b064fbb12ebaf10eb1b Legacy running event mnemonic: tower-dirt-veteran-labor Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1796007678 Root hash: 7ecb7cb10a6a5d5d2cafc8c5d145dd09eaf2664b57f0825df63c0a9786b1369f1fef6ef94f64879d37b4267757441b2c (root) ConsistencyTestingToolState / sunny-normal-essence-produce 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 quiz-egg-game-puppy 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 5714296999644095256 /3 enhance-know-improve-fuel 4 StringLeaf 531 /4 inner-box-movie-dash | |||||||||
| node4 | 6.008m | 2025-09-24 15:22:58.574 | 313 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+17+14.441776136Z_seq0_minr1_maxr281_orgn0.pces | |||||||||
| node4 | 6.008m | 2025-09-24 15:22:58.575 | 314 | WARN | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | No preconsensus event files meeting specified criteria found to copy. Lower bound: 504 | |
| node4 | 6.008m | 2025-09-24 15:22:58.581 | 315 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 531 to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/531 {"round":531,"freezeState":false,"reason":"RECONNECT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/531/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 6.008m | 2025-09-24 15:22:58.592 | 316 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | DefaultStatusStateMachine: | Platform spent 209.0 ms in RECONNECT_COMPLETE. Now in CHECKING | |
| node4 | 6m 1.194s | 2025-09-24 15:22:59.277 | 317 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:3 H:c9ea174bcb5a BR:529), num remaining: 3 | |
| node4 | 6m 1.196s | 2025-09-24 15:22:59.279 | 318 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:0 H:da67e681920c BR:530), num remaining: 2 | |
| node4 | 6m 1.196s | 2025-09-24 15:22:59.279 | 319 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:1 H:9d36cf5b9d30 BR:530), num remaining: 1 | |
| node4 | 6m 1.199s | 2025-09-24 15:22:59.282 | 320 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:2 H:f5257fb85c73 BR:530), num remaining: 0 | |
| node4 | 6m 1.366s | 2025-09-24 15:22:59.449 | 343 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ] | |
| node4 | 6m 1.369s | 2025-09-24 15:22:59.452 | 344 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node3 | 6m 3.509s | 2025-09-24 15:23:01.592 | 6409 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 538 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 6m 3.568s | 2025-09-24 15:23:01.651 | 6334 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 538 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 6m 3.653s | 2025-09-24 15:23:01.736 | 6443 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 538 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 6m 3.670s | 2025-09-24 15:23:01.753 | 378 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 538 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 6m 3.687s | 2025-09-24 15:23:01.770 | 6485 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 538 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 6m 3.821s | 2025-09-24 15:23:01.904 | 6498 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 538 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/538 | |
| node1 | 6m 3.821s | 2025-09-24 15:23:01.904 | 6499 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/37 for round 538 | |
| node1 | 6m 3.907s | 2025-09-24 15:23:01.990 | 6534 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/37 for round 538 | |
| node1 | 6m 3.909s | 2025-09-24 15:23:01.992 | 6535 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 538 Timestamp: 2025-09-24T15:23:00.234625Z Next consensus number: 11575 Legacy running event hash: 8988a965f75b9e7c76195f770e4203afeadf05e9e1b86a313846d4cde90b57b16da8fc4150791e769e6491b80f5cd618 Legacy running event mnemonic: someone-square-believe-correct Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1483684824 Root hash: 19be50237d0303b0ee22f79dbfdfd3debd6e8958596eb73f785675d05a96b984fc420743f3b3dfbc71e465bf01a5ebc6 (root) ConsistencyTestingToolState / daughter-afford-parrot-invite 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 wide-cave-sell-magnet 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf -602722899277309548 /3 huge-still-long-city 4 StringLeaf 538 /4 hundred-elevator-aunt-vast | |||||||||
| node1 | 6m 3.915s | 2025-09-24 15:23:01.998 | 6536 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+22+37.197443980Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+17+14.326215315Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 6m 3.916s | 2025-09-24 15:23:01.999 | 6537 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 511 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+22+37.197443980Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 6m 3.916s | 2025-09-24 15:23:01.999 | 6538 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 6m 3.917s | 2025-09-24 15:23:02.000 | 6539 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 6m 3.917s | 2025-09-24 15:23:02.000 | 6540 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 538 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/538 {"round":538,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/538/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 6m 3.918s | 2025-09-24 15:23:02.001 | 6541 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/66 | |
| node4 | 6m 3.939s | 2025-09-24 15:23:02.022 | 390 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 538 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/538 | |
| node4 | 6m 3.940s | 2025-09-24 15:23:02.023 | 391 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 538 | |
| node3 | 6m 4.008s | 2025-09-24 15:23:02.091 | 6412 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 538 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/538 | |
| node3 | 6m 4.009s | 2025-09-24 15:23:02.092 | 6413 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/37 for round 538 | |
| node0 | 6m 4.034s | 2025-09-24 15:23:02.117 | 6456 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 538 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/538 | |
| node0 | 6m 4.035s | 2025-09-24 15:23:02.118 | 6457 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/37 for round 538 | |
| node4 | 6m 4.065s | 2025-09-24 15:23:02.148 | 429 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 538 | |
| node4 | 6m 4.067s | 2025-09-24 15:23:02.150 | 430 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 538 Timestamp: 2025-09-24T15:23:00.234625Z Next consensus number: 11575 Legacy running event hash: 8988a965f75b9e7c76195f770e4203afeadf05e9e1b86a313846d4cde90b57b16da8fc4150791e769e6491b80f5cd618 Legacy running event mnemonic: someone-square-believe-correct Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1483684824 Root hash: 19be50237d0303b0ee22f79dbfdfd3debd6e8958596eb73f785675d05a96b984fc420743f3b3dfbc71e465bf01a5ebc6 (root) ConsistencyTestingToolState / daughter-afford-parrot-invite 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 wide-cave-sell-magnet 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf -602722899277309548 /3 huge-still-long-city 4 StringLeaf 538 /4 hundred-elevator-aunt-vast | |||||||||
| node4 | 6m 4.079s | 2025-09-24 15:23:02.162 | 431 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+17+14.441776136Z_seq0_minr1_maxr281_orgn0.pces Last file: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+22+58.713938941Z_seq1_minr504_maxr1004_orgn531.pces | |||||||||
| node4 | 6m 4.081s | 2025-09-24 15:23:02.164 | 432 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 511 File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+22+58.713938941Z_seq1_minr504_maxr1004_orgn531.pces | |||||||||
| node4 | 6m 4.081s | 2025-09-24 15:23:02.164 | 433 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 6m 4.084s | 2025-09-24 15:23:02.167 | 434 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 6m 4.085s | 2025-09-24 15:23:02.168 | 435 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 538 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/538 {"round":538,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/538/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 6m 4.088s | 2025-09-24 15:23:02.171 | 436 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1 | |
| node3 | 6m 4.091s | 2025-09-24 15:23:02.174 | 6444 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/37 for round 538 | |
| node3 | 6m 4.094s | 2025-09-24 15:23:02.177 | 6445 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 538 Timestamp: 2025-09-24T15:23:00.234625Z Next consensus number: 11575 Legacy running event hash: 8988a965f75b9e7c76195f770e4203afeadf05e9e1b86a313846d4cde90b57b16da8fc4150791e769e6491b80f5cd618 Legacy running event mnemonic: someone-square-believe-correct Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1483684824 Root hash: 19be50237d0303b0ee22f79dbfdfd3debd6e8958596eb73f785675d05a96b984fc420743f3b3dfbc71e465bf01a5ebc6 (root) ConsistencyTestingToolState / daughter-afford-parrot-invite 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 wide-cave-sell-magnet 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf -602722899277309548 /3 huge-still-long-city 4 StringLeaf 538 /4 hundred-elevator-aunt-vast | |||||||||
| node3 | 6m 4.102s | 2025-09-24 15:23:02.185 | 6446 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+17+14.965433115Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+22+37.383265771Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 6m 4.103s | 2025-09-24 15:23:02.186 | 6447 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 511 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+22+37.383265771Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 6m 4.103s | 2025-09-24 15:23:02.186 | 6448 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 6m 4.104s | 2025-09-24 15:23:02.187 | 6449 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 6m 4.105s | 2025-09-24 15:23:02.188 | 6450 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 538 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/538 {"round":538,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/538/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 6m 4.106s | 2025-09-24 15:23:02.189 | 6451 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/66 | |
| node0 | 6m 4.135s | 2025-09-24 15:23:02.218 | 6492 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/37 for round 538 | |
| node0 | 6m 4.138s | 2025-09-24 15:23:02.221 | 6493 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 538 Timestamp: 2025-09-24T15:23:00.234625Z Next consensus number: 11575 Legacy running event hash: 8988a965f75b9e7c76195f770e4203afeadf05e9e1b86a313846d4cde90b57b16da8fc4150791e769e6491b80f5cd618 Legacy running event mnemonic: someone-square-believe-correct Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1483684824 Root hash: 19be50237d0303b0ee22f79dbfdfd3debd6e8958596eb73f785675d05a96b984fc420743f3b3dfbc71e465bf01a5ebc6 (root) ConsistencyTestingToolState / daughter-afford-parrot-invite 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 wide-cave-sell-magnet 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf -602722899277309548 /3 huge-still-long-city 4 StringLeaf 538 /4 hundred-elevator-aunt-vast | |||||||||
| node2 | 6m 4.142s | 2025-09-24 15:23:02.225 | 6347 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 538 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/538 | |
| node2 | 6m 4.142s | 2025-09-24 15:23:02.225 | 6348 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/38 for round 538 | |
| node0 | 6m 4.145s | 2025-09-24 15:23:02.228 | 6494 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+17+14.781569826Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+22+37.263047686Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 6m 4.146s | 2025-09-24 15:23:02.229 | 6495 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 511 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+22+37.263047686Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 6m 4.146s | 2025-09-24 15:23:02.229 | 6496 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 6m 4.147s | 2025-09-24 15:23:02.230 | 6497 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 6m 4.147s | 2025-09-24 15:23:02.230 | 6498 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 538 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/538 {"round":538,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/538/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 6m 4.149s | 2025-09-24 15:23:02.232 | 6499 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/66 | |
| node2 | 6m 4.232s | 2025-09-24 15:23:02.315 | 6379 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/38 for round 538 | |
| node2 | 6m 4.234s | 2025-09-24 15:23:02.317 | 6380 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 538 Timestamp: 2025-09-24T15:23:00.234625Z Next consensus number: 11575 Legacy running event hash: 8988a965f75b9e7c76195f770e4203afeadf05e9e1b86a313846d4cde90b57b16da8fc4150791e769e6491b80f5cd618 Legacy running event mnemonic: someone-square-believe-correct Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1483684824 Root hash: 19be50237d0303b0ee22f79dbfdfd3debd6e8958596eb73f785675d05a96b984fc420743f3b3dfbc71e465bf01a5ebc6 (root) ConsistencyTestingToolState / daughter-afford-parrot-invite 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 wide-cave-sell-magnet 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf -602722899277309548 /3 huge-still-long-city 4 StringLeaf 538 /4 hundred-elevator-aunt-vast | |||||||||
| node2 | 6m 4.242s | 2025-09-24 15:23:02.325 | 6381 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+17+14.652643610Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+22+37.376417967Z_seq1_minr473_maxr5473_orgn0.pces | |||||||||
| node2 | 6m 4.242s | 2025-09-24 15:23:02.325 | 6382 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 511 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+22+37.376417967Z_seq1_minr473_maxr5473_orgn0.pces | |||||||||
| node2 | 6m 4.242s | 2025-09-24 15:23:02.325 | 6383 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 6m 4.243s | 2025-09-24 15:23:02.326 | 6384 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 6m 4.244s | 2025-09-24 15:23:02.327 | 6385 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 538 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/538 {"round":538,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/538/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 6m 4.246s | 2025-09-24 15:23:02.329 | 6386 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/66 | |
| node4 | 6m 5.097s | 2025-09-24 15:23:03.180 | 438 | INFO | PLATFORM_STATUS | <platformForkJoinThread-7> | DefaultStatusStateMachine: | Platform spent 4.6 s in CHECKING. Now in ACTIVE | |
| node1 | 7m 3.927s | 2025-09-24 15:24:02.010 | 7616 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 634 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 7m 3.988s | 2025-09-24 15:24:02.071 | 7445 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 634 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 7m 4.088s | 2025-09-24 15:24:02.171 | 1474 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 634 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 7m 4.103s | 2025-09-24 15:24:02.186 | 7568 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 634 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 7m 4.121s | 2025-09-24 15:24:02.204 | 7518 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 634 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 7m 4.315s | 2025-09-24 15:24:02.398 | 7619 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 634 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/634 | |
| node1 | 7m 4.316s | 2025-09-24 15:24:02.399 | 7620 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 634 | |
| node0 | 7m 4.354s | 2025-09-24 15:24:02.437 | 7571 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 634 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/634 | |
| node0 | 7m 4.355s | 2025-09-24 15:24:02.438 | 7572 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 634 | |
| node2 | 7m 4.386s | 2025-09-24 15:24:02.469 | 7448 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 634 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/634 | |
| node2 | 7m 4.387s | 2025-09-24 15:24:02.470 | 7449 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/44 for round 634 | |
| node1 | 7m 4.406s | 2025-09-24 15:24:02.489 | 7651 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 634 | |
| node1 | 7m 4.409s | 2025-09-24 15:24:02.492 | 7652 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 634 Timestamp: 2025-09-24T15:24:00.593935867Z Next consensus number: 14116 Legacy running event hash: ca56fc9af47abf8f2ede6de5698c7ef91611403e136293df341cca664e72277e9d3fbd2745dcbeb6f54d6cff7591ac23 Legacy running event mnemonic: relief-holiday-ignore-athlete Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1870210161 Root hash: 881e4a79b5991c4fa5f69f583c4316fffd2c091cb2bbab154470b05f8975e8a2470740d4095938a039d10ca5fbb4f8d7 (root) ConsistencyTestingToolState / wisdom-execute-erupt-measure 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 frown-entire-cute-empty 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 5179636551878878282 /3 pear-link-siege-call 4 StringLeaf 634 /4 pigeon-trust-amused-tent | |||||||||
| node1 | 7m 4.415s | 2025-09-24 15:24:02.498 | 7653 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+22+37.197443980Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+17+14.326215315Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 7m 4.418s | 2025-09-24 15:24:02.501 | 7654 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 607 File: data/saved/preconsensus-events/1/2025/09/24/2025-09-24T15+22+37.197443980Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 7m 4.418s | 2025-09-24 15:24:02.501 | 7655 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 7m 4.421s | 2025-09-24 15:24:02.504 | 7656 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 7m 4.421s | 2025-09-24 15:24:02.504 | 7657 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 634 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/634 {"round":634,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/634/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 7m 4.422s | 2025-09-24 15:24:02.505 | 7658 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/163 | |
| node0 | 7m 4.456s | 2025-09-24 15:24:02.539 | 7603 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 634 | |
| node0 | 7m 4.460s | 2025-09-24 15:24:02.543 | 7604 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 634 Timestamp: 2025-09-24T15:24:00.593935867Z Next consensus number: 14116 Legacy running event hash: ca56fc9af47abf8f2ede6de5698c7ef91611403e136293df341cca664e72277e9d3fbd2745dcbeb6f54d6cff7591ac23 Legacy running event mnemonic: relief-holiday-ignore-athlete Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1870210161 Root hash: 881e4a79b5991c4fa5f69f583c4316fffd2c091cb2bbab154470b05f8975e8a2470740d4095938a039d10ca5fbb4f8d7 (root) ConsistencyTestingToolState / wisdom-execute-erupt-measure 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 frown-entire-cute-empty 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 5179636551878878282 /3 pear-link-siege-call 4 StringLeaf 634 /4 pigeon-trust-amused-tent | |||||||||
| node0 | 7m 4.468s | 2025-09-24 15:24:02.551 | 7613 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+17+14.781569826Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+22+37.263047686Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 7m 4.470s | 2025-09-24 15:24:02.553 | 7614 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 607 File: data/saved/preconsensus-events/0/2025/09/24/2025-09-24T15+22+37.263047686Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 7m 4.470s | 2025-09-24 15:24:02.553 | 7615 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 7m 4.473s | 2025-09-24 15:24:02.556 | 7616 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 7m 4.473s | 2025-09-24 15:24:02.556 | 7617 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 634 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/634 {"round":634,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/634/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 7m 4.475s | 2025-09-24 15:24:02.558 | 7618 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/163 | |
| node2 | 7m 4.477s | 2025-09-24 15:24:02.560 | 7480 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/44 for round 634 | |
| node2 | 7m 4.480s | 2025-09-24 15:24:02.563 | 7481 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 634 Timestamp: 2025-09-24T15:24:00.593935867Z Next consensus number: 14116 Legacy running event hash: ca56fc9af47abf8f2ede6de5698c7ef91611403e136293df341cca664e72277e9d3fbd2745dcbeb6f54d6cff7591ac23 Legacy running event mnemonic: relief-holiday-ignore-athlete Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1870210161 Root hash: 881e4a79b5991c4fa5f69f583c4316fffd2c091cb2bbab154470b05f8975e8a2470740d4095938a039d10ca5fbb4f8d7 (root) ConsistencyTestingToolState / wisdom-execute-erupt-measure 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 frown-entire-cute-empty 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 5179636551878878282 /3 pear-link-siege-call 4 StringLeaf 634 /4 pigeon-trust-amused-tent | |||||||||
| node2 | 7m 4.487s | 2025-09-24 15:24:02.570 | 7482 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+17+14.652643610Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+22+37.376417967Z_seq1_minr473_maxr5473_orgn0.pces | |||||||||
| node2 | 7m 4.487s | 2025-09-24 15:24:02.570 | 7483 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 607 File: data/saved/preconsensus-events/2/2025/09/24/2025-09-24T15+22+37.376417967Z_seq1_minr473_maxr5473_orgn0.pces | |||||||||
| node2 | 7m 4.488s | 2025-09-24 15:24:02.571 | 7484 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 7m 4.490s | 2025-09-24 15:24:02.573 | 7485 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 7m 4.491s | 2025-09-24 15:24:02.574 | 7486 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 634 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/634 {"round":634,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/634/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 7m 4.492s | 2025-09-24 15:24:02.575 | 7487 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/163 | |
| node3 | 7m 4.543s | 2025-09-24 15:24:02.626 | 7521 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 634 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/634 | |
| node3 | 7m 4.544s | 2025-09-24 15:24:02.627 | 7522 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 634 | |
| node4 | 7m 4.581s | 2025-09-24 15:24:02.664 | 1477 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 634 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/634 | |
| node4 | 7m 4.581s | 2025-09-24 15:24:02.664 | 1478 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/14 for round 634 | |
| node3 | 7m 4.635s | 2025-09-24 15:24:02.718 | 7557 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 634 | |
| node3 | 7m 4.637s | 2025-09-24 15:24:02.720 | 7558 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 634 Timestamp: 2025-09-24T15:24:00.593935867Z Next consensus number: 14116 Legacy running event hash: ca56fc9af47abf8f2ede6de5698c7ef91611403e136293df341cca664e72277e9d3fbd2745dcbeb6f54d6cff7591ac23 Legacy running event mnemonic: relief-holiday-ignore-athlete Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1870210161 Root hash: 881e4a79b5991c4fa5f69f583c4316fffd2c091cb2bbab154470b05f8975e8a2470740d4095938a039d10ca5fbb4f8d7 (root) ConsistencyTestingToolState / wisdom-execute-erupt-measure 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 frown-entire-cute-empty 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 5179636551878878282 /3 pear-link-siege-call 4 StringLeaf 634 /4 pigeon-trust-amused-tent | |||||||||
| node3 | 7m 4.644s | 2025-09-24 15:24:02.727 | 7559 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+17+14.965433115Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+22+37.383265771Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 7m 4.646s | 2025-09-24 15:24:02.729 | 7560 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 607 File: data/saved/preconsensus-events/3/2025/09/24/2025-09-24T15+22+37.383265771Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 7m 4.646s | 2025-09-24 15:24:02.729 | 7561 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 7m 4.649s | 2025-09-24 15:24:02.732 | 7562 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 7m 4.649s | 2025-09-24 15:24:02.732 | 7563 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 634 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/634 {"round":634,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/634/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 7m 4.651s | 2025-09-24 15:24:02.734 | 7564 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/163 | |
| node4 | 7m 4.693s | 2025-09-24 15:24:02.776 | 1512 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/14 for round 634 | |
| node4 | 7m 4.695s | 2025-09-24 15:24:02.778 | 1513 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 634 Timestamp: 2025-09-24T15:24:00.593935867Z Next consensus number: 14116 Legacy running event hash: ca56fc9af47abf8f2ede6de5698c7ef91611403e136293df341cca664e72277e9d3fbd2745dcbeb6f54d6cff7591ac23 Legacy running event mnemonic: relief-holiday-ignore-athlete Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1870210161 Root hash: 881e4a79b5991c4fa5f69f583c4316fffd2c091cb2bbab154470b05f8975e8a2470740d4095938a039d10ca5fbb4f8d7 (root) ConsistencyTestingToolState / wisdom-execute-erupt-measure 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 frown-entire-cute-empty 1 SingletonNode RosterService.ROSTER_STATE /1 hair-floor-leader-moral 2 VirtualMap RosterService.ROSTERS /2 twin-west-cage-surprise 3 StringLeaf 5179636551878878282 /3 pear-link-siege-call 4 StringLeaf 634 /4 pigeon-trust-amused-tent | |||||||||
| node4 | 7m 4.706s | 2025-09-24 15:24:02.789 | 1514 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+17+14.441776136Z_seq0_minr1_maxr281_orgn0.pces Last file: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+22+58.713938941Z_seq1_minr504_maxr1004_orgn531.pces | |||||||||
| node4 | 7m 4.707s | 2025-09-24 15:24:02.790 | 1515 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 607 File: data/saved/preconsensus-events/4/2025/09/24/2025-09-24T15+22+58.713938941Z_seq1_minr504_maxr1004_orgn531.pces | |||||||||
| node4 | 7m 4.707s | 2025-09-24 15:24:02.790 | 1516 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 7m 4.709s | 2025-09-24 15:24:02.792 | 1517 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 7m 4.710s | 2025-09-24 15:24:02.793 | 1518 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 634 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/634 {"round":634,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/634/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 7m 4.712s | 2025-09-24 15:24:02.795 | 1519 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/66 | |
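Note: each `Finished writing state ... to disk` row above carries a machine-readable `StateSavedToDiskPayload` JSON blob alongside the human-readable message. The sketch below is a hypothetical stand-alone helper, not part of the platform, that pulls the round, freeze flag, reason, and directory out of such a line with a regular expression; the JSON field names are copied from the payloads visible in this log, while the class name, sample line, and overall approach are assumptions for illustration only.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Hypothetical helper (not part of the Swirlds platform) that extracts the
 * StateSavedToDiskPayload fields embedded in a "Finished writing state" log line.
 */
public final class StateSavedPayloadParser {

    // Field names match the JSON visible in the log rows above.
    private static final Pattern PAYLOAD = Pattern.compile(
            "\\{\"round\":(\\d+),\"freezeState\":(true|false),\"reason\":\"([^\"]+)\",\"directory\":\"([^\"]+)\"\\}");

    public static void main(String[] args) {
        // Shortened sample of the node2 row for round 634; the real directory path is longer.
        String line = "Finished writing state for round 634 to disk. Reason: PERIODIC_SNAPSHOT "
                + "{\"round\":634,\"freezeState\":false,\"reason\":\"PERIODIC_SNAPSHOT\","
                + "\"directory\":\"file:///opt/hgcapp/.../634/\"} "
                + "[com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]";

        Matcher m = PAYLOAD.matcher(line);
        if (m.find()) {
            long round = Long.parseLong(m.group(1));                 // 634
            boolean freezeState = Boolean.parseBoolean(m.group(2));  // false
            String reason = m.group(3);                              // PERIODIC_SNAPSHOT
            String directory = m.group(4);                           // file:///opt/hgcapp/.../634/
            System.out.printf("round=%d freeze=%b reason=%s dir=%s%n",
                    round, freezeState, reason, directory);
        }
    }
}
```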
| node1 | 7m 55.881s | 2025-09-24 15:24:53.964 | 8565 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith3 1 to 3>> | NetworkUtils: | Connection broken: 1 -> 3 | |
| java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.FilterInputStream.read(FilterInputStream.java:71) at org.hiero.base.io.streams.AugmentedDataInputStream.read(AugmentedDataInputStream.java:57) at com.swirlds.platform.network.communication.states.SentKeepalive.transition(SentKeepalive.java:44) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) | |||||||||
| node0 | 7m 55.882s | 2025-09-24 15:24:53.965 | 8481 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith3 0 to 3>> | NetworkUtils: | Connection broken: 0 -> 3 | |
| java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T15:24:53.964725251Z at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:258) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T15:24:53.964725251Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:148) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:304) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:180) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:113) at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:254) ... 7 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295) at java.base/java.io.DataInputStream.readBoolean(DataInputStream.java:255) at org.hiero.base.io.streams.AugmentedDataInputStream.readBoolean(AugmentedDataInputStream.java:137) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readMyTipsTheyHave$7(SyncUtils.java:163) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:146) ... 11 more | |||||||||
| node2 | 7m 55.882s | 2025-09-24 15:24:53.965 | 8368 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith3 2 to 3>> | NetworkUtils: | Connection broken: 2 -> 3 | |
| java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.FilterInputStream.read(FilterInputStream.java:71) at org.hiero.base.io.streams.AugmentedDataInputStream.read(AugmentedDataInputStream.java:57) at com.swirlds.platform.network.communication.states.SentInitiate.transition(SentInitiate.java:73) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) | |||||||||
| node0 | 7m 56.144s | 2025-09-24 15:24:54.227 | 8482 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 0 to 4>> | NetworkUtils: | Connection broken: 0 -> 4 | |
| java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T15:24:54.227068579Z at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:258) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T15:24:54.227068579Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:148) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:304) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.sendAndReceiveEvents(ShadowgraphSynchronizer.java:241) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:201) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:113) at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:254) ... 7 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295) at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275) at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readEventsINeed$9(SyncUtils.java:278) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:146) ... 12 more | |||||||||
| node0 | 7m 56.211s | 2025-09-24 15:24:54.294 | 8483 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith2 0 to 2>> | NetworkUtils: | Connection broken: 0 -> 2 | |
| java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T15:24:54.294373426Z at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:258) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T15:24:54.294373426Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:148) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:304) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:148) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:113) at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:254) ... 7 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readLong(DataInputStream.java:407) at org.hiero.base.io.streams.AugmentedDataInputStream.readLong(AugmentedDataInputStream.java:186) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.deserializeEventWindow(SyncUtils.java:640) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readTheirTipsAndEventWindow$3(SyncUtils.java:104) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:146) ... 11 more | |||||||||
| node0 | 7m 56.216s | 2025-09-24 15:24:54.299 | 8484 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith1 0 to 1>> | NetworkUtils: | Connection broken: 0 -> 1 | |
| java.io.IOException: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T15:24:54.298996655Z at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:258) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Caused by: com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-09-24T15:24:54.298996655Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:148) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.readWriteParallel(ShadowgraphSynchronizer.java:304) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.sendAndReceiveEvents(ShadowgraphSynchronizer.java:241) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.reserveSynchronize(ShadowgraphSynchronizer.java:201) at com.swirlds.platform.gossip.shadowgraph.ShadowgraphSynchronizer.synchronize(ShadowgraphSynchronizer.java:113) at com.swirlds.platform.gossip.sync.protocol.SyncPeerProtocol.runProtocol(SyncPeerProtocol.java:254) ... 7 more Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 12 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$sendEventsTheyNeed$8(SyncUtils.java:234) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:325) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:312) at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:295) at java.base/java.io.DataInputStream.readByte(DataInputStream.java:275) at org.hiero.base.io.streams.AugmentedDataInputStream.readByte(AugmentedDataInputStream.java:144) at com.swirlds.platform.gossip.shadowgraph.SyncUtils.lambda$readEventsINeed$9(SyncUtils.java:278) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:146) ... 12 more | |||||||||
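Note: the run of `SOCKET_EXCEPTIONS` warnings above all share the same shape: a `Connection broken: X -> Y` message from `NetworkUtils` with a `Connection reset` (or `Connection or outbound has closed`) cause somewhere in the sync-protocol stack. The minimal sketch below is a hypothetical stand-alone tool, not part of the platform, that tallies those messages per target node from log text on stdin; applied to this excerpt it would show node 3 as the most frequent target, consistent with nodes 0, 1, and 2 all losing their connection to it at roughly 7m 56s, though the log itself does not say why the connections dropped.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Hypothetical log-scanning sketch (not part of the Swirlds platform): counts
 * "Connection broken: X -> Y" warnings per target node so a peer that dropped
 * off the network stands out.
 */
public final class BrokenConnectionTally {

    // Message format copied from the NetworkUtils warnings in this log.
    private static final Pattern BROKEN = Pattern.compile("Connection broken: (\\d+) -> (\\d+)");

    public static void main(String[] args) throws IOException {
        Map<String, Integer> byTarget = new TreeMap<>();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(System.in))) {
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = BROKEN.matcher(line);
                if (m.find()) {
                    byTarget.merge(m.group(2), 1, Integer::sum); // group(2) = target node id
                }
            }
        }
        // For the excerpt above: node 1 -> 1, node 2 -> 1, node 3 -> 3, node 4 -> 1.
        byTarget.forEach((node, count) ->
                System.out.println("node " + node + ": " + count + " broken connection(s)"));
    }
}
```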