| node3 | 0.000ns | 2025-11-26 03:32:23.205 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
//////////////////////
// Node is Starting //
//////////////////////
| node3 | 83.000ms | 2025-11-26 03:32:23.288 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node3 | 99.000ms | 2025-11-26 03:32:23.304 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node3 | 203.000ms | 2025-11-26 03:32:23.408 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [3] are set to run locally | |
| node3 | 231.000ms | 2025-11-26 03:32:23.436 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node1 | 317.000ms | 2025-11-26 03:32:23.522 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
//////////////////////
// Node is Starting //
//////////////////////
| node1 | 419.000ms | 2025-11-26 03:32:23.624 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node1 | 438.000ms | 2025-11-26 03:32:23.643 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node0 | 549.000ms | 2025-11-26 03:32:23.754 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
//////////////////////
// Node is Starting //
//////////////////////
| node1 | 561.000ms | 2025-11-26 03:32:23.766 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [1] are set to run locally | |
| node1 | 593.000ms | 2025-11-26 03:32:23.798 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node0 | 633.000ms | 2025-11-26 03:32:23.838 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node0 | 648.000ms | 2025-11-26 03:32:23.853 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node0 | 750.000ms | 2025-11-26 03:32:23.955 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [0] are set to run locally | |
| node0 | 777.000ms | 2025-11-26 03:32:23.982 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node2 | 1.202s | 2025-11-26 03:32:24.407 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
//////////////////////
// Node is Starting //
//////////////////////
| node2 | 1.300s | 2025-11-26 03:32:24.505 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node2 | 1.318s | 2025-11-26 03:32:24.523 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node2 | 1.438s | 2025-11-26 03:32:24.643 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [2] are set to run locally | |
| node3 | 1.455s | 2025-11-26 03:32:24.660 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1223ms | |
| node3 | 1.463s | 2025-11-26 03:32:24.668 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node3 | 1.465s | 2025-11-26 03:32:24.670 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node2 | 1.468s | 2025-11-26 03:32:24.673 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node3 | 1.520s | 2025-11-26 03:32:24.725 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node3 | 1.584s | 2025-11-26 03:32:24.789 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node3 | 1.585s | 2025-11-26 03:32:24.790 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node1 | 1.898s | 2025-11-26 03:32:25.103 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1304ms | |
| node1 | 1.907s | 2025-11-26 03:32:25.112 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node1 | 1.910s | 2025-11-26 03:32:25.115 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node1 | 1.945s | 2025-11-26 03:32:25.150 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node1 | 2.008s | 2025-11-26 03:32:25.213 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node1 | 2.008s | 2025-11-26 03:32:25.213 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node0 | 2.023s | 2025-11-26 03:32:25.228 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1245ms | |
| node0 | 2.032s | 2025-11-26 03:32:25.237 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node0 | 2.035s | 2025-11-26 03:32:25.240 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node0 | 2.070s | 2025-11-26 03:32:25.275 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node0 | 2.130s | 2025-11-26 03:32:25.335 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node0 | 2.131s | 2025-11-26 03:32:25.336 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node4 | 2.344s | 2025-11-26 03:32:25.549 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
//////////////////////
// Node is Starting //
//////////////////////
| node4 | 2.441s | 2025-11-26 03:32:25.646 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node4 | 2.460s | 2025-11-26 03:32:25.665 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 2.573s | 2025-11-26 03:32:25.778 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [4] are set to run locally | |
| node4 | 2.602s | 2025-11-26 03:32:25.807 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node2 | 3.010s | 2025-11-26 03:32:26.215 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1542ms | |
| node2 | 3.019s | 2025-11-26 03:32:26.224 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node2 | 3.022s | 2025-11-26 03:32:26.227 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node2 | 3.075s | 2025-11-26 03:32:26.280 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node2 | 3.143s | 2025-11-26 03:32:26.348 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node2 | 3.144s | 2025-11-26 03:32:26.349 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node3 | 3.600s | 2025-11-26 03:32:26.805 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node3 | 3.683s | 2025-11-26 03:32:26.888 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 3.685s | 2025-11-26 03:32:26.890 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node3 | 3.718s | 2025-11-26 03:32:26.923 | 21 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 4.013s | 2025-11-26 03:32:27.218 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1409ms | |
| node4 | 4.026s | 2025-11-26 03:32:27.231 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node4 | 4.030s | 2025-11-26 03:32:27.235 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node1 | 4.038s | 2025-11-26 03:32:27.243 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node4 | 4.074s | 2025-11-26 03:32:27.279 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node1 | 4.122s | 2025-11-26 03:32:27.327 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 4.124s | 2025-11-26 03:32:27.329 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node4 | 4.136s | 2025-11-26 03:32:27.341 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node4 | 4.137s | 2025-11-26 03:32:27.342 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node1 | 4.157s | 2025-11-26 03:32:27.362 | 21 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node0 | 4.207s | 2025-11-26 03:32:27.412 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node0 | 4.293s | 2025-11-26 03:32:27.498 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 4.295s | 2025-11-26 03:32:27.500 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node0 | 4.328s | 2025-11-26 03:32:27.533 | 21 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node3 | 4.449s | 2025-11-26 03:32:27.654 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 4.451s | 2025-11-26 03:32:27.656 | 27 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node3 | 4.456s | 2025-11-26 03:32:27.661 | 28 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node3 | 4.465s | 2025-11-26 03:32:27.670 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 4.467s | 2025-11-26 03:32:27.672 | 30 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 4.943s | 2025-11-26 03:32:28.148 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 4.945s | 2025-11-26 03:32:28.150 | 27 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node1 | 4.950s | 2025-11-26 03:32:28.155 | 28 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node1 | 4.961s | 2025-11-26 03:32:28.166 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 4.963s | 2025-11-26 03:32:28.168 | 30 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 5.050s | 2025-11-26 03:32:28.255 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 5.051s | 2025-11-26 03:32:28.256 | 27 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node0 | 5.056s | 2025-11-26 03:32:28.261 | 28 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node0 | 5.064s | 2025-11-26 03:32:28.269 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 5.067s | 2025-11-26 03:32:28.272 | 30 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 5.282s | 2025-11-26 03:32:28.487 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node2 | 5.403s | 2025-11-26 03:32:28.608 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 5.405s | 2025-11-26 03:32:28.610 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node2 | 5.443s | 2025-11-26 03:32:28.648 | 21 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node3 | 5.599s | 2025-11-26 03:32:28.804 | 31 | INFO | STARTUP | <main> | OSHealthChecker: | ||
PASSED - Clock Source Speed Check Report[callsPerSec=26514292]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=220570, randomLong=5607760513696664135, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=13760, randomLong=-3574208306013364394, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1385160, data=35, exception=null]
OS Health Check Report - Complete (took 1023 ms)
| node3 | 5.628s | 2025-11-26 03:32:28.833 | 32 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node3 | 5.636s | 2025-11-26 03:32:28.841 | 33 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
LAST_ISS_ROUND null
| node3 | 5.637s | 2025-11-26 03:32:28.842 | 34 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node3 | 5.713s | 2025-11-26 03:32:28.918 | 35 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IhANzw==", "port": 30124 }, { "ipAddressV4": "CoAAOQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "I8q0KA==", "port": 30125 }, { "ipAddressV4": "CoAAPQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "IilpIA==", "port": 30126 }, { "ipAddressV4": "CoAAOg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "aMbs8g==", "port": 30127 }, { "ipAddressV4": "CoAAOw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IkVH8g==", "port": 30128 }, { "ipAddressV4": "CoAAPA==", "port": 30128 }] }] } | |||||||||
| node3 | 5.734s | 2025-11-26 03:32:28.939 | 36 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/3/ConsistencyTestLog.csv | |
| node3 | 5.735s | 2025-11-26 03:32:28.940 | 37 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node3 | 5.746s | 2025-11-26 03:32:28.951 | 38 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 25db5a13230118ba8b6bc8778dda9bc97d1bf00988e6dbc50ef8b0e21606c041ce5f51ca0df8a8b465c028d6cf9f9bf0
(root) VirtualMap state / rocket-draw-gasp-blanket
| node3 | 5.749s | 2025-11-26 03:32:28.954 | 40 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Starting the ReconnectController | |
| node3 | 5.933s | 2025-11-26 03:32:29.138 | 41 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node3 | 5.937s | 2025-11-26 03:32:29.142 | 42 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node3 | 5.942s | 2025-11-26 03:32:29.147 | 43 | INFO | STARTUP | <<start-node-3>> | ConsistencyTestingToolMain: | init called in Main for node 3. | |
| node3 | 5.942s | 2025-11-26 03:32:29.147 | 44 | INFO | STARTUP | <<start-node-3>> | SwirldsPlatform: | Starting platform 3 | |
| node3 | 5.943s | 2025-11-26 03:32:29.148 | 45 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node3 | 5.947s | 2025-11-26 03:32:29.152 | 46 | INFO | STARTUP | <<start-node-3>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node3 | 5.948s | 2025-11-26 03:32:29.153 | 47 | INFO | STARTUP | <<start-node-3>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node3 | 5.948s | 2025-11-26 03:32:29.153 | 48 | INFO | STARTUP | <<start-node-3>> | InputWireChecks: | All input wires have been bound. | |
| node3 | 5.950s | 2025-11-26 03:32:29.155 | 49 | WARN | STARTUP | <<start-node-3>> | PcesFileTracker: | No preconsensus event files available | |
| node3 | 5.951s | 2025-11-26 03:32:29.156 | 50 | INFO | STARTUP | <<start-node-3>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node3 | 5.953s | 2025-11-26 03:32:29.158 | 51 | INFO | STARTUP | <<start-node-3>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node3 | 5.954s | 2025-11-26 03:32:29.159 | 52 | INFO | STARTUP | <<app: appMain 3>> | ConsistencyTestingToolMain: | run called in Main. | |
| node3 | 5.955s | 2025-11-26 03:32:29.160 | 53 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | StatusStateMachine: | Platform spent 154.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node3 | 5.960s | 2025-11-26 03:32:29.165 | 54 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | StatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
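The roster history each node logs above encodes every gossip endpoint's `ipAddressV4` as base64-wrapped raw bytes rather than dotted-quad text. A minimal standalone sketch (class name hypothetical, not part of the platform) that decodes them with standard JDK classes:

```java
import java.net.InetAddress;
import java.util.Base64;

/** Hypothetical helper, not part of the platform: decodes the base64 "ipAddressV4" values logged above. */
public class RosterEndpointDecoder {
    public static void main(String[] args) throws Exception {
        // The two endpoints of the first roster entry (port 30124) as logged above.
        for (String encoded : new String[] {"IhANzw==", "CoAAOQ=="}) {
            byte[] raw = Base64.getDecoder().decode(encoded);           // 4 raw IPv4 bytes
            String ip = InetAddress.getByAddress(raw).getHostAddress(); // dotted-quad form
            System.out.println(encoded + " -> " + ip);                  // 34.16.13.207 and 10.128.0.57
        }
    }
}
```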
| node1 | 6.072s | 2025-11-26 03:32:29.277 | 31 | INFO | STARTUP | <main> | OSHealthChecker: | ||
PASSED - Clock Source Speed Check Report[callsPerSec=26309025]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=158480, randomLong=2443380264368142917, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=10160, randomLong=-412747575868487970, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1091009, data=35, exception=null]
OS Health Check Report - Complete (took 1022 ms)
| node1 | 6.101s | 2025-11-26 03:32:29.306 | 32 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node1 | 6.109s | 2025-11-26 03:32:29.314 | 33 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
LAST_ISS_ROUND null
| node1 | 6.110s | 2025-11-26 03:32:29.315 | 34 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node4 | 6.141s | 2025-11-26 03:32:29.346 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node0 | 6.176s | 2025-11-26 03:32:29.381 | 31 | INFO | STARTUP | <main> | OSHealthChecker: | ||
PASSED - Clock Source Speed Check Report[callsPerSec=26343437]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=309340, randomLong=2936499039936799331, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=9140, randomLong=-5925382868577396924, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1145590, data=35, exception=null]
OS Health Check Report - Complete (took 1022 ms)
| node1 | 6.190s | 2025-11-26 03:32:29.395 | 35 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IhANzw==", "port": 30124 }, { "ipAddressV4": "CoAAOQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "I8q0KA==", "port": 30125 }, { "ipAddressV4": "CoAAPQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "IilpIA==", "port": 30126 }, { "ipAddressV4": "CoAAOg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "aMbs8g==", "port": 30127 }, { "ipAddressV4": "CoAAOw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IkVH8g==", "port": 30128 }, { "ipAddressV4": "CoAAPA==", "port": 30128 }] }] } | |||||||||
| node0 | 6.206s | 2025-11-26 03:32:29.411 | 32 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node1 | 6.212s | 2025-11-26 03:32:29.417 | 36 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/1/ConsistencyTestLog.csv | |
| node1 | 6.213s | 2025-11-26 03:32:29.418 | 37 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node0 | 6.214s | 2025-11-26 03:32:29.419 | 33 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
LAST_ISS_ROUND null
| node0 | 6.215s | 2025-11-26 03:32:29.420 | 34 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node1 | 6.225s | 2025-11-26 03:32:29.430 | 38 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 25db5a13230118ba8b6bc8778dda9bc97d1bf00988e6dbc50ef8b0e21606c041ce5f51ca0df8a8b465c028d6cf9f9bf0
(root) VirtualMap state / rocket-draw-gasp-blanket
| node1 | 6.228s | 2025-11-26 03:32:29.433 | 40 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Starting the ReconnectController | |
| node4 | 6.232s | 2025-11-26 03:32:29.437 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 6.235s | 2025-11-26 03:32:29.440 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node4 | 6.271s | 2025-11-26 03:32:29.476 | 21 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node0 | 6.296s | 2025-11-26 03:32:29.501 | 35 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IhANzw==", "port": 30124 }, { "ipAddressV4": "CoAAOQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "I8q0KA==", "port": 30125 }, { "ipAddressV4": "CoAAPQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "IilpIA==", "port": 30126 }, { "ipAddressV4": "CoAAOg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "aMbs8g==", "port": 30127 }, { "ipAddressV4": "CoAAOw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IkVH8g==", "port": 30128 }, { "ipAddressV4": "CoAAPA==", "port": 30128 }] }] } | |||||||||
| node0 | 6.319s | 2025-11-26 03:32:29.524 | 36 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/0/ConsistencyTestLog.csv | |
| node0 | 6.319s | 2025-11-26 03:32:29.524 | 37 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node0 | 6.331s | 2025-11-26 03:32:29.536 | 38 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 25db5a13230118ba8b6bc8778dda9bc97d1bf00988e6dbc50ef8b0e21606c041ce5f51ca0df8a8b465c028d6cf9f9bf0
(root) VirtualMap state / rocket-draw-gasp-blanket
| node0 | 6.334s | 2025-11-26 03:32:29.539 | 40 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Starting the ReconnectController | |
| node2 | 6.342s | 2025-11-26 03:32:29.547 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 6.346s | 2025-11-26 03:32:29.551 | 27 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node2 | 6.354s | 2025-11-26 03:32:29.559 | 28 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node2 | 6.367s | 2025-11-26 03:32:29.572 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 6.370s | 2025-11-26 03:32:29.575 | 30 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 6.446s | 2025-11-26 03:32:29.651 | 41 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node1 | 6.452s | 2025-11-26 03:32:29.657 | 42 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node1 | 6.456s | 2025-11-26 03:32:29.661 | 43 | INFO | STARTUP | <<start-node-1>> | ConsistencyTestingToolMain: | init called in Main for node 1. | |
| node1 | 6.457s | 2025-11-26 03:32:29.662 | 44 | INFO | STARTUP | <<start-node-1>> | SwirldsPlatform: | Starting platform 1 | |
| node1 | 6.458s | 2025-11-26 03:32:29.663 | 45 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node1 | 6.461s | 2025-11-26 03:32:29.666 | 46 | INFO | STARTUP | <<start-node-1>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node1 | 6.463s | 2025-11-26 03:32:29.668 | 47 | INFO | STARTUP | <<start-node-1>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node1 | 6.463s | 2025-11-26 03:32:29.668 | 48 | INFO | STARTUP | <<start-node-1>> | InputWireChecks: | All input wires have been bound. | |
| node1 | 6.465s | 2025-11-26 03:32:29.670 | 49 | WARN | STARTUP | <<start-node-1>> | PcesFileTracker: | No preconsensus event files available | |
| node1 | 6.466s | 2025-11-26 03:32:29.671 | 50 | INFO | STARTUP | <<start-node-1>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node1 | 6.468s | 2025-11-26 03:32:29.673 | 51 | INFO | STARTUP | <<start-node-1>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node1 | 6.469s | 2025-11-26 03:32:29.674 | 52 | INFO | STARTUP | <<app: appMain 1>> | ConsistencyTestingToolMain: | run called in Main. | |
| node1 | 6.470s | 2025-11-26 03:32:29.675 | 53 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | StatusStateMachine: | Platform spent 184.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node1 | 6.476s | 2025-11-26 03:32:29.681 | 54 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | StatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node0 | 6.548s | 2025-11-26 03:32:29.753 | 41 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node0 | 6.552s | 2025-11-26 03:32:29.757 | 42 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node0 | 6.556s | 2025-11-26 03:32:29.761 | 43 | INFO | STARTUP | <<start-node-0>> | ConsistencyTestingToolMain: | init called in Main for node 0. | |
| node0 | 6.557s | 2025-11-26 03:32:29.762 | 44 | INFO | STARTUP | <<start-node-0>> | SwirldsPlatform: | Starting platform 0 | |
| node0 | 6.558s | 2025-11-26 03:32:29.763 | 45 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node0 | 6.561s | 2025-11-26 03:32:29.766 | 46 | INFO | STARTUP | <<start-node-0>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node0 | 6.562s | 2025-11-26 03:32:29.767 | 47 | INFO | STARTUP | <<start-node-0>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node0 | 6.562s | 2025-11-26 03:32:29.767 | 48 | INFO | STARTUP | <<start-node-0>> | InputWireChecks: | All input wires have been bound. | |
| node0 | 6.564s | 2025-11-26 03:32:29.769 | 49 | WARN | STARTUP | <<start-node-0>> | PcesFileTracker: | No preconsensus event files available | |
| node0 | 6.564s | 2025-11-26 03:32:29.769 | 50 | INFO | STARTUP | <<start-node-0>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node0 | 6.566s | 2025-11-26 03:32:29.771 | 51 | INFO | STARTUP | <<start-node-0>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node0 | 6.566s | 2025-11-26 03:32:29.771 | 52 | INFO | STARTUP | <<app: appMain 0>> | ConsistencyTestingToolMain: | run called in Main. | |
| node0 | 6.569s | 2025-11-26 03:32:29.774 | 53 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | StatusStateMachine: | Platform spent 182.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node0 | 6.573s | 2025-11-26 03:32:29.778 | 54 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | StatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node4 | 7.100s | 2025-11-26 03:32:30.305 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 7.101s | 2025-11-26 03:32:30.306 | 26 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node4 | 7.107s | 2025-11-26 03:32:30.312 | 28 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node4 | 7.116s | 2025-11-26 03:32:30.321 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 7.118s | 2025-11-26 03:32:30.323 | 30 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 7.517s | 2025-11-26 03:32:30.722 | 31 | INFO | STARTUP | <main> | OSHealthChecker: | ||
PASSED - Clock Source Speed Check Report[callsPerSec=26224140]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=352269, randomLong=5463234161918785817, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=24989, randomLong=-2723970612575351206, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1697533, data=35, exception=null]
OS Health Check Report - Complete (took 1026 ms)
| node2 | 7.552s | 2025-11-26 03:32:30.757 | 32 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node2 | 7.564s | 2025-11-26 03:32:30.769 | 33 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
LAST_ISS_ROUND null
| node2 | 7.567s | 2025-11-26 03:32:30.772 | 34 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node2 | 7.674s | 2025-11-26 03:32:30.879 | 35 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IhANzw==", "port": 30124 }, { "ipAddressV4": "CoAAOQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "I8q0KA==", "port": 30125 }, { "ipAddressV4": "CoAAPQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "IilpIA==", "port": 30126 }, { "ipAddressV4": "CoAAOg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "aMbs8g==", "port": 30127 }, { "ipAddressV4": "CoAAOw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IkVH8g==", "port": 30128 }, { "ipAddressV4": "CoAAPA==", "port": 30128 }] }] } | |||||||||
| node2 | 7.701s | 2025-11-26 03:32:30.906 | 36 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/2/ConsistencyTestLog.csv | |
| node2 | 7.702s | 2025-11-26 03:32:30.907 | 37 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node2 | 7.717s | 2025-11-26 03:32:30.922 | 38 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 25db5a13230118ba8b6bc8778dda9bc97d1bf00988e6dbc50ef8b0e21606c041ce5f51ca0df8a8b465c028d6cf9f9bf0 (root) VirtualMap state / rocket-draw-gasp-blanket | |||||||||
| node2 | 7.721s | 2025-11-26 03:32:30.926 | 40 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Starting the ReconnectController | |
| node2 | 7.951s | 2025-11-26 03:32:31.156 | 41 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node2 | 7.956s | 2025-11-26 03:32:31.161 | 42 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node2 | 7.961s | 2025-11-26 03:32:31.166 | 43 | INFO | STARTUP | <<start-node-2>> | ConsistencyTestingToolMain: | init called in Main for node 2. | |
| node2 | 7.962s | 2025-11-26 03:32:31.167 | 44 | INFO | STARTUP | <<start-node-2>> | SwirldsPlatform: | Starting platform 2 | |
| node2 | 7.963s | 2025-11-26 03:32:31.168 | 45 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node2 | 7.967s | 2025-11-26 03:32:31.172 | 46 | INFO | STARTUP | <<start-node-2>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node2 | 7.968s | 2025-11-26 03:32:31.173 | 47 | INFO | STARTUP | <<start-node-2>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node2 | 7.968s | 2025-11-26 03:32:31.173 | 48 | INFO | STARTUP | <<start-node-2>> | InputWireChecks: | All input wires have been bound. | |
| node2 | 7.971s | 2025-11-26 03:32:31.176 | 49 | WARN | STARTUP | <<start-node-2>> | PcesFileTracker: | No preconsensus event files available | |
| node2 | 7.971s | 2025-11-26 03:32:31.176 | 50 | INFO | STARTUP | <<start-node-2>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node2 | 7.973s | 2025-11-26 03:32:31.178 | 51 | INFO | STARTUP | <<start-node-2>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node2 | 7.974s | 2025-11-26 03:32:31.179 | 52 | INFO | STARTUP | <<app: appMain 2>> | ConsistencyTestingToolMain: | run called in Main. | |
| node2 | 7.977s | 2025-11-26 03:32:31.182 | 53 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | StatusStateMachine: | Platform spent 195.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node2 | 7.982s | 2025-11-26 03:32:31.187 | 54 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | StatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node4 | 8.262s | 2025-11-26 03:32:31.467 | 31 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26226720] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=258210, randomLong=-3497036205999131404, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=31711, randomLong=6737815953284256440, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1829860, data=35, exception=null] OS Health Check Report - Complete (took 1033 ms) | |||||||||
| node4 | 8.302s | 2025-11-26 03:32:31.507 | 32 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node4 | 8.315s | 2025-11-26 03:32:31.520 | 33 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node4 | 8.317s | 2025-11-26 03:32:31.522 | 34 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node4 | 8.438s | 2025-11-26 03:32:31.643 | 35 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IhANzw==", "port": 30124 }, { "ipAddressV4": "CoAAOQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "I8q0KA==", "port": 30125 }, { "ipAddressV4": "CoAAPQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "IilpIA==", "port": 30126 }, { "ipAddressV4": "CoAAOg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "aMbs8g==", "port": 30127 }, { "ipAddressV4": "CoAAOw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IkVH8g==", "port": 30128 }, { "ipAddressV4": "CoAAPA==", "port": 30128 }] }] } | |||||||||
| node4 | 8.472s | 2025-11-26 03:32:31.677 | 36 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv | |
| node4 | 8.473s | 2025-11-26 03:32:31.678 | 37 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node4 | 8.492s | 2025-11-26 03:32:31.697 | 38 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 25db5a13230118ba8b6bc8778dda9bc97d1bf00988e6dbc50ef8b0e21606c041ce5f51ca0df8a8b465c028d6cf9f9bf0 (root) VirtualMap state / rocket-draw-gasp-blanket | |||||||||
| node4 | 8.498s | 2025-11-26 03:32:31.703 | 40 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Starting the ReconnectController | |
| node4 | 8.756s | 2025-11-26 03:32:31.961 | 41 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node4 | 8.762s | 2025-11-26 03:32:31.967 | 42 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node4 | 8.769s | 2025-11-26 03:32:31.974 | 43 | INFO | STARTUP | <<start-node-4>> | ConsistencyTestingToolMain: | init called in Main for node 4. | |
| node4 | 8.769s | 2025-11-26 03:32:31.974 | 44 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | Starting platform 4 | |
| node4 | 8.771s | 2025-11-26 03:32:31.976 | 45 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node4 | 8.774s | 2025-11-26 03:32:31.979 | 46 | INFO | STARTUP | <<start-node-4>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node4 | 8.775s | 2025-11-26 03:32:31.980 | 47 | INFO | STARTUP | <<start-node-4>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node4 | 8.776s | 2025-11-26 03:32:31.981 | 48 | INFO | STARTUP | <<start-node-4>> | InputWireChecks: | All input wires have been bound. | |
| node4 | 8.778s | 2025-11-26 03:32:31.983 | 49 | WARN | STARTUP | <<start-node-4>> | PcesFileTracker: | No preconsensus event files available | |
| node4 | 8.778s | 2025-11-26 03:32:31.983 | 50 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node4 | 8.780s | 2025-11-26 03:32:31.985 | 51 | INFO | STARTUP | <<start-node-4>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node4 | 8.781s | 2025-11-26 03:32:31.986 | 52 | INFO | STARTUP | <<app: appMain 4>> | ConsistencyTestingToolMain: | run called in Main. | |
| node4 | 8.784s | 2025-11-26 03:32:31.989 | 53 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 206.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node4 | 8.790s | 2025-11-26 03:32:31.995 | 54 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node3 | 8.952s | 2025-11-26 03:32:32.157 | 55 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting3.csv' ] | |
| node3 | 8.954s | 2025-11-26 03:32:32.159 | 56 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node1 | 9.474s | 2025-11-26 03:32:32.679 | 55 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting1.csv' ] | |
| node1 | 9.478s | 2025-11-26 03:32:32.683 | 56 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node0 | 9.573s | 2025-11-26 03:32:32.778 | 55 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting0.csv' ] | |
| node0 | 9.576s | 2025-11-26 03:32:32.781 | 56 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node2 | 10.977s | 2025-11-26 03:32:34.182 | 55 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting2.csv' ] | |
| node2 | 10.980s | 2025-11-26 03:32:34.185 | 56 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node4 | 11.784s | 2025-11-26 03:32:34.989 | 55 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ] | |
| node4 | 11.787s | 2025-11-26 03:32:34.992 | 56 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node3 | 16.051s | 2025-11-26 03:32:39.256 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | StatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node1 | 16.564s | 2025-11-26 03:32:39.769 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | StatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node0 | 16.665s | 2025-11-26 03:32:39.870 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-6> | StatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node1 | 17.462s | 2025-11-26 03:32:40.667 | 59 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node0 | 17.512s | 2025-11-26 03:32:40.717 | 59 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node3 | 17.605s | 2025-11-26 03:32:40.810 | 58 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | StatusStateMachine: | Platform spent 1.6 s in CHECKING. Now in ACTIVE | |
| node3 | 17.607s | 2025-11-26 03:32:40.812 | 60 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node2 | 17.715s | 2025-11-26 03:32:40.920 | 58 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node4 | 17.739s | 2025-11-26 03:32:40.944 | 58 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node1 | 17.816s | 2025-11-26 03:32:41.021 | 74 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1 | |
| node1 | 17.818s | 2025-11-26 03:32:41.023 | 75 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node3 | 17.828s | 2025-11-26 03:32:41.033 | 75 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1 | |
| node3 | 17.830s | 2025-11-26 03:32:41.035 | 76 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node0 | 17.863s | 2025-11-26 03:32:41.068 | 74 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1 | |
| node0 | 17.865s | 2025-11-26 03:32:41.070 | 75 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node4 | 17.875s | 2025-11-26 03:32:41.080 | 73 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1 | |
| node4 | 17.877s | 2025-11-26 03:32:41.082 | 74 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node2 | 17.980s | 2025-11-26 03:32:41.185 | 73 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1 | |
| node2 | 17.983s | 2025-11-26 03:32:41.188 | 74 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node3 | 18.055s | 2025-11-26 03:32:41.260 | 106 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node3 | 18.057s | 2025-11-26 03:32:41.262 | 107 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-11-26T03:32:39.451287655Z Next consensus number: 1 Legacy running event hash: 89bbeb8631d40976f12e5a28b170c9bd903bf58bb4f65e236eb19e74ec790aca46bda0a6ce84094d1f15a6e652569fbd Legacy running event mnemonic: silly-monster-dentist-conduct Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: 50be2df9daf10a0c69c8bf0fe340d165d478eb5642f03066a060b16ddddfbab0f1891e65081b00f91fb6d7b17e20b092 (root) VirtualMap state / blind-sand-peanut-dose | |||||||||
| node1 | 18.058s | 2025-11-26 03:32:41.263 | 109 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node1 | 18.062s | 2025-11-26 03:32:41.267 | 110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-11-26T03:32:39.451287655Z Next consensus number: 1 Legacy running event hash: 89bbeb8631d40976f12e5a28b170c9bd903bf58bb4f65e236eb19e74ec790aca46bda0a6ce84094d1f15a6e652569fbd Legacy running event mnemonic: silly-monster-dentist-conduct Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: 50be2df9daf10a0c69c8bf0fe340d165d478eb5642f03066a060b16ddddfbab0f1891e65081b00f91fb6d7b17e20b092 (root) VirtualMap state / blind-sand-peanut-dose | |||||||||
| node2 | 18.080s | 2025-11-26 03:32:41.285 | 88 | INFO | PLATFORM_STATUS | <platformForkJoinThread-8> | StatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node3 | 18.090s | 2025-11-26 03:32:41.295 | 108 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+32+39.289745347Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 18.091s | 2025-11-26 03:32:41.296 | 109 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+32+39.289745347Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 18.091s | 2025-11-26 03:32:41.296 | 110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 18.092s | 2025-11-26 03:32:41.297 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 18.096s | 2025-11-26 03:32:41.301 | 109 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node1 | 18.096s | 2025-11-26 03:32:41.301 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+32+39.459245356Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 18.096s | 2025-11-26 03:32:41.301 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+32+39.459245356Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 18.097s | 2025-11-26 03:32:41.302 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 18.097s | 2025-11-26 03:32:41.302 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 18.098s | 2025-11-26 03:32:41.303 | 114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 18.099s | 2025-11-26 03:32:41.304 | 110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-11-26T03:32:39.451287655Z Next consensus number: 1 Legacy running event hash: 89bbeb8631d40976f12e5a28b170c9bd903bf58bb4f65e236eb19e74ec790aca46bda0a6ce84094d1f15a6e652569fbd Legacy running event mnemonic: silly-monster-dentist-conduct Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: 50be2df9daf10a0c69c8bf0fe340d165d478eb5642f03066a060b16ddddfbab0f1891e65081b00f91fb6d7b17e20b092 (root) VirtualMap state / blind-sand-peanut-dose | |||||||||
| node1 | 18.103s | 2025-11-26 03:32:41.308 | 115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 18.112s | 2025-11-26 03:32:41.317 | 112 | INFO | PLATFORM_STATUS | <platformForkJoinThread-6> | StatusStateMachine: | Platform spent 1.4 s in CHECKING. Now in ACTIVE | |
| node1 | 18.113s | 2025-11-26 03:32:41.318 | 117 | INFO | PLATFORM_STATUS | <platformForkJoinThread-5> | StatusStateMachine: | Platform spent 1.5 s in CHECKING. Now in ACTIVE | |
| node4 | 18.130s | 2025-11-26 03:32:41.335 | 107 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node0 | 18.132s | 2025-11-26 03:32:41.337 | 114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+32+39.512332205Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 18.132s | 2025-11-26 03:32:41.337 | 115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+32+39.512332205Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 18.133s | 2025-11-26 03:32:41.338 | 116 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 18.133s | 2025-11-26 03:32:41.338 | 108 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-11-26T03:32:39.451287655Z Next consensus number: 1 Legacy running event hash: 89bbeb8631d40976f12e5a28b170c9bd903bf58bb4f65e236eb19e74ec790aca46bda0a6ce84094d1f15a6e652569fbd Legacy running event mnemonic: silly-monster-dentist-conduct Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: 50be2df9daf10a0c69c8bf0fe340d165d478eb5642f03066a060b16ddddfbab0f1891e65081b00f91fb6d7b17e20b092 (root) VirtualMap state / blind-sand-peanut-dose | |||||||||
| node0 | 18.134s | 2025-11-26 03:32:41.339 | 117 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 18.139s | 2025-11-26 03:32:41.344 | 118 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 18.169s | 2025-11-26 03:32:41.374 | 109 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+32+39.660748371Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 18.169s | 2025-11-26 03:32:41.374 | 110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+32+39.660748371Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 18.170s | 2025-11-26 03:32:41.375 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 18.171s | 2025-11-26 03:32:41.376 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 18.178s | 2025-11-26 03:32:41.383 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 18.252s | 2025-11-26 03:32:41.457 | 109 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1 | |
| node2 | 18.256s | 2025-11-26 03:32:41.461 | 110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-11-26T03:32:39.451287655Z Next consensus number: 1 Legacy running event hash: 89bbeb8631d40976f12e5a28b170c9bd903bf58bb4f65e236eb19e74ec790aca46bda0a6ce84094d1f15a6e652569fbd Legacy running event mnemonic: silly-monster-dentist-conduct Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: 50be2df9daf10a0c69c8bf0fe340d165d478eb5642f03066a060b16ddddfbab0f1891e65081b00f91fb6d7b17e20b092 (root) VirtualMap state / blind-sand-peanut-dose | |||||||||
| node2 | 18.297s | 2025-11-26 03:32:41.502 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+32+39.463379624Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 18.298s | 2025-11-26 03:32:41.503 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+32+39.463379624Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 18.298s | 2025-11-26 03:32:41.503 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 18.299s | 2025-11-26 03:32:41.504 | 114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 18.306s | 2025-11-26 03:32:41.511 | 115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 18.877s | 2025-11-26 03:32:42.082 | 134 | INFO | PLATFORM_STATUS | <platformForkJoinThread-6> | StatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node2 | 19.491s | 2025-11-26 03:32:42.696 | 145 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | StatusStateMachine: | Platform spent 1.4 s in CHECKING. Now in ACTIVE | |
| node4 | 20.904s | 2025-11-26 03:32:44.109 | 168 | INFO | PLATFORM_STATUS | <platformForkJoinThread-7> | StatusStateMachine: | Platform spent 2.0 s in CHECKING. Now in ACTIVE | |
| node3 | 38.276s | 2025-11-26 03:33:01.481 | 589 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 46 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 38.291s | 2025-11-26 03:33:01.496 | 583 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 46 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 38.298s | 2025-11-26 03:33:01.503 | 583 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 46 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 38.312s | 2025-11-26 03:33:01.517 | 597 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 46 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 38.315s | 2025-11-26 03:33:01.520 | 584 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 46 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 38.457s | 2025-11-26 03:33:01.662 | 592 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 46 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/46 | |
| node3 | 38.458s | 2025-11-26 03:33:01.663 | 593 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 46 | |
| node0 | 38.471s | 2025-11-26 03:33:01.676 | 600 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 46 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/46 | |
| node0 | 38.472s | 2025-11-26 03:33:01.677 | 601 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 46 | |
| node1 | 38.510s | 2025-11-26 03:33:01.715 | 586 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 46 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/46 | |
| node1 | 38.512s | 2025-11-26 03:33:01.717 | 587 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 46 | |
| node3 | 38.539s | 2025-11-26 03:33:01.744 | 624 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 46 | |
| node3 | 38.541s | 2025-11-26 03:33:01.746 | 625 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 46 Timestamp: 2025-11-26T03:33:00.374037193Z Next consensus number: 1548 Legacy running event hash: 533ae0f8a86155bb8d3c33515c81151abe6945a5478ae0d36931dcc614a14dce5ffe56b2903d602a40a39b5471a84210 Legacy running event mnemonic: guard-three-view-resist Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 767280198 Root hash: 0424ee3fcd49795c95a898a03bd97a01040f935320b43455b2dae08027096c7da08e2f6472bc30fa9f00e57c4ade4b53 (root) VirtualMap state / loyal-deal-finger-clog | |||||||||
| node2 | 38.542s | 2025-11-26 03:33:01.747 | 586 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 46 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/46 | |
| node2 | 38.543s | 2025-11-26 03:33:01.748 | 587 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 46 | |
| node3 | 38.550s | 2025-11-26 03:33:01.755 | 634 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+32+39.289745347Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 38.550s | 2025-11-26 03:33:01.755 | 635 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 19 File: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+32+39.289745347Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 38.551s | 2025-11-26 03:33:01.756 | 636 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 38.552s | 2025-11-26 03:33:01.757 | 637 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 38.553s | 2025-11-26 03:33:01.758 | 638 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 46 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/46 {"round":46,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/46/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 38.553s | 2025-11-26 03:33:01.758 | 587 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 46 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/46 | |
| node4 | 38.554s | 2025-11-26 03:33:01.759 | 588 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 46 | |
| node0 | 38.564s | 2025-11-26 03:33:01.769 | 640 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 46 | |
| node0 | 38.566s | 2025-11-26 03:33:01.771 | 641 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 46 Timestamp: 2025-11-26T03:33:00.374037193Z Next consensus number: 1548 Legacy running event hash: 533ae0f8a86155bb8d3c33515c81151abe6945a5478ae0d36931dcc614a14dce5ffe56b2903d602a40a39b5471a84210 Legacy running event mnemonic: guard-three-view-resist Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 767280198 Root hash: 0424ee3fcd49795c95a898a03bd97a01040f935320b43455b2dae08027096c7da08e2f6472bc30fa9f00e57c4ade4b53 (root) VirtualMap state / loyal-deal-finger-clog | |||||||||
| node0 | 38.577s | 2025-11-26 03:33:01.782 | 642 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+32+39.512332205Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 38.578s | 2025-11-26 03:33:01.783 | 643 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 19 File: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+32+39.512332205Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 38.579s | 2025-11-26 03:33:01.784 | 644 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 38.581s | 2025-11-26 03:33:01.786 | 645 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 38.581s | 2025-11-26 03:33:01.786 | 646 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 46 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/46 {"round":46,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/46/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 38.597s | 2025-11-26 03:33:01.802 | 618 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 46 | |
| node1 | 38.600s | 2025-11-26 03:33:01.805 | 619 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 46 Timestamp: 2025-11-26T03:33:00.374037193Z Next consensus number: 1548 Legacy running event hash: 533ae0f8a86155bb8d3c33515c81151abe6945a5478ae0d36931dcc614a14dce5ffe56b2903d602a40a39b5471a84210 Legacy running event mnemonic: guard-three-view-resist Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 767280198 Root hash: 0424ee3fcd49795c95a898a03bd97a01040f935320b43455b2dae08027096c7da08e2f6472bc30fa9f00e57c4ade4b53 (root) VirtualMap state / loyal-deal-finger-clog | |||||||||
| node1 | 38.608s | 2025-11-26 03:33:01.813 | 620 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+32+39.459245356Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 38.609s | 2025-11-26 03:33:01.814 | 621 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 19 File: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+32+39.459245356Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 38.609s | 2025-11-26 03:33:01.814 | 622 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 38.611s | 2025-11-26 03:33:01.816 | 623 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 38.611s | 2025-11-26 03:33:01.816 | 624 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 46 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/46 {"round":46,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/46/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 38.636s | 2025-11-26 03:33:01.841 | 626 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 46 | |
| node2 | 38.639s | 2025-11-26 03:33:01.844 | 627 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 46 Timestamp: 2025-11-26T03:33:00.374037193Z Next consensus number: 1548 Legacy running event hash: 533ae0f8a86155bb8d3c33515c81151abe6945a5478ae0d36931dcc614a14dce5ffe56b2903d602a40a39b5471a84210 Legacy running event mnemonic: guard-three-view-resist Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 767280198 Root hash: 0424ee3fcd49795c95a898a03bd97a01040f935320b43455b2dae08027096c7da08e2f6472bc30fa9f00e57c4ade4b53 (root) VirtualMap state / loyal-deal-finger-clog | |||||||||
| node4 | 38.640s | 2025-11-26 03:33:01.845 | 627 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 46 | |
| node4 | 38.642s | 2025-11-26 03:33:01.847 | 628 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 46 Timestamp: 2025-11-26T03:33:00.374037193Z Next consensus number: 1548 Legacy running event hash: 533ae0f8a86155bb8d3c33515c81151abe6945a5478ae0d36931dcc614a14dce5ffe56b2903d602a40a39b5471a84210 Legacy running event mnemonic: guard-three-view-resist Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 767280198 Root hash: 0424ee3fcd49795c95a898a03bd97a01040f935320b43455b2dae08027096c7da08e2f6472bc30fa9f00e57c4ade4b53 (root) VirtualMap state / loyal-deal-finger-clog | |||||||||
| node2 | 38.650s | 2025-11-26 03:33:01.855 | 628 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+32+39.463379624Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 38.650s | 2025-11-26 03:33:01.855 | 629 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 19 File: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+32+39.463379624Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 38.652s | 2025-11-26 03:33:01.857 | 630 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 38.653s | 2025-11-26 03:33:01.858 | 631 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 38.654s | 2025-11-26 03:33:01.859 | 632 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 46 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/46 {"round":46,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/46/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 38.654s | 2025-11-26 03:33:01.859 | 629 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+32+39.660748371Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 38.654s | 2025-11-26 03:33:01.859 | 630 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 19 File: data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+32+39.660748371Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 38.655s | 2025-11-26 03:33:01.860 | 631 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 38.657s | 2025-11-26 03:33:01.862 | 632 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 38.658s | 2025-11-26 03:33:01.863 | 633 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 46 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/46 {"round":46,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/46/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 1m 39.377s | 2025-11-26 03:34:02.582 | 2059 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 176 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 1m 39.392s | 2025-11-26 03:34:02.597 | 2083 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 176 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 1m 39.402s | 2025-11-26 03:34:02.607 | 2025 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 176 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 1m 39.419s | 2025-11-26 03:34:02.624 | 2063 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 176 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 1m 39.456s | 2025-11-26 03:34:02.661 | 2052 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 176 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 1m 39.560s | 2025-11-26 03:34:02.765 | 2095 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 176 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/176 | |
| node0 | 1m 39.561s | 2025-11-26 03:34:02.766 | 2096 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 176 | |
| node1 | 1m 39.574s | 2025-11-26 03:34:02.779 | 2037 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 176 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/176 | |
| node1 | 1m 39.575s | 2025-11-26 03:34:02.780 | 2038 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 176 | |
| node3 | 1m 39.585s | 2025-11-26 03:34:02.790 | 2071 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 176 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/176 | |
| node3 | 1m 39.585s | 2025-11-26 03:34:02.790 | 2072 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 176 | |
| node0 | 1m 39.650s | 2025-11-26 03:34:02.855 | 2145 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 176 | |
| node0 | 1m 39.652s | 2025-11-26 03:34:02.857 | 2146 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 176 Timestamp: 2025-11-26T03:34:00.381424Z Next consensus number: 6348 Legacy running event hash: 16344f648cd08bb83f3be4dc825ed39639badab4ab7d8291709c1ee77eeca2c6a1c0fb63298e67ab4d79b0574335ca88 Legacy running event mnemonic: shoot-uncle-gravity-math Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -287316284 Root hash: 48482eb484fd0cb3fc143875a4d7dc1710861d45301bbbf9d32bf1da53c5455b6fac01ee46c070e5e3d6c8df4a88168d (root) VirtualMap state / animal-story-gather-nose | |||||||||
| node0 | 1m 39.659s | 2025-11-26 03:34:02.864 | 2147 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+32+39.512332205Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 1m 39.660s | 2025-11-26 03:34:02.865 | 2148 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 149 File: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+32+39.512332205Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 1m 39.660s | 2025-11-26 03:34:02.865 | 2149 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 1m 39.664s | 2025-11-26 03:34:02.869 | 2150 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 1m 39.665s | 2025-11-26 03:34:02.870 | 2151 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 176 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/176 {"round":176,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/176/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 1m 39.666s | 2025-11-26 03:34:02.871 | 2097 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 176 | |
| node1 | 1m 39.668s | 2025-11-26 03:34:02.873 | 2098 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 176 Timestamp: 2025-11-26T03:34:00.381424Z Next consensus number: 6348 Legacy running event hash: 16344f648cd08bb83f3be4dc825ed39639badab4ab7d8291709c1ee77eeca2c6a1c0fb63298e67ab4d79b0574335ca88 Legacy running event mnemonic: shoot-uncle-gravity-math Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -287316284 Root hash: 48482eb484fd0cb3fc143875a4d7dc1710861d45301bbbf9d32bf1da53c5455b6fac01ee46c070e5e3d6c8df4a88168d (root) VirtualMap state / animal-story-gather-nose | |||||||||
| node3 | 1m 39.679s | 2025-11-26 03:34:02.884 | 2109 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 176 | |
| node1 | 1m 39.680s | 2025-11-26 03:34:02.885 | 2099 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+32+39.459245356Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 1m 39.680s | 2025-11-26 03:34:02.885 | 2100 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 149 File: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+32+39.459245356Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 1m 39.680s | 2025-11-26 03:34:02.885 | 2101 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 1m 39.681s | 2025-11-26 03:34:02.886 | 2110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 176 Timestamp: 2025-11-26T03:34:00.381424Z Next consensus number: 6348 Legacy running event hash: 16344f648cd08bb83f3be4dc825ed39639badab4ab7d8291709c1ee77eeca2c6a1c0fb63298e67ab4d79b0574335ca88 Legacy running event mnemonic: shoot-uncle-gravity-math Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -287316284 Root hash: 48482eb484fd0cb3fc143875a4d7dc1710861d45301bbbf9d32bf1da53c5455b6fac01ee46c070e5e3d6c8df4a88168d (root) VirtualMap state / animal-story-gather-nose | |||||||||
| node1 | 1m 39.685s | 2025-11-26 03:34:02.890 | 2102 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 1m 39.686s | 2025-11-26 03:34:02.891 | 2103 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 176 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/176 {"round":176,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/176/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 1m 39.689s | 2025-11-26 03:34:02.894 | 2111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+32+39.289745347Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 1m 39.689s | 2025-11-26 03:34:02.894 | 2112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 149 File: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+32+39.289745347Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 1m 39.690s | 2025-11-26 03:34:02.895 | 2113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 1m 39.694s | 2025-11-26 03:34:02.899 | 2114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 1m 39.695s | 2025-11-26 03:34:02.900 | 2115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 176 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/176 {"round":176,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/176/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 1m 39.733s | 2025-11-26 03:34:02.938 | 2064 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 176 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/176 | |
| node4 | 1m 39.734s | 2025-11-26 03:34:02.939 | 2065 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 176 | |
| node2 | 1m 39.778s | 2025-11-26 03:34:02.983 | 2075 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 176 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/176 | |
| node2 | 1m 39.779s | 2025-11-26 03:34:02.984 | 2076 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 176 | |
| node4 | 1m 39.820s | 2025-11-26 03:34:03.025 | 2124 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 176 | |
| node4 | 1m 39.822s | 2025-11-26 03:34:03.027 | 2125 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 176 Timestamp: 2025-11-26T03:34:00.381424Z Next consensus number: 6348 Legacy running event hash: 16344f648cd08bb83f3be4dc825ed39639badab4ab7d8291709c1ee77eeca2c6a1c0fb63298e67ab4d79b0574335ca88 Legacy running event mnemonic: shoot-uncle-gravity-math Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -287316284 Root hash: 48482eb484fd0cb3fc143875a4d7dc1710861d45301bbbf9d32bf1da53c5455b6fac01ee46c070e5e3d6c8df4a88168d (root) VirtualMap state / animal-story-gather-nose | |||||||||
| node4 | 1m 39.831s | 2025-11-26 03:34:03.036 | 2126 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+32+39.660748371Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 1m 39.832s | 2025-11-26 03:34:03.037 | 2127 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 149 File: data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+32+39.660748371Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 1m 39.832s | 2025-11-26 03:34:03.037 | 2128 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 1m 39.837s | 2025-11-26 03:34:03.042 | 2129 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 1m 39.837s | 2025-11-26 03:34:03.042 | 2130 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 176 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/176 {"round":176,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/176/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 1m 39.868s | 2025-11-26 03:34:03.073 | 2116 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 176 | |
| node2 | 1m 39.870s | 2025-11-26 03:34:03.075 | 2117 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 176 Timestamp: 2025-11-26T03:34:00.381424Z Next consensus number: 6348 Legacy running event hash: 16344f648cd08bb83f3be4dc825ed39639badab4ab7d8291709c1ee77eeca2c6a1c0fb63298e67ab4d79b0574335ca88 Legacy running event mnemonic: shoot-uncle-gravity-math Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -287316284 Root hash: 48482eb484fd0cb3fc143875a4d7dc1710861d45301bbbf9d32bf1da53c5455b6fac01ee46c070e5e3d6c8df4a88168d (root) VirtualMap state / animal-story-gather-nose | |||||||||
| node2 | 1m 39.879s | 2025-11-26 03:34:03.084 | 2118 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+32+39.463379624Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 1m 39.880s | 2025-11-26 03:34:03.085 | 2119 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 149 File: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+32+39.463379624Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 1m 39.880s | 2025-11-26 03:34:03.085 | 2120 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 1m 39.884s | 2025-11-26 03:34:03.089 | 2121 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 1m 39.885s | 2025-11-26 03:34:03.090 | 2122 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 176 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/176 {"round":176,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/176/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 2m 37.736s | 2025-11-26 03:35:00.941 | 3597 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 312 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 2m 37.783s | 2025-11-26 03:35:00.988 | 3602 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 312 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 2m 37.800s | 2025-11-26 03:35:01.005 | 3587 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 312 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 2m 37.800s | 2025-11-26 03:35:01.005 | 3545 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 312 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 2m 37.807s | 2025-11-26 03:35:01.012 | 3611 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 312 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 2m 38.067s | 2025-11-26 03:35:01.272 | 3605 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 312 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/312 | |
| node4 | 2m 38.067s | 2025-11-26 03:35:01.272 | 3606 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 312 | |
| node1 | 2m 38.074s | 2025-11-26 03:35:01.279 | 3548 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 312 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/312 | |
| node1 | 2m 38.075s | 2025-11-26 03:35:01.280 | 3549 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 312 | |
| node2 | 2m 38.076s | 2025-11-26 03:35:01.281 | 3614 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 312 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/312 | |
| node2 | 2m 38.077s | 2025-11-26 03:35:01.282 | 3615 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 312 | |
| node0 | 2m 38.144s | 2025-11-26 03:35:01.349 | 3590 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 312 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/312 | |
| node0 | 2m 38.145s | 2025-11-26 03:35:01.350 | 3591 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 312 | |
| node3 | 2m 38.148s | 2025-11-26 03:35:01.353 | 3610 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 312 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/312 | |
| node4 | 2m 38.148s | 2025-11-26 03:35:01.353 | 3637 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 312 | |
| node3 | 2m 38.149s | 2025-11-26 03:35:01.354 | 3611 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 312 | |
| node4 | 2m 38.150s | 2025-11-26 03:35:01.355 | 3638 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 312 Timestamp: 2025-11-26T03:35:00.076753450Z Next consensus number: 11072 Legacy running event hash: 379a4b01fc74660a79fc70bec58d01a76f4e0adfeb1873ee189d6a37fe456dccff34b399eaec0a19fecda5936b09866e Legacy running event mnemonic: viable-gasp-immense-scout Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 879399162 Root hash: 27780f8628ea43cf419ef3810ccdfb838f21fc8b6bfe485441b251d15c9ad4d0f1efdff8e47b67137b01e631c7db7368 (root) VirtualMap state / upper-valid-bubble-ketchup | |||||||||
| node1 | 2m 38.156s | 2025-11-26 03:35:01.361 | 3580 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 312 | |
| node4 | 2m 38.157s | 2025-11-26 03:35:01.362 | 3639 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+32+39.660748371Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 2m 38.157s | 2025-11-26 03:35:01.362 | 3640 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 285 File: data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+32+39.660748371Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 2m 38.157s | 2025-11-26 03:35:01.362 | 3641 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 2m 38.158s | 2025-11-26 03:35:01.363 | 3581 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 312 Timestamp: 2025-11-26T03:35:00.076753450Z Next consensus number: 11072 Legacy running event hash: 379a4b01fc74660a79fc70bec58d01a76f4e0adfeb1873ee189d6a37fe456dccff34b399eaec0a19fecda5936b09866e Legacy running event mnemonic: viable-gasp-immense-scout Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 879399162 Root hash: 27780f8628ea43cf419ef3810ccdfb838f21fc8b6bfe485441b251d15c9ad4d0f1efdff8e47b67137b01e631c7db7368 (root) VirtualMap state / upper-valid-bubble-ketchup | |||||||||
| node2 | 2m 38.164s | 2025-11-26 03:35:01.369 | 3646 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 312 | |
| node4 | 2m 38.165s | 2025-11-26 03:35:01.370 | 3642 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 2m 38.166s | 2025-11-26 03:35:01.371 | 3647 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 312 Timestamp: 2025-11-26T03:35:00.076753450Z Next consensus number: 11072 Legacy running event hash: 379a4b01fc74660a79fc70bec58d01a76f4e0adfeb1873ee189d6a37fe456dccff34b399eaec0a19fecda5936b09866e Legacy running event mnemonic: viable-gasp-immense-scout Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 879399162 Root hash: 27780f8628ea43cf419ef3810ccdfb838f21fc8b6bfe485441b251d15c9ad4d0f1efdff8e47b67137b01e631c7db7368 (root) VirtualMap state / upper-valid-bubble-ketchup | |||||||||
| node4 | 2m 38.166s | 2025-11-26 03:35:01.371 | 3643 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 312 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/312 {"round":312,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/312/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 2m 38.167s | 2025-11-26 03:35:01.372 | 3582 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+32+39.459245356Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 2m 38.167s | 2025-11-26 03:35:01.372 | 3583 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 285 File: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+32+39.459245356Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 2m 38.167s | 2025-11-26 03:35:01.372 | 3584 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 2m 38.173s | 2025-11-26 03:35:01.378 | 3656 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+32+39.463379624Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 2m 38.174s | 2025-11-26 03:35:01.379 | 3657 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 285 File: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+32+39.463379624Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 2m 38.174s | 2025-11-26 03:35:01.379 | 3658 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 2m 38.175s | 2025-11-26 03:35:01.380 | 3585 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 2m 38.176s | 2025-11-26 03:35:01.381 | 3586 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 312 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/312 {"round":312,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/312/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 2m 38.182s | 2025-11-26 03:35:01.387 | 3659 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 2m 38.182s | 2025-11-26 03:35:01.387 | 3660 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 312 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/312 {"round":312,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/312/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 2m 38.224s | 2025-11-26 03:35:01.429 | 3648 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 312 | |
| node3 | 2m 38.226s | 2025-11-26 03:35:01.431 | 3650 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 312 Timestamp: 2025-11-26T03:35:00.076753450Z Next consensus number: 11072 Legacy running event hash: 379a4b01fc74660a79fc70bec58d01a76f4e0adfeb1873ee189d6a37fe456dccff34b399eaec0a19fecda5936b09866e Legacy running event mnemonic: viable-gasp-immense-scout Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 879399162 Root hash: 27780f8628ea43cf419ef3810ccdfb838f21fc8b6bfe485441b251d15c9ad4d0f1efdff8e47b67137b01e631c7db7368 (root) VirtualMap state / upper-valid-bubble-ketchup | |||||||||
| node3 | 2m 38.233s | 2025-11-26 03:35:01.438 | 3653 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+32+39.289745347Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 2m 38.233s | 2025-11-26 03:35:01.438 | 3654 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 285 File: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+32+39.289745347Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 2m 38.233s | 2025-11-26 03:35:01.438 | 3655 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 2m 38.241s | 2025-11-26 03:35:01.446 | 3656 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 2m 38.241s | 2025-11-26 03:35:01.446 | 3657 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 312 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/312 {"round":312,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/312/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 2m 38.261s | 2025-11-26 03:35:01.466 | 3633 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 312 | |
| node0 | 2m 38.263s | 2025-11-26 03:35:01.468 | 3634 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 312 Timestamp: 2025-11-26T03:35:00.076753450Z Next consensus number: 11072 Legacy running event hash: 379a4b01fc74660a79fc70bec58d01a76f4e0adfeb1873ee189d6a37fe456dccff34b399eaec0a19fecda5936b09866e Legacy running event mnemonic: viable-gasp-immense-scout Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 879399162 Root hash: 27780f8628ea43cf419ef3810ccdfb838f21fc8b6bfe485441b251d15c9ad4d0f1efdff8e47b67137b01e631c7db7368 (root) VirtualMap state / upper-valid-bubble-ketchup | |||||||||
| node0 | 2m 38.271s | 2025-11-26 03:35:01.476 | 3635 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+32+39.512332205Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 2m 38.271s | 2025-11-26 03:35:01.476 | 3636 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 285 File: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+32+39.512332205Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 2m 38.271s | 2025-11-26 03:35:01.476 | 3637 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 2m 38.279s | 2025-11-26 03:35:01.484 | 3638 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 2m 38.280s | 2025-11-26 03:35:01.485 | 3639 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 312 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/312 {"round":312,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/312/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 3m 14.581s | 2025-11-26 03:35:37.786 | 4512 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 3 to 4>> | NetworkUtils: | Connection broken: 3 -> 4 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-11-26T03:35:37.784829447Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node1 | 3m 14.582s | 2025-11-26 03:35:37.787 | 4458 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 1 to 4>> | NetworkUtils: | Connection broken: 1 -> 4 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-11-26T03:35:37.785855192Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node0 | 3m 14.583s | 2025-11-26 03:35:37.788 | 4502 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 0 to 4>> | NetworkUtils: | Connection broken: 0 -> 4 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-11-26T03:35:37.785619268Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node2 | 3m 14.585s | 2025-11-26 03:35:37.790 | 4548 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 2 to 4>> | NetworkUtils: | Connection broken: 2 -> 4 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-11-26T03:35:37.785901672Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node1 | 3m 38.039s | 2025-11-26 03:36:01.244 | 5067 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 445 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 3m 38.047s | 2025-11-26 03:36:01.252 | 5155 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 445 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 3m 38.075s | 2025-11-26 03:36:01.280 | 5127 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 445 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 3m 38.089s | 2025-11-26 03:36:01.294 | 5099 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 445 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 3m 38.223s | 2025-11-26 03:36:01.428 | 5102 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 445 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/445 | |
| node0 | 3m 38.223s | 2025-11-26 03:36:01.428 | 5103 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 445 | |
| node2 | 3m 38.231s | 2025-11-26 03:36:01.436 | 5158 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 445 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/445 | |
| node2 | 3m 38.232s | 2025-11-26 03:36:01.437 | 5159 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 445 | |
| node1 | 3m 38.260s | 2025-11-26 03:36:01.465 | 5070 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 445 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/445 | |
| node1 | 3m 38.260s | 2025-11-26 03:36:01.465 | 5071 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 445 | |
| node3 | 3m 38.293s | 2025-11-26 03:36:01.498 | 5130 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 445 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/445 | |
| node3 | 3m 38.293s | 2025-11-26 03:36:01.498 | 5131 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 445 | |
| node0 | 3m 38.306s | 2025-11-26 03:36:01.511 | 5134 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 445 | |
| node0 | 3m 38.308s | 2025-11-26 03:36:01.513 | 5135 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 445 Timestamp: 2025-11-26T03:36:00.345409035Z Next consensus number: 15344 Legacy running event hash: c37c75e5e9cce16a3c8a51eb305af5774e97e4a8ce9561897fa2ee489b91e4894c7ec68ba149f080ad52dfaba660eac8 Legacy running event mnemonic: mountain-royal-shoot-spot Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -218991225 Root hash: ca69bcdfcff7b5d5fb32ce10ba90ba731c5f44b46b6cf1acc4706ff3c0861be7505d5a79a893b0ad6c45e414b47b16d2 (root) VirtualMap state / real-goddess-abuse-earth | |||||||||
| node0 | 3m 38.314s | 2025-11-26 03:36:01.519 | 5136 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+32+39.512332205Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 3m 38.315s | 2025-11-26 03:36:01.520 | 5137 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 418 File: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+32+39.512332205Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 3m 38.315s | 2025-11-26 03:36:01.520 | 5138 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 3m 38.325s | 2025-11-26 03:36:01.530 | 5139 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 3m 38.326s | 2025-11-26 03:36:01.531 | 5140 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 445 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/445 {"round":445,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/445/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 3m 38.327s | 2025-11-26 03:36:01.532 | 5190 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 445 | |
| node2 | 3m 38.329s | 2025-11-26 03:36:01.534 | 5191 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 445 Timestamp: 2025-11-26T03:36:00.345409035Z Next consensus number: 15344 Legacy running event hash: c37c75e5e9cce16a3c8a51eb305af5774e97e4a8ce9561897fa2ee489b91e4894c7ec68ba149f080ad52dfaba660eac8 Legacy running event mnemonic: mountain-royal-shoot-spot Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -218991225 Root hash: ca69bcdfcff7b5d5fb32ce10ba90ba731c5f44b46b6cf1acc4706ff3c0861be7505d5a79a893b0ad6c45e414b47b16d2 (root) VirtualMap state / real-goddess-abuse-earth | |||||||||
| node2 | 3m 38.338s | 2025-11-26 03:36:01.543 | 5192 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+32+39.463379624Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 3m 38.338s | 2025-11-26 03:36:01.543 | 5193 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 418 File: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+32+39.463379624Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 3m 38.338s | 2025-11-26 03:36:01.543 | 5194 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 3m 38.342s | 2025-11-26 03:36:01.547 | 5106 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 445 | |
| node1 | 3m 38.344s | 2025-11-26 03:36:01.549 | 5107 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 445 Timestamp: 2025-11-26T03:36:00.345409035Z Next consensus number: 15344 Legacy running event hash: c37c75e5e9cce16a3c8a51eb305af5774e97e4a8ce9561897fa2ee489b91e4894c7ec68ba149f080ad52dfaba660eac8 Legacy running event mnemonic: mountain-royal-shoot-spot Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -218991225 Root hash: ca69bcdfcff7b5d5fb32ce10ba90ba731c5f44b46b6cf1acc4706ff3c0861be7505d5a79a893b0ad6c45e414b47b16d2 (root) VirtualMap state / real-goddess-abuse-earth | |||||||||
| node2 | 3m 38.349s | 2025-11-26 03:36:01.554 | 5195 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 3m 38.350s | 2025-11-26 03:36:01.555 | 5196 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 445 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/445 {"round":445,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/445/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 3m 38.351s | 2025-11-26 03:36:01.556 | 5108 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+32+39.459245356Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 3m 38.351s | 2025-11-26 03:36:01.556 | 5109 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 418 File: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+32+39.459245356Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 3m 38.351s | 2025-11-26 03:36:01.556 | 5110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 3m 38.362s | 2025-11-26 03:36:01.567 | 5111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 3m 38.362s | 2025-11-26 03:36:01.567 | 5112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 445 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/445 {"round":445,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/445/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 3m 38.371s | 2025-11-26 03:36:01.576 | 5170 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 445 | |
| node3 | 3m 38.372s | 2025-11-26 03:36:01.577 | 5171 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 445 Timestamp: 2025-11-26T03:36:00.345409035Z Next consensus number: 15344 Legacy running event hash: c37c75e5e9cce16a3c8a51eb305af5774e97e4a8ce9561897fa2ee489b91e4894c7ec68ba149f080ad52dfaba660eac8 Legacy running event mnemonic: mountain-royal-shoot-spot Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -218991225 Root hash: ca69bcdfcff7b5d5fb32ce10ba90ba731c5f44b46b6cf1acc4706ff3c0861be7505d5a79a893b0ad6c45e414b47b16d2 (root) VirtualMap state / real-goddess-abuse-earth | |||||||||
| node3 | 3m 38.379s | 2025-11-26 03:36:01.584 | 5172 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+32+39.289745347Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 3m 38.380s | 2025-11-26 03:36:01.585 | 5173 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 418 File: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+32+39.289745347Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 3m 38.380s | 2025-11-26 03:36:01.585 | 5174 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 3m 38.390s | 2025-11-26 03:36:01.595 | 5175 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 3m 38.391s | 2025-11-26 03:36:01.596 | 5176 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 445 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/445 {"round":445,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/445/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 4m 37.886s | 2025-11-26 03:37:01.091 | 6701 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 583 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 4m 37.900s | 2025-11-26 03:37:01.105 | 6629 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 583 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 4m 37.935s | 2025-11-26 03:37:01.140 | 6715 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 583 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 4m 37.959s | 2025-11-26 03:37:01.164 | 6661 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 583 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 4m 38.080s | 2025-11-26 03:37:01.285 | 6664 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 583 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/583 | |
| node0 | 4m 38.081s | 2025-11-26 03:37:01.286 | 6665 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 583 | |
| node3 | 4m 38.140s | 2025-11-26 03:37:01.345 | 6704 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 583 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/583 | |
| node3 | 4m 38.140s | 2025-11-26 03:37:01.345 | 6705 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 583 | |
| node1 | 4m 38.145s | 2025-11-26 03:37:01.350 | 6632 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 583 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/583 | |
| node1 | 4m 38.145s | 2025-11-26 03:37:01.350 | 6633 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 583 | |
| node2 | 4m 38.149s | 2025-11-26 03:37:01.354 | 6718 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 583 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/583 | |
| node2 | 4m 38.150s | 2025-11-26 03:37:01.355 | 6719 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 583 | |
| node0 | 4m 38.160s | 2025-11-26 03:37:01.365 | 6696 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 583 | |
| node0 | 4m 38.162s | 2025-11-26 03:37:01.367 | 6697 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 583 Timestamp: 2025-11-26T03:37:00.224061Z Next consensus number: 18646 Legacy running event hash: e1ae96c0ec746c13883222e866a14b25c0655e036357cad6f88b211625721f9c1ad14ac24d4f71048272d8b057dfeb22 Legacy running event mnemonic: ski-chimney-immense-list Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -210999391 Root hash: e49318e10dcb17b43895ea45ea65f3b4173bfbb5524d4460bbf79e1eeaa700b9eaa221b2f74b0c664a30b3b0bd9ba510 (root) VirtualMap state / horror-bring-ride-walk | |||||||||
| node0 | 4m 38.170s | 2025-11-26 03:37:01.375 | 6706 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+36+25.652868180Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+32+39.512332205Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 4m 38.170s | 2025-11-26 03:37:01.375 | 6707 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 556 File: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+36+25.652868180Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 4m 38.170s | 2025-11-26 03:37:01.375 | 6708 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 4m 38.172s | 2025-11-26 03:37:01.377 | 6709 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 4m 38.172s | 2025-11-26 03:37:01.377 | 6710 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 583 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/583 {"round":583,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/583/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 4m 38.174s | 2025-11-26 03:37:01.379 | 6711 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1 | |
| node3 | 4m 38.214s | 2025-11-26 03:37:01.419 | 6744 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 583 | |
| node3 | 4m 38.216s | 2025-11-26 03:37:01.421 | 6745 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 583 Timestamp: 2025-11-26T03:37:00.224061Z Next consensus number: 18646 Legacy running event hash: e1ae96c0ec746c13883222e866a14b25c0655e036357cad6f88b211625721f9c1ad14ac24d4f71048272d8b057dfeb22 Legacy running event mnemonic: ski-chimney-immense-list Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -210999391 Root hash: e49318e10dcb17b43895ea45ea65f3b4173bfbb5524d4460bbf79e1eeaa700b9eaa221b2f74b0c664a30b3b0bd9ba510 (root) VirtualMap state / horror-bring-ride-walk | |||||||||
| node3 | 4m 38.221s | 2025-11-26 03:37:01.426 | 6746 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+32+39.289745347Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+36+25.608612677Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 4m 38.222s | 2025-11-26 03:37:01.427 | 6747 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 556 File: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+36+25.608612677Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 4m 38.222s | 2025-11-26 03:37:01.427 | 6748 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 4m 38.223s | 2025-11-26 03:37:01.428 | 6749 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 4m 38.224s | 2025-11-26 03:37:01.429 | 6664 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 583 | |
| node3 | 4m 38.224s | 2025-11-26 03:37:01.429 | 6750 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 583 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/583 {"round":583,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/583/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 4m 38.225s | 2025-11-26 03:37:01.430 | 6751 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1 | |
| node1 | 4m 38.226s | 2025-11-26 03:37:01.431 | 6665 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 583 Timestamp: 2025-11-26T03:37:00.224061Z Next consensus number: 18646 Legacy running event hash: e1ae96c0ec746c13883222e866a14b25c0655e036357cad6f88b211625721f9c1ad14ac24d4f71048272d8b057dfeb22 Legacy running event mnemonic: ski-chimney-immense-list Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -210999391 Root hash: e49318e10dcb17b43895ea45ea65f3b4173bfbb5524d4460bbf79e1eeaa700b9eaa221b2f74b0c664a30b3b0bd9ba510 (root) VirtualMap state / horror-bring-ride-walk | |||||||||
| node1 | 4m 38.233s | 2025-11-26 03:37:01.438 | 6666 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+36+25.603512864Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+32+39.459245356Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 4m 38.233s | 2025-11-26 03:37:01.438 | 6667 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 556 File: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+36+25.603512864Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 4m 38.234s | 2025-11-26 03:37:01.439 | 6668 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 4m 38.235s | 2025-11-26 03:37:01.440 | 6669 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 4m 38.236s | 2025-11-26 03:37:01.441 | 6670 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 583 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/583 {"round":583,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/583/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 4m 38.237s | 2025-11-26 03:37:01.442 | 6671 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1 | |
| node2 | 4m 38.237s | 2025-11-26 03:37:01.442 | 6758 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 583 | |
| node2 | 4m 38.239s | 2025-11-26 03:37:01.444 | 6759 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 583 Timestamp: 2025-11-26T03:37:00.224061Z Next consensus number: 18646 Legacy running event hash: e1ae96c0ec746c13883222e866a14b25c0655e036357cad6f88b211625721f9c1ad14ac24d4f71048272d8b057dfeb22 Legacy running event mnemonic: ski-chimney-immense-list Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -210999391 Root hash: e49318e10dcb17b43895ea45ea65f3b4173bfbb5524d4460bbf79e1eeaa700b9eaa221b2f74b0c664a30b3b0bd9ba510 (root) VirtualMap state / horror-bring-ride-walk | |||||||||
| node2 | 4m 38.246s | 2025-11-26 03:37:01.451 | 6760 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+36+25.599323414Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+32+39.463379624Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 4m 38.247s | 2025-11-26 03:37:01.452 | 6761 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 556 File: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+36+25.599323414Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node2 | 4m 38.247s | 2025-11-26 03:37:01.452 | 6762 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 4m 38.248s | 2025-11-26 03:37:01.453 | 6763 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 4m 38.249s | 2025-11-26 03:37:01.454 | 6764 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 583 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/583 {"round":583,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/583/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 4m 38.250s | 2025-11-26 03:37:01.455 | 6765 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1 | |
| node2 | 5m 37.797s | 2025-11-26 03:38:01.002 | 8294 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 721 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 5m 37.840s | 2025-11-26 03:38:01.045 | 8314 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 721 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 5m 37.865s | 2025-11-26 03:38:01.070 | 8312 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 721 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 5m 37.900s | 2025-11-26 03:38:01.105 | 8188 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 721 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 5m 38.050s | 2025-11-26 03:38:01.255 | 8191 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 721 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/721 | |
| node1 | 5m 38.051s | 2025-11-26 03:38:01.256 | 8192 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 721 | |
| node3 | 5m 38.065s | 2025-11-26 03:38:01.270 | 8315 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 721 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/721 | |
| node3 | 5m 38.066s | 2025-11-26 03:38:01.271 | 8316 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 721 | |
| node2 | 5m 38.122s | 2025-11-26 03:38:01.327 | 8297 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 721 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/721 | |
| node2 | 5m 38.123s | 2025-11-26 03:38:01.328 | 8298 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 721 | |
| node1 | 5m 38.131s | 2025-11-26 03:38:01.336 | 8231 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 721 | |
| node1 | 5m 38.133s | 2025-11-26 03:38:01.338 | 8232 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 721 Timestamp: 2025-11-26T03:38:00.160400Z Next consensus number: 21872 Legacy running event hash: b21a7849dfd8687e832a760470de88ed030c2527e43c261107f673257ed7e697124810729b03fcf4f6845dbc4db78204 Legacy running event mnemonic: rubber-ankle-agree-average Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 154053628 Root hash: c140b27502b81025097b9793050f851c4d245aad5df56fe29edc80511daaa2f1e2d9cb37df2f760d72d749b4b4f43d5f (root) VirtualMap state / submit-ramp-finish-again | |||||||||
| node0 | 5m 38.135s | 2025-11-26 03:38:01.340 | 8317 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 721 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/721 | |
| node0 | 5m 38.136s | 2025-11-26 03:38:01.341 | 8318 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 721 | |
| node1 | 5m 38.140s | 2025-11-26 03:38:01.345 | 8233 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+36+25.603512864Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+32+39.459245356Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 5m 38.140s | 2025-11-26 03:38:01.345 | 8234 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 694 File: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+36+25.603512864Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 5m 38.140s | 2025-11-26 03:38:01.345 | 8235 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 5m 38.144s | 2025-11-26 03:38:01.349 | 8236 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 5m 38.145s | 2025-11-26 03:38:01.350 | 8237 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 721 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/721 {"round":721,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/721/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 5m 38.145s | 2025-11-26 03:38:01.350 | 8347 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 721 | |
| node1 | 5m 38.146s | 2025-11-26 03:38:01.351 | 8238 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/46 | |
| node3 | 5m 38.146s | 2025-11-26 03:38:01.351 | 8348 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 721 Timestamp: 2025-11-26T03:38:00.160400Z Next consensus number: 21872 Legacy running event hash: b21a7849dfd8687e832a760470de88ed030c2527e43c261107f673257ed7e697124810729b03fcf4f6845dbc4db78204 Legacy running event mnemonic: rubber-ankle-agree-average Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 154053628 Root hash: c140b27502b81025097b9793050f851c4d245aad5df56fe29edc80511daaa2f1e2d9cb37df2f760d72d749b4b4f43d5f (root) VirtualMap state / submit-ramp-finish-again | |||||||||
| node3 | 5m 38.153s | 2025-11-26 03:38:01.358 | 8357 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+32+39.289745347Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+36+25.608612677Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 5m 38.153s | 2025-11-26 03:38:01.358 | 8358 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 694 File: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+36+25.608612677Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 5m 38.153s | 2025-11-26 03:38:01.358 | 8359 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 5m 38.157s | 2025-11-26 03:38:01.362 | 8360 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 5m 38.158s | 2025-11-26 03:38:01.363 | 8361 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 721 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/721 {"round":721,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/721/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 5m 38.159s | 2025-11-26 03:38:01.364 | 8362 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/46 | |
| node2 | 5m 38.210s | 2025-11-26 03:38:01.415 | 8337 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 721 | |
| node2 | 5m 38.213s | 2025-11-26 03:38:01.418 | 8338 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 721 Timestamp: 2025-11-26T03:38:00.160400Z Next consensus number: 21872 Legacy running event hash: b21a7849dfd8687e832a760470de88ed030c2527e43c261107f673257ed7e697124810729b03fcf4f6845dbc4db78204 Legacy running event mnemonic: rubber-ankle-agree-average Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 154053628 Root hash: c140b27502b81025097b9793050f851c4d245aad5df56fe29edc80511daaa2f1e2d9cb37df2f760d72d749b4b4f43d5f (root) VirtualMap state / submit-ramp-finish-again | |||||||||
| node0 | 5m 38.215s | 2025-11-26 03:38:01.420 | 8357 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 721 | |
| node0 | 5m 38.217s | 2025-11-26 03:38:01.422 | 8358 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 721 Timestamp: 2025-11-26T03:38:00.160400Z Next consensus number: 21872 Legacy running event hash: b21a7849dfd8687e832a760470de88ed030c2527e43c261107f673257ed7e697124810729b03fcf4f6845dbc4db78204 Legacy running event mnemonic: rubber-ankle-agree-average Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 154053628 Root hash: c140b27502b81025097b9793050f851c4d245aad5df56fe29edc80511daaa2f1e2d9cb37df2f760d72d749b4b4f43d5f (root) VirtualMap state / submit-ramp-finish-again | |||||||||
| node2 | 5m 38.219s | 2025-11-26 03:38:01.424 | 8339 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+36+25.599323414Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+32+39.463379624Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 5m 38.219s | 2025-11-26 03:38:01.424 | 8340 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 694 File: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+36+25.599323414Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node2 | 5m 38.219s | 2025-11-26 03:38:01.424 | 8341 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 5m 38.223s | 2025-11-26 03:38:01.428 | 8342 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 5m 38.224s | 2025-11-26 03:38:01.429 | 8343 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 721 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/721 {"round":721,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/721/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 5m 38.225s | 2025-11-26 03:38:01.430 | 8359 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+36+25.652868180Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+32+39.512332205Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 5m 38.225s | 2025-11-26 03:38:01.430 | 8360 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 694 File: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+36+25.652868180Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 5m 38.225s | 2025-11-26 03:38:01.430 | 8361 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 5m 38.225s | 2025-11-26 03:38:01.430 | 8344 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/46 | |
| node0 | 5m 38.229s | 2025-11-26 03:38:01.434 | 8362 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 5m 38.229s | 2025-11-26 03:38:01.434 | 8363 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 721 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/721 {"round":721,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/721/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 5m 38.231s | 2025-11-26 03:38:01.436 | 8364 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/46 | |
| node4 | 5m 53.471s | 2025-11-26 03:38:16.676 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node4 | 5m 53.561s | 2025-11-26 03:38:16.766 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node4 | 5m 53.576s | 2025-11-26 03:38:16.781 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 5m 53.691s | 2025-11-26 03:38:16.896 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [4] are set to run locally | |
| node4 | 5m 53.721s | 2025-11-26 03:38:16.926 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node4 | 5m 55.018s | 2025-11-26 03:38:18.223 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1296ms | |
| node4 | 5m 55.026s | 2025-11-26 03:38:18.231 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node4 | 5m 55.029s | 2025-11-26 03:38:18.234 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 5m 55.064s | 2025-11-26 03:38:18.269 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node4 | 5m 55.133s | 2025-11-26 03:38:18.338 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node4 | 5m 55.134s | 2025-11-26 03:38:18.339 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node4 | 5m 57.182s | 2025-11-26 03:38:20.387 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node4 | 5m 57.268s | 2025-11-26 03:38:20.473 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5m 57.275s | 2025-11-26 03:38:20.480 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | The following saved states were found on disk: | |
| - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/312/SignedState.swh - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/176/SignedState.swh - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/46/SignedState.swh - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1/SignedState.swh | |||||||||
| node4 | 5m 57.275s | 2025-11-26 03:38:20.480 | 17 | INFO | STARTUP | <main> | StartupStateUtils: | Loading latest state from disk. | |
| node4 | 5m 57.276s | 2025-11-26 03:38:20.481 | 18 | INFO | STARTUP | <main> | StartupStateUtils: | Loading signed state from disk: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/312/SignedState.swh | |
| node4 | 5m 57.284s | 2025-11-26 03:38:20.489 | 19 | INFO | STATE_TO_DISK | <main> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp | |
| node4 | 5m 57.397s | 2025-11-26 03:38:20.602 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 5m 58.126s | 2025-11-26 03:38:21.331 | 31 | INFO | STARTUP | <main> | StartupStateUtils: | Loaded state's hash is the same as when it was saved. | |
| node4 | 5m 58.131s | 2025-11-26 03:38:21.336 | 32 | INFO | STARTUP | <main> | StartupStateUtils: | Platform has loaded a saved state {"round":312,"consensusTimestamp":"2025-11-26T03:35:00.076753450Z"} [com.swirlds.logging.legacy.payload.SavedStateLoadedPayload] | |
| node4 | 5m 58.135s | 2025-11-26 03:38:21.340 | 35 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5m 58.135s | 2025-11-26 03:38:21.340 | 37 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node4 | 5m 58.140s | 2025-11-26 03:38:21.345 | 39 | INFO | STARTUP | <main> | AddressBookInitializer: | Using the loaded state's address book and weight values. | |
| node4 | 5m 58.147s | 2025-11-26 03:38:21.352 | 40 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5m 58.149s | 2025-11-26 03:38:21.354 | 41 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 5m 59.252s | 2025-11-26 03:38:22.457 | 42 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26247739] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=226650, randomLong=2249593055856645395, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=7550, randomLong=-3767938745152715393, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1229740, data=35, exception=null] OS Health Check Report - Complete (took 1021 ms) | |||||||||
| node4 | 5m 59.282s | 2025-11-26 03:38:22.487 | 43 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node4 | 5m 59.405s | 2025-11-26 03:38:22.610 | 44 | INFO | STARTUP | <main> | PcesUtilities: | Span compaction completed for data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+32+39.660748371Z_seq0_minr1_maxr501_orgn0.pces, new upper bound is 391 | |
| node4 | 5m 59.408s | 2025-11-26 03:38:22.613 | 45 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node4 | 5m 59.409s | 2025-11-26 03:38:22.614 | 46 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node4 | 5m 59.489s | 2025-11-26 03:38:22.694 | 47 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "IhANzw==", "port": 30124 }, { "ipAddressV4": "CoAAOQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "I8q0KA==", "port": 30125 }, { "ipAddressV4": "CoAAPQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "IilpIA==", "port": 30126 }, { "ipAddressV4": "CoAAOg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "aMbs8g==", "port": 30127 }, { "ipAddressV4": "CoAAOw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IkVH8g==", "port": 30128 }, { "ipAddressV4": "CoAAPA==", "port": 30128 }] }] } | |||||||||
| node4 | 5m 59.512s | 2025-11-26 03:38:22.717 | 48 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | State initialized with state long -1973064507883607018. | |
| node4 | 5m 59.513s | 2025-11-26 03:38:22.718 | 49 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | State initialized with 312 rounds handled. | |
| node4 | 5m 59.513s | 2025-11-26 03:38:22.718 | 50 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv | |
| node4 | 5m 59.514s | 2025-11-26 03:38:22.719 | 51 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Log file found. Parsing previous history | |
| node4 | 5m 59.552s | 2025-11-26 03:38:22.757 | 52 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 312 Timestamp: 2025-11-26T03:35:00.076753450Z Next consensus number: 11072 Legacy running event hash: 379a4b01fc74660a79fc70bec58d01a76f4e0adfeb1873ee189d6a37fe456dccff34b399eaec0a19fecda5936b09866e Legacy running event mnemonic: viable-gasp-immense-scout Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 879399162 Root hash: 27780f8628ea43cf419ef3810ccdfb838f21fc8b6bfe485441b251d15c9ad4d0f1efdff8e47b67137b01e631c7db7368 (root) VirtualMap state / upper-valid-bubble-ketchup | |||||||||
| node4 | 5m 59.557s | 2025-11-26 03:38:22.762 | 54 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Starting the ReconnectController | |
| node4 | 5m 59.758s | 2025-11-26 03:38:22.963 | 55 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 379a4b01fc74660a79fc70bec58d01a76f4e0adfeb1873ee189d6a37fe456dccff34b399eaec0a19fecda5936b09866e | |
| node4 | 5m 59.766s | 2025-11-26 03:38:22.971 | 56 | INFO | STARTUP | <platformForkJoinThread-4> | Shadowgraph: | Shadowgraph starting from expiration threshold 285 | |
| node4 | 5m 59.771s | 2025-11-26 03:38:22.976 | 58 | INFO | STARTUP | <<start-node-4>> | ConsistencyTestingToolMain: | init called in Main for node 4. | |
| node4 | 5m 59.771s | 2025-11-26 03:38:22.976 | 59 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | Starting platform 4 | |
| node4 | 5m 59.772s | 2025-11-26 03:38:22.977 | 60 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node4 | 5m 59.775s | 2025-11-26 03:38:22.980 | 61 | INFO | STARTUP | <<start-node-4>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node4 | 5m 59.776s | 2025-11-26 03:38:22.981 | 62 | INFO | STARTUP | <<start-node-4>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node4 | 5m 59.776s | 2025-11-26 03:38:22.981 | 63 | INFO | STARTUP | <<start-node-4>> | InputWireChecks: | All input wires have been bound. | |
| node4 | 5m 59.778s | 2025-11-26 03:38:22.983 | 64 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | replaying preconsensus event stream starting at 285 | |
| node4 | 5m 59.783s | 2025-11-26 03:38:22.988 | 65 | INFO | PLATFORM_STATUS | <platformForkJoinThread-7> | StatusStateMachine: | Platform spent 165.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node4 | 6.001m | 2025-11-26 03:38:23.238 | 66 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:4 H:ff10e7094edd BR:309), num remaining: 4 | |
| node4 | 6.001m | 2025-11-26 03:38:23.239 | 67 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:3 H:ecf1310e463e BR:309), num remaining: 3 | |
| node4 | 6.001m | 2025-11-26 03:38:23.240 | 68 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:1 H:a2d490398438 BR:309), num remaining: 2 | |
| node4 | 6.001m | 2025-11-26 03:38:23.240 | 69 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:0 H:545ff7fa372b BR:309), num remaining: 1 | |
| node4 | 6.001m | 2025-11-26 03:38:23.240 | 70 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:2 H:3f27067573b6 BR:309), num remaining: 0 | |
| node4 | 6.011m | 2025-11-26 03:38:23.842 | 719 | INFO | STARTUP | <<start-node-4>> | PcesReplayer: | Replayed 3,912 preconsensus events with max birth round 391. These events contained 5,418 transactions. 78 rounds reached consensus spanning 36.0 seconds of consensus time. The latest round to reach consensus is round 390. Replay took 858.0 milliseconds. | |
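The replay summary above already contains everything needed for rough throughput figures; a back-of-envelope sketch using only the numbers printed in that line:

```java
/** Back-of-envelope throughput derived from the PcesReplayer summary logged above. */
public class ReplayThroughput {
    public static void main(String[] args) {
        final long events = 3_912;            // preconsensus events replayed
        final long transactions = 5_418;      // transactions contained in those events
        final long rounds = 78;               // rounds that reached consensus during replay
        final double consensusSeconds = 36.0; // consensus time spanned by the replay
        final double replaySeconds = 0.858;   // wall-clock replay duration (858 ms)

        System.out.printf("%.0f events/s%n", events / replaySeconds);             // ~4,560
        System.out.printf("%.0f transactions/s%n", transactions / replaySeconds); // ~6,315
        System.out.printf("%.1f rounds/s%n", rounds / replaySeconds);             // ~90.9
        System.out.printf("~%.0fx real time%n", consensusSeconds / replaySeconds); // ~42x
    }
}
```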
| node4 | 6.011m | 2025-11-26 03:38:23.845 | 720 | INFO | STARTUP | <<app: appMain 4>> | ConsistencyTestingToolMain: | run called in Main. | |
| node4 | 6.011m | 2025-11-26 03:38:23.849 | 721 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | StatusStateMachine: | Platform spent 860.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node4 | 6m 1.562s | 2025-11-26 03:38:24.767 | 780 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Preparing for reconnect, stopping gossip | |
| node4 | 6m 1.562s | 2025-11-26 03:38:24.767 | 783 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | RpcPeerHandler: | SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=390,ancientThreshold=363,expiredThreshold=289] remote ev=EventWindow[latestConsensusRound=775,ancientThreshold=748,expiredThreshold=674] | |
| node4 | 6m 1.562s | 2025-11-26 03:38:24.767 | 784 | INFO | RECONNECT | <<platform-core: SyncProtocolWith0 4 to 0>> | RpcPeerHandler: | SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=390,ancientThreshold=363,expiredThreshold=289] remote ev=EventWindow[latestConsensusRound=775,ancientThreshold=748,expiredThreshold=674] | |
| node4 | 6m 1.562s | 2025-11-26 03:38:24.767 | 781 | INFO | RECONNECT | <<platform-core: SyncProtocolWith3 4 to 3>> | RpcPeerHandler: | SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=390,ancientThreshold=363,expiredThreshold=289] remote ev=EventWindow[latestConsensusRound=775,ancientThreshold=748,expiredThreshold=674] | |
| node4 | 6m 1.562s | 2025-11-26 03:38:24.767 | 782 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | RpcPeerHandler: | SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=390,ancientThreshold=363,expiredThreshold=289] remote ev=EventWindow[latestConsensusRound=775,ancientThreshold=748,expiredThreshold=674] | |
| node4 | 6m 1.563s | 2025-11-26 03:38:24.768 | 785 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Preparing for reconnect, start clearing queues | |
| node4 | 6m 1.563s | 2025-11-26 03:38:24.768 | 786 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 918.0 ms in OBSERVING. Now in BEHIND | |
| node2 | 6m 1.632s | 2025-11-26 03:38:24.837 | 8993 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | RpcPeerHandler: | OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=775,ancientThreshold=748,expiredThreshold=674] remote ev=EventWindow[latestConsensusRound=390,ancientThreshold=363,expiredThreshold=289] | |
| node3 | 6m 1.632s | 2025-11-26 03:38:24.837 | 8957 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 3 to 4>> | RpcPeerHandler: | OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=775,ancientThreshold=748,expiredThreshold=674] remote ev=EventWindow[latestConsensusRound=390,ancientThreshold=363,expiredThreshold=289] | |
| node0 | 6m 1.633s | 2025-11-26 03:38:24.838 | 9017 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 0 to 4>> | RpcPeerHandler: | OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=775,ancientThreshold=748,expiredThreshold=674] remote ev=EventWindow[latestConsensusRound=390,ancientThreshold=363,expiredThreshold=289] | |
| node1 | 6m 1.633s | 2025-11-26 03:38:24.838 | 8825 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | RpcPeerHandler: | OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=775,ancientThreshold=748,expiredThreshold=674] remote ev=EventWindow[latestConsensusRound=390,ancientThreshold=363,expiredThreshold=289] | |
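At this point node4's event window (latest consensus round 390) sits entirely below even the expired threshold its peers advertise (674), which is presumably why every peer connection reports the fallen-behind condition in both directions. A minimal illustrative comparison of the two windows follows; the record and rule here are hypothetical stand-ins inferred from the logged values, not the platform's actual types or criterion:

```java
/**
 * Illustrative check only: compares the event windows printed in the log lines above.
 * The real platform's fallen-behind rule may differ; this just shows why 390 vs. 674 is hopeless.
 */
public class FallenBehindSketch {
    // Hypothetical record mirroring the fields printed in the log, not the platform's class.
    record EventWindow(long latestConsensusRound, long ancientThreshold, long expiredThreshold) {}

    static boolean looksFallenBehind(EventWindow self, EventWindow peer) {
        // If everything the peer still retains is newer than anything we have reached consensus on,
        // the peer cannot supply the missing events through normal gossip.
        return self.latestConsensusRound() < peer.expiredThreshold();
    }

    public static void main(String[] args) {
        EventWindow local = new EventWindow(390, 363, 289);
        EventWindow remote = new EventWindow(775, 748, 674);
        System.out.println(looksFallenBehind(local, remote)); // true -> reconnect instead of gossip
    }
}
```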
| node4 | 6m 1.715s | 2025-11-26 03:38:24.920 | 787 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Queues have been cleared | |
| node4 | 6m 1.716s | 2025-11-26 03:38:24.921 | 788 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Waiting for a state to be obtained from a peer | |
| node1 | 6m 1.811s | 2025-11-26 03:38:25.016 | 8837 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | ReconnectStateTeacher: | Starting reconnect in the role of the sender {"receiving":false,"nodeId":1,"otherNodeId":4,"round":775} [com.swirlds.logging.legacy.payload.ReconnectStartPayload] | |
| node1 | 6m 1.811s | 2025-11-26 03:38:25.016 | 8838 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | ReconnectStateTeacher: | The following state will be sent to the learner: | |
| Round: 775 Timestamp: 2025-11-26T03:38:23.624031Z Next consensus number: 23170 Legacy running event hash: 595b22e766b637857041374aaf0fd2f6c1c32123d8a497c86bf83ad6128f997c4650973c22dc3d8e23f7a1e539e1edde Legacy running event mnemonic: clock-friend-strategy-vague Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1733406594 Root hash: 6f235a2a2409e0fed23df9e0adde30ce318a8d899e27d2781eb9fa3dd507dc2de92c6d333dca2fc9dabb173cd146fc13 (root) VirtualMap state / tongue-earn-license-theory | |||||||||
| node1 | 6m 1.812s | 2025-11-26 03:38:25.017 | 8839 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | ReconnectStateTeacher: | Sending signatures from nodes 0, 1, 3 (signing weight = 37500000000/50000000000) for state hash 6f235a2a2409e0fed23df9e0adde30ce318a8d899e27d2781eb9fa3dd507dc2de92c6d333dca2fc9dabb173cd146fc13 | |
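As a quick weight check against the roster logged earlier: four of the five entries carry an explicit weight of 12,500,000,000, which matches the 50,000,000,000 total shown here, so the signatures from nodes 0, 1 and 3 account for 37,500,000,000, i.e. 75% of total weight. (The log states only the fraction, not the platform's exact signature threshold.)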
| node1 | 6m 1.812s | 2025-11-26 03:38:25.017 | 8840 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | ReconnectStateTeacher: | Starting synchronization in the role of the sender. | |
| node4 | 6m 1.879s | 2025-11-26 03:38:25.084 | 789 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | ReconnectStatePeerProtocol: | Starting reconnect in role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":1,"round":390} [com.swirlds.logging.legacy.payload.ReconnectStartPayload] | |
| node4 | 6m 1.880s | 2025-11-26 03:38:25.085 | 790 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | ReconnectStateLearner: | Receiving signed state signatures | |
| node4 | 6m 1.883s | 2025-11-26 03:38:25.088 | 791 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | ReconnectStateLearner: | Received signatures from nodes 0, 1, 3 | |
| node1 | 6m 1.934s | 2025-11-26 03:38:25.139 | 8856 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | TeachingSynchronizer: | sending tree rooted at com.swirlds.virtualmap.VirtualMap with route [] | |
| node1 | 6m 1.943s | 2025-11-26 03:38:25.148 | 8857 | INFO | RECONNECT | <<work group teaching-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@6464e8bc start run() | |
| node4 | 6m 2.092s | 2025-11-26 03:38:25.297 | 818 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | LearningSynchronizer: | learner calls receiveTree() | |
| node4 | 6m 2.093s | 2025-11-26 03:38:25.298 | 819 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | LearningSynchronizer: | synchronizing tree | |
| node4 | 6m 2.094s | 2025-11-26 03:38:25.299 | 820 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | LearningSynchronizer: | receiving tree rooted at com.swirlds.virtualmap.VirtualMap with route [] | |
| node4 | 6m 2.102s | 2025-11-26 03:38:25.307 | 821 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@1a0016f1 start run() | |
| node4 | 6m 2.161s | 2025-11-26 03:38:25.366 | 822 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | ReconnectNodeRemover: | setPathInformation(): firstLeafPath: 4 -> 4, lastLeafPath: 8 -> 8 | |
| node4 | 6m 2.161s | 2025-11-26 03:38:25.366 | 823 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | ReconnectNodeRemover: | setPathInformation(): done | |
| node4 | 6m 2.315s | 2025-11-26 03:38:25.520 | 824 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushTask: | learner thread finished the learning loop for the current subtree | |
| node4 | 6m 2.316s | 2025-11-26 03:38:25.521 | 825 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushVirtualTreeView: | call nodeRemover.allNodesReceived() | |
| node4 | 6m 2.316s | 2025-11-26 03:38:25.521 | 826 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | ReconnectNodeRemover: | allNodesReceived() | |
| node4 | 6m 2.316s | 2025-11-26 03:38:25.521 | 827 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | ReconnectNodeRemover: | allNodesReceived(): done | |
| node4 | 6m 2.317s | 2025-11-26 03:38:25.522 | 828 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushVirtualTreeView: | call root.endLearnerReconnect() | |
| node4 | 6m 2.317s | 2025-11-26 03:38:25.522 | 829 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | call reconnectIterator.close() | |
| node4 | 6m 2.317s | 2025-11-26 03:38:25.522 | 830 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | call setHashPrivate() | |
| node4 | 6m 2.340s | 2025-11-26 03:38:25.545 | 840 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | call postInit() | |
| node4 | 6m 2.341s | 2025-11-26 03:38:25.546 | 842 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | endLearnerReconnect() complete | |
| node4 | 6m 2.342s | 2025-11-26 03:38:25.547 | 843 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushVirtualTreeView: | close() complete | |
| node4 | 6m 2.342s | 2025-11-26 03:38:25.547 | 844 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushTask: | learner thread closed input, output, and view for the current subtree | |
| node4 | 6m 2.342s | 2025-11-26 03:38:25.547 | 845 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@1a0016f1 finish run() | |
| node4 | 6m 2.343s | 2025-11-26 03:38:25.548 | 846 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | LearningSynchronizer: | received tree rooted at com.swirlds.virtualmap.VirtualMap with route [] | |
| node4 | 6m 2.344s | 2025-11-26 03:38:25.549 | 847 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | LearningSynchronizer: | synchronization complete | |
| node4 | 6m 2.344s | 2025-11-26 03:38:25.549 | 848 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | LearningSynchronizer: | learner calls initialize() | |
| node4 | 6m 2.344s | 2025-11-26 03:38:25.549 | 849 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | LearningSynchronizer: | initializing tree | |
| node4 | 6m 2.345s | 2025-11-26 03:38:25.550 | 850 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | LearningSynchronizer: | initialization complete | |
| node4 | 6m 2.345s | 2025-11-26 03:38:25.550 | 851 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | LearningSynchronizer: | learner calls hash() | |
| node4 | 6m 2.345s | 2025-11-26 03:38:25.550 | 852 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | LearningSynchronizer: | hashing tree | |
| node4 | 6m 2.345s | 2025-11-26 03:38:25.550 | 853 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | LearningSynchronizer: | hashing complete | |
| node4 | 6m 2.345s | 2025-11-26 03:38:25.550 | 854 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | LearningSynchronizer: | learner calls logStatistics() | |
| node4 | 6m 2.348s | 2025-11-26 03:38:25.553 | 855 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | LearningSynchronizer: | Finished synchronization {"timeInSeconds":0.251,"hashTimeInSeconds":0.0,"initializationTimeInSeconds":0.0,"totalNodes":9,"leafNodes":5,"redundantLeafNodes":2,"internalNodes":4,"redundantInternalNodes":0} [com.swirlds.logging.legacy.payload.SynchronizationCompletePayload] | |
| node4 | 6m 2.349s | 2025-11-26 03:38:25.554 | 856 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | LearningSynchronizer: | ReconnectMapMetrics: transfersFromTeacher=9; transfersFromLearner=8; internalHashes=3; internalCleanHashes=0; internalData=0; internalCleanData=0; leafHashes=5; leafCleanHashes=2; leafData=5; leafCleanData=2 | |
| node4 | 6m 2.349s | 2025-11-26 03:38:25.554 | 857 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | LearningSynchronizer: | learner is done synchronizing | |
| node4 | 6m 2.350s | 2025-11-26 03:38:25.555 | 858 | INFO | STARTUP | <<platform-core: SyncProtocolWith1 4 to 1>> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 6m 2.354s | 2025-11-26 03:38:25.559 | 859 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | ReconnectStateLearner: | Reconnect data usage report {"dataMegabytes":0.0058650970458984375} [com.swirlds.logging.legacy.payload.ReconnectDataUsagePayload] | |
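The reported `dataMegabytes` value works out to exactly 6,150 bytes (0.0058650970458984375 × 2^20), roughly 6 KiB, which is consistent with the nine-node VirtualMap transfer counted in the synchronization metrics above. The binary-megabyte reading is an inference from the exact fraction, not something the payload states.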
| node1 | 6m 2.376s | 2025-11-26 03:38:25.581 | 8861 | INFO | RECONNECT | <<work group teaching-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@6464e8bc finish run() | |
| node1 | 6m 2.379s | 2025-11-26 03:38:25.584 | 8862 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | TeachingSynchronizer: | finished sending tree | |
| node1 | 6m 2.382s | 2025-11-26 03:38:25.587 | 8865 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | ReconnectStateTeacher: | Finished synchronization in the role of the sender. | |
| node1 | 6m 2.427s | 2025-11-26 03:38:25.632 | 8866 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | ReconnectStateTeacher: | Finished reconnect in the role of the sender. {"receiving":false,"nodeId":1,"otherNodeId":4,"round":775} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload] | |
| node4 | 6m 2.458s | 2025-11-26 03:38:25.663 | 860 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | ReconnectStatePeerProtocol: | Finished reconnect in the role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":1,"round":775} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload] | |
| node4 | 6m 2.459s | 2025-11-26 03:38:25.664 | 861 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | ReconnectStatePeerProtocol: | Information for state received during reconnect: | |
| Round: 775 Timestamp: 2025-11-26T03:38:23.624031Z Next consensus number: 23170 Legacy running event hash: 595b22e766b637857041374aaf0fd2f6c1c32123d8a497c86bf83ad6128f997c4650973c22dc3d8e23f7a1e539e1edde Legacy running event mnemonic: clock-friend-strategy-vague Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1733406594 Root hash: 6f235a2a2409e0fed23df9e0adde30ce318a8d899e27d2781eb9fa3dd507dc2de92c6d333dca2fc9dabb173cd146fc13 (root) VirtualMap state / tongue-earn-license-theory | |||||||||
| node4 | 6m 2.460s | 2025-11-26 03:38:25.665 | 862 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | A state was obtained from a peer | |
| node4 | 6m 2.462s | 2025-11-26 03:38:25.667 | 863 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | The state obtained from a peer was validated | |
| node4 | 6m 2.462s | 2025-11-26 03:38:25.667 | 865 | DEBUG | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | `loadState` : reloading state | |
| node4 | 6m 2.463s | 2025-11-26 03:38:25.668 | 866 | INFO | STARTUP | <<platform-core: reconnectController>> | ConsistencyTestingToolState: | State initialized with state long -4942408489485224543. | |
| node4 | 6m 2.463s | 2025-11-26 03:38:25.668 | 867 | INFO | STARTUP | <<platform-core: reconnectController>> | ConsistencyTestingToolState: | State initialized with 775 rounds handled. | |
| node4 | 6m 2.463s | 2025-11-26 03:38:25.668 | 868 | INFO | STARTUP | <<platform-core: reconnectController>> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv | |
| node4 | 6m 2.463s | 2025-11-26 03:38:25.668 | 869 | INFO | STARTUP | <<platform-core: reconnectController>> | TransactionHandlingHistory: | Log file found. Parsing previous history | |
| node4 | 6m 2.487s | 2025-11-26 03:38:25.692 | 874 | INFO | STATE_TO_DISK | <<platform-core: reconnectController>> | DefaultSavedStateController: | Signed state from round 775 created, will eventually be written to disk, for reason: RECONNECT | |
| node4 | 6m 2.487s | 2025-11-26 03:38:25.692 | 875 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | StatusStateMachine: | Platform spent 923.0 ms in BEHIND. Now in RECONNECT_COMPLETE | |
| node4 | 6m 2.488s | 2025-11-26 03:38:25.693 | 876 | INFO | STARTUP | <platformForkJoinThread-4> | Shadowgraph: | Shadowgraph starting from expiration threshold 748 | |
| node4 | 6m 2.490s | 2025-11-26 03:38:25.695 | 879 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 775 state to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/775 | |
| node4 | 6m 2.492s | 2025-11-26 03:38:25.697 | 880 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for round 775 | |
| node4 | 6m 2.492s | 2025-11-26 03:38:25.697 | 881 | INFO | EVENT_STREAM | <<platform-core: reconnectController>> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 595b22e766b637857041374aaf0fd2f6c1c32123d8a497c86bf83ad6128f997c4650973c22dc3d8e23f7a1e539e1edde | |
| node4 | 6m 2.493s | 2025-11-26 03:38:25.698 | 882 | INFO | STARTUP | <platformForkJoinThread-1> | PcesFileManager: | Due to recent operations on this node, the local preconsensus event stream will have a discontinuity. The last file with the old origin round is 2025-11-26T03+32+39.660748371Z_seq0_minr1_maxr391_orgn0.pces. All future files will have an origin round of 775. | |
| node4 | 6m 2.494s | 2025-11-26 03:38:25.699 | 883 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Reconnect almost done resuming gossip | |
| node4 | 6m 2.658s | 2025-11-26 03:38:25.863 | 918 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for round 775 | |
| node4 | 6m 2.661s | 2025-11-26 03:38:25.866 | 919 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 775 Timestamp: 2025-11-26T03:38:23.624031Z Next consensus number: 23170 Legacy running event hash: 595b22e766b637857041374aaf0fd2f6c1c32123d8a497c86bf83ad6128f997c4650973c22dc3d8e23f7a1e539e1edde Legacy running event mnemonic: clock-friend-strategy-vague Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1733406594 Root hash: 6f235a2a2409e0fed23df9e0adde30ce318a8d899e27d2781eb9fa3dd507dc2de92c6d333dca2fc9dabb173cd146fc13 (root) VirtualMap state / tongue-earn-license-theory | |||||||||
| node4 | 6m 2.694s | 2025-11-26 03:38:25.899 | 920 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+32+39.660748371Z_seq0_minr1_maxr391_orgn0.pces | |||||||||
| node4 | 6m 2.694s | 2025-11-26 03:38:25.899 | 921 | WARN | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | No preconsensus event files meeting specified criteria found to copy. Lower bound: 748 | |
| node4 | 6m 2.699s | 2025-11-26 03:38:25.904 | 922 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 775 to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/775 {"round":775,"freezeState":false,"reason":"RECONNECT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/775/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 6m 2.701s | 2025-11-26 03:38:25.906 | 923 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | StatusStateMachine: | Platform spent 213.0 ms in RECONNECT_COMPLETE. Now in CHECKING | |
| node4 | 6m 2.783s | 2025-11-26 03:38:25.988 | 924 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ] | |
| node4 | 6m 2.787s | 2025-11-26 03:38:25.992 | 925 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node4 | 6m 3.634s | 2025-11-26 03:38:26.839 | 926 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:2 H:9744c05c073e BR:773), num remaining: 3 | |
| node4 | 6m 3.634s | 2025-11-26 03:38:26.839 | 927 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:0 H:f481d06600ed BR:773), num remaining: 2 | |
| node4 | 6m 3.635s | 2025-11-26 03:38:26.840 | 928 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:1 H:a76cde0f252e BR:773), num remaining: 1 | |
| node4 | 6m 3.635s | 2025-11-26 03:38:26.840 | 929 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:3 H:9982b934445a BR:773), num remaining: 0 | |
| node4 | 6m 7.956s | 2025-11-26 03:38:31.161 | 1065 | INFO | PLATFORM_STATUS | <platformForkJoinThread-5> | StatusStateMachine: | Platform spent 5.3 s in CHECKING. Now in ACTIVE | |
| node3 | 6m 37.986s | 2025-11-26 03:39:01.191 | 9852 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 859 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 6m 38.000s | 2025-11-26 03:39:01.205 | 9902 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 859 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 6m 38.019s | 2025-11-26 03:39:01.224 | 9749 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 859 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 6m 38.064s | 2025-11-26 03:39:01.269 | 1816 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 859 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 6m 38.066s | 2025-11-26 03:39:01.271 | 9880 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 859 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 6m 38.168s | 2025-11-26 03:39:01.373 | 9905 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 859 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/859 | |
| node0 | 6m 38.169s | 2025-11-26 03:39:01.374 | 9906 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 859 | |
| node2 | 6m 38.250s | 2025-11-26 03:39:01.455 | 9883 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 859 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/859 | |
| node2 | 6m 38.251s | 2025-11-26 03:39:01.456 | 9884 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 859 | |
| node0 | 6m 38.253s | 2025-11-26 03:39:01.458 | 9937 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 859 | |
| node0 | 6m 38.255s | 2025-11-26 03:39:01.460 | 9938 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 859 Timestamp: 2025-11-26T03:39:00.274044503Z Next consensus number: 25950 Legacy running event hash: b926dda7f1718c522e978c44b99393bbb5219e1d3fdd98fa15462dabf823f4d3908afb374f468da89e7fc08b66773adf Legacy running event mnemonic: swap-enroll-spin-into Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1443965293 Root hash: 5e232c0f71fa5a8056eec27dcd1a7b76bdb5056149161b2707d5ed83e4234caf80cafd9d1271a86461035f69990e4da4 (root) VirtualMap state / settle-limit-dismiss-exotic | |||||||||
| node0 | 6m 38.262s | 2025-11-26 03:39:01.467 | 9939 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+36+25.652868180Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+32+39.512332205Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 6m 38.262s | 2025-11-26 03:39:01.467 | 9940 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 832 File: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+36+25.652868180Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 6m 38.265s | 2025-11-26 03:39:01.470 | 9941 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 6m 38.271s | 2025-11-26 03:39:01.476 | 9942 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 6m 38.272s | 2025-11-26 03:39:01.477 | 9943 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 859 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/859 {"round":859,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/859/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 6m 38.273s | 2025-11-26 03:39:01.478 | 9944 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/176 | |
| node1 | 6m 38.320s | 2025-11-26 03:39:01.525 | 9762 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 859 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/859 | |
| node1 | 6m 38.321s | 2025-11-26 03:39:01.526 | 9763 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for round 859 | |
| node2 | 6m 38.332s | 2025-11-26 03:39:01.537 | 9915 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 859 | |
| node2 | 6m 38.335s | 2025-11-26 03:39:01.540 | 9916 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 859 Timestamp: 2025-11-26T03:39:00.274044503Z Next consensus number: 25950 Legacy running event hash: b926dda7f1718c522e978c44b99393bbb5219e1d3fdd98fa15462dabf823f4d3908afb374f468da89e7fc08b66773adf Legacy running event mnemonic: swap-enroll-spin-into Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1443965293 Root hash: 5e232c0f71fa5a8056eec27dcd1a7b76bdb5056149161b2707d5ed83e4234caf80cafd9d1271a86461035f69990e4da4 (root) VirtualMap state / settle-limit-dismiss-exotic | |||||||||
| node2 | 6m 38.343s | 2025-11-26 03:39:01.548 | 9917 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+36+25.599323414Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+32+39.463379624Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 6m 38.343s | 2025-11-26 03:39:01.548 | 9918 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 832 File: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+36+25.599323414Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node2 | 6m 38.346s | 2025-11-26 03:39:01.551 | 9919 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 6m 38.353s | 2025-11-26 03:39:01.558 | 9920 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 6m 38.353s | 2025-11-26 03:39:01.558 | 9921 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 859 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/859 {"round":859,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/859/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 6m 38.355s | 2025-11-26 03:39:01.560 | 9922 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/176 | |
| node4 | 6m 38.355s | 2025-11-26 03:39:01.560 | 1819 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 859 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/859 | |
| node4 | 6m 38.356s | 2025-11-26 03:39:01.561 | 1820 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for round 859 | |
| node1 | 6m 38.399s | 2025-11-26 03:39:01.604 | 9794 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for round 859 | |
| node3 | 6m 38.399s | 2025-11-26 03:39:01.604 | 9855 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 859 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/859 | |
| node3 | 6m 38.400s | 2025-11-26 03:39:01.605 | 9856 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 859 | |
| node1 | 6m 38.402s | 2025-11-26 03:39:01.607 | 9795 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 859 Timestamp: 2025-11-26T03:39:00.274044503Z Next consensus number: 25950 Legacy running event hash: b926dda7f1718c522e978c44b99393bbb5219e1d3fdd98fa15462dabf823f4d3908afb374f468da89e7fc08b66773adf Legacy running event mnemonic: swap-enroll-spin-into Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1443965293 Root hash: 5e232c0f71fa5a8056eec27dcd1a7b76bdb5056149161b2707d5ed83e4234caf80cafd9d1271a86461035f69990e4da4 (root) VirtualMap state / settle-limit-dismiss-exotic | |||||||||
| node1 | 6m 38.408s | 2025-11-26 03:39:01.613 | 9796 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+36+25.603512864Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+32+39.459245356Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 6m 38.408s | 2025-11-26 03:39:01.613 | 9797 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 832 File: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+36+25.603512864Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 6m 38.408s | 2025-11-26 03:39:01.613 | 9798 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 6m 38.415s | 2025-11-26 03:39:01.620 | 9799 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 6m 38.415s | 2025-11-26 03:39:01.620 | 9800 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 859 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/859 {"round":859,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/859/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 6m 38.417s | 2025-11-26 03:39:01.622 | 9801 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/176 | |
| node3 | 6m 38.477s | 2025-11-26 03:39:01.682 | 9890 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 859 | |
| node3 | 6m 38.479s | 2025-11-26 03:39:01.684 | 9891 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 859 Timestamp: 2025-11-26T03:39:00.274044503Z Next consensus number: 25950 Legacy running event hash: b926dda7f1718c522e978c44b99393bbb5219e1d3fdd98fa15462dabf823f4d3908afb374f468da89e7fc08b66773adf Legacy running event mnemonic: swap-enroll-spin-into Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1443965293 Root hash: 5e232c0f71fa5a8056eec27dcd1a7b76bdb5056149161b2707d5ed83e4234caf80cafd9d1271a86461035f69990e4da4 (root) VirtualMap state / settle-limit-dismiss-exotic | |||||||||
| node3 | 6m 38.486s | 2025-11-26 03:39:01.691 | 9892 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+32+39.289745347Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+36+25.608612677Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 6m 38.486s | 2025-11-26 03:39:01.691 | 9893 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 832 File: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+36+25.608612677Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 6m 38.489s | 2025-11-26 03:39:01.694 | 9894 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 6m 38.491s | 2025-11-26 03:39:01.696 | 1868 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for round 859 | |
| node4 | 6m 38.493s | 2025-11-26 03:39:01.698 | 1869 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 859 Timestamp: 2025-11-26T03:39:00.274044503Z Next consensus number: 25950 Legacy running event hash: b926dda7f1718c522e978c44b99393bbb5219e1d3fdd98fa15462dabf823f4d3908afb374f468da89e7fc08b66773adf Legacy running event mnemonic: swap-enroll-spin-into Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1443965293 Root hash: 5e232c0f71fa5a8056eec27dcd1a7b76bdb5056149161b2707d5ed83e4234caf80cafd9d1271a86461035f69990e4da4 (root) VirtualMap state / settle-limit-dismiss-exotic | |||||||||
| node3 | 6m 38.496s | 2025-11-26 03:39:01.701 | 9895 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 6m 38.496s | 2025-11-26 03:39:01.701 | 9896 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 859 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/859 {"round":859,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/859/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 6m 38.497s | 2025-11-26 03:39:01.702 | 9897 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/176 | |
| node4 | 6m 38.503s | 2025-11-26 03:39:01.708 | 1870 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+32+39.660748371Z_seq0_minr1_maxr391_orgn0.pces Last file: data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+38+26.274262260Z_seq1_minr748_maxr1248_orgn775.pces | |||||||||
| node4 | 6m 38.503s | 2025-11-26 03:39:01.708 | 1871 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 832 File: data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+38+26.274262260Z_seq1_minr748_maxr1248_orgn775.pces | |||||||||
| node4 | 6m 38.503s | 2025-11-26 03:39:01.708 | 1872 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 6m 38.507s | 2025-11-26 03:39:01.712 | 1873 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 6m 38.508s | 2025-11-26 03:39:01.713 | 1874 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 859 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/859 {"round":859,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/859/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 6m 38.510s | 2025-11-26 03:39:01.715 | 1875 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1 | |
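The two BestEffortPcesFileCopy outcomes for node4 (nothing copied against lower bound 748 at the reconnect snapshot, one file copied against lower bound 832 here) are consistent with a simple filter on the round range encoded in each file name. A sketch of that inferred rule follows; it is reverse-engineered from these log lines, not taken from the platform's source:

```java
import java.util.List;

/**
 * Inferred from the log lines above, not from platform source: a PCES file appears to be copied
 * only when its maximum round is at or above the snapshot's lower bound.
 */
public class PcesCopyFilterSketch {
    // Hypothetical record for the round range encoded in a .pces file name (minrX_maxrY).
    record PcesFile(String name, long minRound, long maxRound) {}

    static List<PcesFile> filesToCopy(List<PcesFile> onDisk, long lowerBound) {
        return onDisk.stream().filter(f -> f.maxRound() >= lowerBound).toList();
    }

    public static void main(String[] args) {
        PcesFile old = new PcesFile("..._seq0_minr1_maxr391_orgn0.pces", 1, 391);
        PcesFile fresh = new PcesFile("..._seq1_minr748_maxr1248_orgn775.pces", 748, 1248);

        System.out.println(filesToCopy(List.of(old), 748));        // [] -> the earlier WARN
        System.out.println(filesToCopy(List.of(old, fresh), 832)); // [fresh] -> copied at round 859
    }
}
```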
| node1 | 7m 38.100s | 2025-11-26 03:40:01.305 | 11285 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 996 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 7m 38.107s | 2025-11-26 03:40:01.312 | 11376 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 996 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 7m 38.111s | 2025-11-26 03:40:01.316 | 11418 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 996 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 7m 38.191s | 2025-11-26 03:40:01.396 | 11400 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 996 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 7m 38.223s | 2025-11-26 03:40:01.428 | 3368 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 996 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 7m 38.272s | 2025-11-26 03:40:01.477 | 11421 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 996 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/996 | |
| node0 | 7m 38.273s | 2025-11-26 03:40:01.478 | 11422 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 996 | |
| node2 | 7m 38.304s | 2025-11-26 03:40:01.509 | 11403 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 996 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/996 | |
| node2 | 7m 38.304s | 2025-11-26 03:40:01.509 | 11404 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 996 | |
| node0 | 7m 38.354s | 2025-11-26 03:40:01.559 | 11455 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 996 | |
| node0 | 7m 38.356s | 2025-11-26 03:40:01.561 | 11456 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 996 Timestamp: 2025-11-26T03:40:00.433932Z Next consensus number: 30829 Legacy running event hash: 44288322d441b7670866c3c6af93a0e2c6eb3e1e64e3bf13738621aba4fa88194ed8c8ac840cad79c172ea7c164a7240 Legacy running event mnemonic: lunar-orange-jewel-basket Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -972711963 Root hash: 5a4879efa209270fbe39dfb762364d188406d360248b1858e48375ff4ec076b5a51816a97fdf888aa2d0be239b7167a4 (root) VirtualMap state / empower-digital-same-tiger | |||||||||
| node0 | 7m 38.362s | 2025-11-26 03:40:01.567 | 11457 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+36+25.652868180Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+32+39.512332205Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 7m 38.362s | 2025-11-26 03:40:01.567 | 11458 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 968 File: data/saved/preconsensus-events/0/2025/11/26/2025-11-26T03+36+25.652868180Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 7m 38.363s | 2025-11-26 03:40:01.568 | 11459 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 7m 38.368s | 2025-11-26 03:40:01.573 | 3371 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 996 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/996 | |
| node4 | 7m 38.368s | 2025-11-26 03:40:01.573 | 3372 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/17 for round 996 | |
| node0 | 7m 38.373s | 2025-11-26 03:40:01.578 | 11474 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 7m 38.373s | 2025-11-26 03:40:01.578 | 11475 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 996 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/996 {"round":996,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/996/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 7m 38.375s | 2025-11-26 03:40:01.580 | 11476 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/312 | |
| node2 | 7m 38.393s | 2025-11-26 03:40:01.598 | 11451 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 996 | |
| node2 | 7m 38.396s | 2025-11-26 03:40:01.601 | 11452 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 996 Timestamp: 2025-11-26T03:40:00.433932Z Next consensus number: 30829 Legacy running event hash: 44288322d441b7670866c3c6af93a0e2c6eb3e1e64e3bf13738621aba4fa88194ed8c8ac840cad79c172ea7c164a7240 Legacy running event mnemonic: lunar-orange-jewel-basket Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -972711963 Root hash: 5a4879efa209270fbe39dfb762364d188406d360248b1858e48375ff4ec076b5a51816a97fdf888aa2d0be239b7167a4 (root) VirtualMap state / empower-digital-same-tiger | |||||||||
| node2 | 7m 38.403s | 2025-11-26 03:40:01.608 | 11453 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+36+25.599323414Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+32+39.463379624Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 7m 38.403s | 2025-11-26 03:40:01.608 | 11454 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 968 File: data/saved/preconsensus-events/2/2025/11/26/2025-11-26T03+36+25.599323414Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node2 | 7m 38.404s | 2025-11-26 03:40:01.609 | 11455 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 7m 38.408s | 2025-11-26 03:40:01.613 | 11379 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 996 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/996 | |
| node3 | 7m 38.408s | 2025-11-26 03:40:01.613 | 11380 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 996 | |
| node2 | 7m 38.414s | 2025-11-26 03:40:01.619 | 11456 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 7m 38.415s | 2025-11-26 03:40:01.620 | 11457 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 996 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/996 {"round":996,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/996/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 7m 38.417s | 2025-11-26 03:40:01.622 | 11458 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/312 | |
| node1 | 7m 38.462s | 2025-11-26 03:40:01.667 | 11288 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 996 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/996 | |
| node1 | 7m 38.462s | 2025-11-26 03:40:01.667 | 11289 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/59 for round 996 | |
| node3 | 7m 38.486s | 2025-11-26 03:40:01.691 | 11416 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 996 | |
| node3 | 7m 38.488s | 2025-11-26 03:40:01.693 | 11417 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 996 Timestamp: 2025-11-26T03:40:00.433932Z Next consensus number: 30829 Legacy running event hash: 44288322d441b7670866c3c6af93a0e2c6eb3e1e64e3bf13738621aba4fa88194ed8c8ac840cad79c172ea7c164a7240 Legacy running event mnemonic: lunar-orange-jewel-basket Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -972711963 Root hash: 5a4879efa209270fbe39dfb762364d188406d360248b1858e48375ff4ec076b5a51816a97fdf888aa2d0be239b7167a4 (root) VirtualMap state / empower-digital-same-tiger | |||||||||
| node3 | 7m 38.494s | 2025-11-26 03:40:01.699 | 11418 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+32+39.289745347Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+36+25.608612677Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 7m 38.494s | 2025-11-26 03:40:01.699 | 11419 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 968 File: data/saved/preconsensus-events/3/2025/11/26/2025-11-26T03+36+25.608612677Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 7m 38.495s | 2025-11-26 03:40:01.700 | 11420 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 7m 38.504s | 2025-11-26 03:40:01.709 | 11421 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 7m 38.505s | 2025-11-26 03:40:01.710 | 11422 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 996 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/996 {"round":996,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/996/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 7m 38.506s | 2025-11-26 03:40:01.711 | 11423 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/312 | |
| node4 | 7m 38.537s | 2025-11-26 03:40:01.742 | 3419 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/17 for round 996 | |
| node4 | 7m 38.539s | 2025-11-26 03:40:01.744 | 3420 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 996 Timestamp: 2025-11-26T03:40:00.433932Z Next consensus number: 30829 Legacy running event hash: 44288322d441b7670866c3c6af93a0e2c6eb3e1e64e3bf13738621aba4fa88194ed8c8ac840cad79c172ea7c164a7240 Legacy running event mnemonic: lunar-orange-jewel-basket Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -972711963 Root hash: 5a4879efa209270fbe39dfb762364d188406d360248b1858e48375ff4ec076b5a51816a97fdf888aa2d0be239b7167a4 (root) VirtualMap state / empower-digital-same-tiger | |||||||||
| node4 | 7m 38.546s | 2025-11-26 03:40:01.751 | 3421 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+32+39.660748371Z_seq0_minr1_maxr391_orgn0.pces Last file: data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+38+26.274262260Z_seq1_minr748_maxr1248_orgn775.pces | |||||||||
| node4 | 7m 38.546s | 2025-11-26 03:40:01.751 | 3422 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 968 File: data/saved/preconsensus-events/4/2025/11/26/2025-11-26T03+38+26.274262260Z_seq1_minr748_maxr1248_orgn775.pces | |||||||||
| node4 | 7m 38.546s | 2025-11-26 03:40:01.751 | 3423 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 7m 38.547s | 2025-11-26 03:40:01.752 | 11330 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | MerkleTreeSnapshotWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/59 for round 996 | |
| node1 | 7m 38.549s | 2025-11-26 03:40:01.754 | 11331 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 996 Timestamp: 2025-11-26T03:40:00.433932Z Next consensus number: 30829 Legacy running event hash: 44288322d441b7670866c3c6af93a0e2c6eb3e1e64e3bf13738621aba4fa88194ed8c8ac840cad79c172ea7c164a7240 Legacy running event mnemonic: lunar-orange-jewel-basket Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -972711963 Root hash: 5a4879efa209270fbe39dfb762364d188406d360248b1858e48375ff4ec076b5a51816a97fdf888aa2d0be239b7167a4 (root) VirtualMap state / empower-digital-same-tiger | |||||||||
| node4 | 7m 38.552s | 2025-11-26 03:40:01.757 | 3424 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 7m 38.553s | 2025-11-26 03:40:01.758 | 3425 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 996 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/996 {"round":996,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/996/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 7m 38.554s | 2025-11-26 03:40:01.759 | 3426 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/46 | |
| node1 | 7m 38.555s | 2025-11-26 03:40:01.760 | 11332 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+36+25.603512864Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+32+39.459245356Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 7m 38.555s | 2025-11-26 03:40:01.760 | 11333 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 968 File: data/saved/preconsensus-events/1/2025/11/26/2025-11-26T03+36+25.603512864Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 7m 38.555s | 2025-11-26 03:40:01.760 | 11334 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 7m 38.565s | 2025-11-26 03:40:01.770 | 11335 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 7m 38.566s | 2025-11-26 03:40:01.771 | 11336 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 996 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/996 {"round":996,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/996/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 7m 38.567s | 2025-11-26 03:40:01.772 | 11337 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/312 | |
| node2 | 7m 57.870s | 2025-11-26 03:40:21.075 | 11876 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith0 2 to 0>> | NetworkUtils: | Connection broken: 2 <- 0 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-11-26T03:40:21.073821252Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node2 | 7m 57.932s | 2025-11-26 03:40:21.137 | 11877 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith3 2 to 3>> | NetworkUtils: | Connection broken: 2 -> 3 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-11-26T03:40:21.136144501Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node2 | 7m 58.093s | 2025-11-26 03:40:21.298 | 11878 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 2 to 4>> | NetworkUtils: | Connection broken: 2 -> 4 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-11-26T03:40:21.296153578Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||