Log viewer export (filter controls: Node ID, Columns, Log Level, Log Marker, Class).
Columns: node, elapsed time, timestamp, sequence number, log level, log marker, thread, class, message.
node1 0.000ns 2025-11-05 05:46:23.330 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node2 24.000ms 2025-11-05 05:46:23.354 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node1 87.000ms 2025-11-05 05:46:23.417 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node1 103.000ms 2025-11-05 05:46:23.433 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 112.000ms 2025-11-05 05:46:23.442 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node2 128.000ms 2025-11-05 05:46:23.458 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
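The WARN records above show the platform still honoring the old property name after a rename. A minimal, hypothetical sketch of such an alias fallback (the table and function names are invented for illustration; this is not the PlatformConfigUtils implementation):

```python
# Hypothetical alias table: old property name -> new property name.
RENAMED = {
    "reconnect.asyncOutputStreamFlushMilliseconds": "reconnect.asyncOutputStreamFlush",
}

def lookup(props: dict, name: str):
    """Resolve a property value, accepting a renamed key but warning about it."""
    for old, new in RENAMED.items():
        if new == name and old in props and name not in props:
            print(f"WARN: Configuration property '{old}' was renamed to '{new}'")
            return props[old]
    return props[name]
```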
node1 213.000ms 2025-11-05 05:46:23.543 4 INFO STARTUP <main> Browser: The following nodes [1] are set to run locally
node2 237.000ms 2025-11-05 05:46:23.567 4 INFO STARTUP <main> Browser: The following nodes [2] are set to run locally
node1 244.000ms 2025-11-05 05:46:23.574 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node2 266.000ms 2025-11-05 05:46:23.596 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node4 316.000ms 2025-11-05 05:46:23.646 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node4 404.000ms 2025-11-05 05:46:23.734 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node4 420.000ms 2025-11-05 05:46:23.750 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 530.000ms 2025-11-05 05:46:23.860 4 INFO STARTUP <main> Browser: The following nodes [4] are set to run locally
node4 559.000ms 2025-11-05 05:46:23.889 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node0 1.438s 2025-11-05 05:46:24.768 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node2 1.443s 2025-11-05 05:46:24.773 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1176ms
node2 1.455s 2025-11-05 05:46:24.785 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node2 1.459s 2025-11-05 05:46:24.789 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node1 1.485s 2025-11-05 05:46:24.815 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1240ms
node1 1.496s 2025-11-05 05:46:24.826 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node2 1.497s 2025-11-05 05:46:24.827 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node1 1.500s 2025-11-05 05:46:24.830 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node0 1.531s 2025-11-05 05:46:24.861 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node1 1.540s 2025-11-05 05:46:24.870 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node0 1.549s 2025-11-05 05:46:24.879 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 1.556s 2025-11-05 05:46:24.886 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node2 1.557s 2025-11-05 05:46:24.887 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node1 1.604s 2025-11-05 05:46:24.934 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node1 1.604s 2025-11-05 05:46:24.934 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node0 1.670s 2025-11-05 05:46:25.000 4 INFO STARTUP <main> Browser: The following nodes [0] are set to run locally
node0 1.702s 2025-11-05 05:46:25.032 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node4 1.799s 2025-11-05 05:46:25.129 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1240ms
node4 1.808s 2025-11-05 05:46:25.138 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node4 1.812s 2025-11-05 05:46:25.142 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 1.852s 2025-11-05 05:46:25.182 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node4 1.914s 2025-11-05 05:46:25.244 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node4 1.914s 2025-11-05 05:46:25.244 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node3 2.907s 2025-11-05 05:46:26.237 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node3 3.006s 2025-11-05 05:46:26.336 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node3 3.023s 2025-11-05 05:46:26.353 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node3 3.153s 2025-11-05 05:46:26.483 4 INFO STARTUP <main> Browser: The following nodes [3] are set to run locally
node3 3.187s 2025-11-05 05:46:26.517 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node0 3.309s 2025-11-05 05:46:26.639 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1606ms
node0 3.319s 2025-11-05 05:46:26.649 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node0 3.322s 2025-11-05 05:46:26.652 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node0 3.366s 2025-11-05 05:46:26.696 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node0 3.436s 2025-11-05 05:46:26.766 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node0 3.437s 2025-11-05 05:46:26.767 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node2 3.605s 2025-11-05 05:46:26.935 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node1 3.617s 2025-11-05 05:46:26.947 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node2 3.689s 2025-11-05 05:46:27.019 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 3.691s 2025-11-05 05:46:27.021 16 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node1 3.707s 2025-11-05 05:46:27.037 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 3.709s 2025-11-05 05:46:27.039 16 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node2 3.725s 2025-11-05 05:46:27.055 21 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node1 3.742s 2025-11-05 05:46:27.072 21 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 3.919s 2025-11-05 05:46:27.249 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node4 4.004s 2025-11-05 05:46:27.334 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 4.007s 2025-11-05 05:46:27.337 16 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node4 4.040s 2025-11-05 05:46:27.370 21 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node2 4.475s 2025-11-05 05:46:27.805 24 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 4.477s 2025-11-05 05:46:27.807 27 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node2 4.482s 2025-11-05 05:46:27.812 28 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node2 4.492s 2025-11-05 05:46:27.822 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 4.494s 2025-11-05 05:46:27.824 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 4.510s 2025-11-05 05:46:27.840 24 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 4.512s 2025-11-05 05:46:27.842 27 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node1 4.516s 2025-11-05 05:46:27.846 28 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node1 4.527s 2025-11-05 05:46:27.857 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 4.530s 2025-11-05 05:46:27.860 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 4.685s 2025-11-05 05:46:28.015 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1497ms
node3 4.694s 2025-11-05 05:46:28.024 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node3 4.698s 2025-11-05 05:46:28.028 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node3 4.736s 2025-11-05 05:46:28.066 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node4 4.796s 2025-11-05 05:46:28.126 24 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 4.799s 2025-11-05 05:46:28.129 27 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node3 4.800s 2025-11-05 05:46:28.130 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node3 4.801s 2025-11-05 05:46:28.131 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node4 4.805s 2025-11-05 05:46:28.135 28 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node4 4.815s 2025-11-05 05:46:28.145 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 4.817s 2025-11-05 05:46:28.147 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 5.442s 2025-11-05 05:46:28.772 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node0 5.534s 2025-11-05 05:46:28.864 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 5.537s 2025-11-05 05:46:28.867 16 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node0 5.572s 2025-11-05 05:46:28.902 21 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node2 5.609s 2025-11-05 05:46:28.939 31 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26200059]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=197710, randomLong=-1947539957708869057, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=12150, randomLong=6558414597405514767, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1272530, data=35, exception=null]
OS Health Check Report - Complete (took 1025 ms)
node2 5.642s 2025-11-05 05:46:28.972 32 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node1 5.644s 2025-11-05 05:46:28.974 31 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26162753]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=180370, randomLong=8128855424914781345, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=14991, randomLong=-216921455924346577, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1401431, data=35, exception=null]
OS Health Check Report - Complete (took 1024 ms)
node2 5.650s 2025-11-05 05:46:28.980 33 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node2 5.651s 2025-11-05 05:46:28.981 34 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node1 5.674s 2025-11-05 05:46:29.004 32 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node1 5.682s 2025-11-05 05:46:29.012 33 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node1 5.683s 2025-11-05 05:46:29.013 34 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node2 5.739s 2025-11-05 05:46:29.069 35 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "I7jaQQ==", "port": 30124 }, { "ipAddressV4": "CoAAew==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "Ij8KNA==", "port": 30125 }, { "ipAddressV4": "CoAAfQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Iixvyw==", "port": 30126 }, { "ipAddressV4": "CoAAfA==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "iHcLGg==", "port": 30127 }, { "ipAddressV4": "CoAAfg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IodvrA==", "port": 30128 }, { "ipAddressV4": "CoAAeg==", "port": 30128 }] }] }
node1 5.763s 2025-11-05 05:46:29.093 35 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "I7jaQQ==", "port": 30124 }, { "ipAddressV4": "CoAAew==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "Ij8KNA==", "port": 30125 }, { "ipAddressV4": "CoAAfQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Iixvyw==", "port": 30126 }, { "ipAddressV4": "CoAAfA==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "iHcLGg==", "port": 30127 }, { "ipAddressV4": "CoAAfg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IodvrA==", "port": 30128 }, { "ipAddressV4": "CoAAeg==", "port": 30128 }] }] }
node2 5.765s 2025-11-05 05:46:29.095 36 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/2/ConsistencyTestLog.csv
node2 5.765s 2025-11-05 05:46:29.095 37 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node2 5.778s 2025-11-05 05:46:29.108 38 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 388914719c8f04aa13461a15d534ca6c83ca09c833caaca9eb86cc1f26d6278b80236d5bef0e6aae8075d32b4ffe9d35
(root) VirtualMap state / bachelor-mixed-hotel-next
node2 5.781s 2025-11-05 05:46:29.111 40 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node1 5.786s 2025-11-05 05:46:29.116 36 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/1/ConsistencyTestLog.csv
node1 5.786s 2025-11-05 05:46:29.116 37 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node1 5.797s 2025-11-05 05:46:29.127 38 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 388914719c8f04aa13461a15d534ca6c83ca09c833caaca9eb86cc1f26d6278b80236d5bef0e6aae8075d32b4ffe9d35
(root) VirtualMap state / bachelor-mixed-hotel-next
node1 5.800s 2025-11-05 05:46:29.130 40 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node4 5.949s 2025-11-05 05:46:29.279 31 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26367285]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=187460, randomLong=2375020419072636113, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=13010, randomLong=-828692716465707081, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1362579, data=35, exception=null]
OS Health Check Report - Complete (took 1023 ms)
node4 5.981s 2025-11-05 05:46:29.311 32 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node2 5.987s 2025-11-05 05:46:29.317 41 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node4 5.989s 2025-11-05 05:46:29.319 33 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node2 5.991s 2025-11-05 05:46:29.321 42 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node4 5.991s 2025-11-05 05:46:29.321 34 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node2 5.996s 2025-11-05 05:46:29.326 43 INFO STARTUP <<start-node-2>> ConsistencyTestingToolMain: init called in Main for node 2.
node2 5.997s 2025-11-05 05:46:29.327 44 INFO STARTUP <<start-node-2>> SwirldsPlatform: Starting platform 2
node2 5.998s 2025-11-05 05:46:29.328 45 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node2 6.001s 2025-11-05 05:46:29.331 46 INFO STARTUP <<start-node-2>> CycleFinder: No cyclical back pressure detected in wiring model.
node2 6.002s 2025-11-05 05:46:29.332 47 INFO STARTUP <<start-node-2>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node2 6.003s 2025-11-05 05:46:29.333 48 INFO STARTUP <<start-node-2>> InputWireChecks: All input wires have been bound.
node1 6.004s 2025-11-05 05:46:29.334 41 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node2 6.004s 2025-11-05 05:46:29.334 49 WARN STARTUP <<start-node-2>> PcesFileTracker: No preconsensus event files available
node2 6.004s 2025-11-05 05:46:29.334 50 INFO STARTUP <<start-node-2>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node2 6.006s 2025-11-05 05:46:29.336 51 INFO STARTUP <<start-node-2>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node2 6.007s 2025-11-05 05:46:29.337 52 INFO STARTUP <<app: appMain 2>> ConsistencyTestingToolMain: run called in Main.
node1 6.008s 2025-11-05 05:46:29.338 42 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node2 6.009s 2025-11-05 05:46:29.339 53 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 174.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node1 6.012s 2025-11-05 05:46:29.342 43 INFO STARTUP <<start-node-1>> ConsistencyTestingToolMain: init called in Main for node 1.
node1 6.013s 2025-11-05 05:46:29.343 44 INFO STARTUP <<start-node-1>> SwirldsPlatform: Starting platform 1
node1 6.014s 2025-11-05 05:46:29.344 45 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node2 6.014s 2025-11-05 05:46:29.344 54 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node1 6.017s 2025-11-05 05:46:29.347 46 INFO STARTUP <<start-node-1>> CycleFinder: No cyclical back pressure detected in wiring model.
node1 6.018s 2025-11-05 05:46:29.348 47 INFO STARTUP <<start-node-1>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node1 6.019s 2025-11-05 05:46:29.349 48 INFO STARTUP <<start-node-1>> InputWireChecks: All input wires have been bound.
node1 6.021s 2025-11-05 05:46:29.351 49 WARN STARTUP <<start-node-1>> PcesFileTracker: No preconsensus event files available
node1 6.021s 2025-11-05 05:46:29.351 50 INFO STARTUP <<start-node-1>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node1 6.023s 2025-11-05 05:46:29.353 51 INFO STARTUP <<start-node-1>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node1 6.024s 2025-11-05 05:46:29.354 52 INFO STARTUP <<app: appMain 1>> ConsistencyTestingToolMain: run called in Main.
node1 6.026s 2025-11-05 05:46:29.356 53 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 173.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node1 6.032s 2025-11-05 05:46:29.362 54 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node4 6.074s 2025-11-05 05:46:29.404 35 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "I7jaQQ==", "port": 30124 }, { "ipAddressV4": "CoAAew==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "Ij8KNA==", "port": 30125 }, { "ipAddressV4": "CoAAfQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Iixvyw==", "port": 30126 }, { "ipAddressV4": "CoAAfA==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "iHcLGg==", "port": 30127 }, { "ipAddressV4": "CoAAfg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IodvrA==", "port": 30128 }, { "ipAddressV4": "CoAAeg==", "port": 30128 }] }] }
node4 6.098s 2025-11-05 05:46:29.428 36 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 6.098s 2025-11-05 05:46:29.428 37 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node4 6.110s 2025-11-05 05:46:29.440 38 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 388914719c8f04aa13461a15d534ca6c83ca09c833caaca9eb86cc1f26d6278b80236d5bef0e6aae8075d32b4ffe9d35
(root) VirtualMap state / bachelor-mixed-hotel-next
node4 6.113s 2025-11-05 05:46:29.443 40 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node4 6.308s 2025-11-05 05:46:29.638 41 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node4 6.313s 2025-11-05 05:46:29.643 42 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node4 6.319s 2025-11-05 05:46:29.649 43 INFO STARTUP <<start-node-4>> ConsistencyTestingToolMain: init called in Main for node 4.
node4 6.320s 2025-11-05 05:46:29.650 44 INFO STARTUP <<start-node-4>> SwirldsPlatform: Starting platform 4
node4 6.321s 2025-11-05 05:46:29.651 45 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node4 6.325s 2025-11-05 05:46:29.655 46 INFO STARTUP <<start-node-4>> CycleFinder: No cyclical back pressure detected in wiring model.
node4 6.326s 2025-11-05 05:46:29.656 47 INFO STARTUP <<start-node-4>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node4 6.327s 2025-11-05 05:46:29.657 48 INFO STARTUP <<start-node-4>> InputWireChecks: All input wires have been bound.
node4 6.328s 2025-11-05 05:46:29.658 49 WARN STARTUP <<start-node-4>> PcesFileTracker: No preconsensus event files available
node4 6.329s 2025-11-05 05:46:29.659 50 INFO STARTUP <<start-node-4>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node4 6.331s 2025-11-05 05:46:29.661 51 INFO STARTUP <<start-node-4>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node4 6.333s 2025-11-05 05:46:29.663 52 INFO STARTUP <<app: appMain 4>> ConsistencyTestingToolMain: run called in Main.
node4 6.335s 2025-11-05 05:46:29.665 53 INFO PLATFORM_STATUS <platformForkJoinThread-5> StatusStateMachine: Platform spent 167.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node4 6.340s 2025-11-05 05:46:29.670 54 INFO PLATFORM_STATUS <platformForkJoinThread-5> StatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node0 6.368s 2025-11-05 05:46:29.698 24 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 6.371s 2025-11-05 05:46:29.701 27 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node0 6.378s 2025-11-05 05:46:29.708 28 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node0 6.392s 2025-11-05 05:46:29.722 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 6.395s 2025-11-05 05:46:29.725 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 6.907s 2025-11-05 05:46:30.237 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node3 7.019s 2025-11-05 05:46:30.349 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 7.021s 2025-11-05 05:46:30.351 16 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node3 7.062s 2025-11-05 05:46:30.392 21 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node0 7.536s 2025-11-05 05:46:30.866 31 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26050035]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=194730, randomLong=-5708164561034836697, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=15990, randomLong=-7927570380939472454, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1513309, data=35, exception=null]
OS Health Check Report - Complete (took 1027 ms)
node0 7.569s 2025-11-05 05:46:30.899 32 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node0 7.578s 2025-11-05 05:46:30.908 33 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node0 7.580s 2025-11-05 05:46:30.910 34 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node0 7.669s 2025-11-05 05:46:30.999 35 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "I7jaQQ==", "port": 30124 }, { "ipAddressV4": "CoAAew==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "Ij8KNA==", "port": 30125 }, { "ipAddressV4": "CoAAfQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Iixvyw==", "port": 30126 }, { "ipAddressV4": "CoAAfA==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "iHcLGg==", "port": 30127 }, { "ipAddressV4": "CoAAfg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IodvrA==", "port": 30128 }, { "ipAddressV4": "CoAAeg==", "port": 30128 }] }] }
node0 7.694s 2025-11-05 05:46:31.024 36 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/0/ConsistencyTestLog.csv
node0 7.695s 2025-11-05 05:46:31.025 37 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node0 7.708s 2025-11-05 05:46:31.038 38 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 388914719c8f04aa13461a15d534ca6c83ca09c833caaca9eb86cc1f26d6278b80236d5bef0e6aae8075d32b4ffe9d35
(root) VirtualMap state / bachelor-mixed-hotel-next
node0 7.712s 2025-11-05 05:46:31.042 40 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node0 7.951s 2025-11-05 05:46:31.281 41 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node0 7.956s 2025-11-05 05:46:31.286 42 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node3 7.959s 2025-11-05 05:46:31.289 24 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 7.961s 2025-11-05 05:46:31.291 43 INFO STARTUP <<start-node-0>> ConsistencyTestingToolMain: init called in Main for node 0.
node3 7.961s 2025-11-05 05:46:31.291 26 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node0 7.962s 2025-11-05 05:46:31.292 44 INFO STARTUP <<start-node-0>> SwirldsPlatform: Starting platform 0
node0 7.963s 2025-11-05 05:46:31.293 45 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node0 7.968s 2025-11-05 05:46:31.298 46 INFO STARTUP <<start-node-0>> CycleFinder: No cyclical back pressure detected in wiring model.
node3 7.968s 2025-11-05 05:46:31.298 28 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node0 7.969s 2025-11-05 05:46:31.299 47 INFO STARTUP <<start-node-0>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node0 7.970s 2025-11-05 05:46:31.300 48 INFO STARTUP <<start-node-0>> InputWireChecks: All input wires have been bound.
node0 7.972s 2025-11-05 05:46:31.302 49 WARN STARTUP <<start-node-0>> PcesFileTracker: No preconsensus event files available
node0 7.973s 2025-11-05 05:46:31.303 50 INFO STARTUP <<start-node-0>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node0 7.975s 2025-11-05 05:46:31.305 51 INFO STARTUP <<start-node-0>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node0 7.976s 2025-11-05 05:46:31.306 52 INFO STARTUP <<app: appMain 0>> ConsistencyTestingToolMain: run called in Main.
node0 7.980s 2025-11-05 05:46:31.310 53 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 209.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node3 7.981s 2025-11-05 05:46:31.311 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 7.984s 2025-11-05 05:46:31.314 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 7.987s 2025-11-05 05:46:31.317 54 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 5.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node2 9.011s 2025-11-05 05:46:32.341 55 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting2.csv' ]
node2 9.014s 2025-11-05 05:46:32.344 56 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node1 9.029s 2025-11-05 05:46:32.359 55 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting1.csv' ]
node1 9.032s 2025-11-05 05:46:32.362 56 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node3 9.109s 2025-11-05 05:46:32.439 31 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26167221]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=359320, randomLong=6796697164491561484, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=22400, randomLong=-3043953982070668760, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1777868, data=35, exception=null]
OS Health Check Report - Complete (took 1026 ms)
node3 9.143s 2025-11-05 05:46:32.473 32 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node3 9.152s 2025-11-05 05:46:32.482 33 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node3 9.154s 2025-11-05 05:46:32.484 34 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node3 9.253s 2025-11-05 05:46:32.583 35 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "I7jaQQ==", "port": 30124 }, { "ipAddressV4": "CoAAew==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "Ij8KNA==", "port": 30125 }, { "ipAddressV4": "CoAAfQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Iixvyw==", "port": 30126 }, { "ipAddressV4": "CoAAfA==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "iHcLGg==", "port": 30127 }, { "ipAddressV4": "CoAAfg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IodvrA==", "port": 30128 }, { "ipAddressV4": "CoAAeg==", "port": 30128 }] }] }
node3 9.282s 2025-11-05 05:46:32.612 36 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/3/ConsistencyTestLog.csv
node3 9.283s 2025-11-05 05:46:32.613 37 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node3 9.300s 2025-11-05 05:46:32.630 38 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 388914719c8f04aa13461a15d534ca6c83ca09c833caaca9eb86cc1f26d6278b80236d5bef0e6aae8075d32b4ffe9d35
(root) VirtualMap state / bachelor-mixed-hotel-next
node3 9.305s 2025-11-05 05:46:32.635 40 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node4 9.331s 2025-11-05 05:46:32.661 55 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ]
node4 9.334s 2025-11-05 05:46:32.664 56 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node3 9.544s 2025-11-05 05:46:32.874 41 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node3 9.550s 2025-11-05 05:46:32.880 42 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node3 9.556s 2025-11-05 05:46:32.886 43 INFO STARTUP <<start-node-3>> ConsistencyTestingToolMain: init called in Main for node 3.
node3 9.557s 2025-11-05 05:46:32.887 44 INFO STARTUP <<start-node-3>> SwirldsPlatform: Starting platform 3
node3 9.559s 2025-11-05 05:46:32.889 45 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node3 9.562s 2025-11-05 05:46:32.892 46 INFO STARTUP <<start-node-3>> CycleFinder: No cyclical back pressure detected in wiring model.
node3 9.564s 2025-11-05 05:46:32.894 47 INFO STARTUP <<start-node-3>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node3 9.565s 2025-11-05 05:46:32.895 48 INFO STARTUP <<start-node-3>> InputWireChecks: All input wires have been bound.
node3 9.567s 2025-11-05 05:46:32.897 49 WARN STARTUP <<start-node-3>> PcesFileTracker: No preconsensus event files available
node3 9.568s 2025-11-05 05:46:32.898 50 INFO STARTUP <<start-node-3>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node3 9.570s 2025-11-05 05:46:32.900 51 INFO STARTUP <<start-node-3>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node3 9.571s 2025-11-05 05:46:32.901 52 INFO STARTUP <<app: appMain 3>> ConsistencyTestingToolMain: run called in Main.
node3 9.573s 2025-11-05 05:46:32.903 53 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 198.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node3 9.578s 2025-11-05 05:46:32.908 54 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node0 10.978s 2025-11-05 05:46:34.308 55 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting0.csv' ]
node0 10.981s 2025-11-05 05:46:34.311 56 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node3 12.569s 2025-11-05 05:46:35.899 55 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting3.csv' ]
node3 12.572s 2025-11-05 05:46:35.902 56 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node2 16.104s 2025-11-05 05:46:39.434 57 INFO PLATFORM_STATUS <platformForkJoinThread-6> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node1 16.121s 2025-11-05 05:46:39.451 57 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node4 16.428s 2025-11-05 05:46:39.758 57 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node0 18.072s 2025-11-05 05:46:41.402 57 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node2 18.930s 2025-11-05 05:46:42.260 58 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node0 19.036s 2025-11-05 05:46:42.366 58 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node3 19.085s 2025-11-05 05:46:42.415 57 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node1 19.116s 2025-11-05 05:46:42.446 58 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node4 19.127s 2025-11-05 05:46:42.457 58 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node2 19.593s 2025-11-05 05:46:42.923 59 INFO PLATFORM_STATUS <platformForkJoinThread-7> StatusStateMachine: Platform spent 3.5 s in CHECKING. Now in ACTIVE
node2 19.596s 2025-11-05 05:46:42.926 61 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node0 19.613s 2025-11-05 05:46:42.943 59 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 1.5 s in CHECKING. Now in ACTIVE
node0 19.620s 2025-11-05 05:46:42.950 61 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node3 19.666s 2025-11-05 05:46:42.996 58 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node1 19.673s 2025-11-05 05:46:43.003 59 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 3.5 s in CHECKING. Now in ACTIVE
node1 19.675s 2025-11-05 05:46:43.005 61 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node4 19.708s 2025-11-05 05:46:43.038 59 INFO PLATFORM_STATUS <platformForkJoinThread-8> StatusStateMachine: Platform spent 3.3 s in CHECKING. Now in ACTIVE
node4 19.712s 2025-11-05 05:46:43.042 61 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node3 19.763s 2025-11-05 05:46:43.093 60 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node3 19.814s 2025-11-05 05:46:43.144 75 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2
node3 19.816s 2025-11-05 05:46:43.146 76 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node0 19.942s 2025-11-05 05:46:43.272 76 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2
node0 19.945s 2025-11-05 05:46:43.275 77 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node4 19.980s 2025-11-05 05:46:43.310 76 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2
node4 19.982s 2025-11-05 05:46:43.312 77 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node2 19.995s 2025-11-05 05:46:43.325 76 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2
node2 19.997s 2025-11-05 05:46:43.327 77 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node1 19.999s 2025-11-05 05:46:43.329 76 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2
node1 20.001s 2025-11-05 05:46:43.331 77 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node3 20.081s 2025-11-05 05:46:43.411 108 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node3 20.085s 2025-11-05 05:46:43.415 109 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2
Timestamp: 2025-11-05T05:46:41.786240Z
Next consensus number: 12
Legacy running event hash: 68ffcdac671d4d5a78889440f60e099e7e5995eaf32bda0e2c98ccbc04b17c9e46243bddb1a3952feac85bc5d6ab85cd
Legacy running event mnemonic: cable-enhance-together-prize
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -799502542
Root hash: ebca762d0a3fe82a26243c78dbe66987f1a6c86b26742d2e446e7a82b2efac42ef3ff8840b89066d1c1e2fb7ddd5c2a8
(root) VirtualMap state / man-shield-elevator-width
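The state block above is a plain one-field-per-line listing, which makes it easy to fold into a map when diffing states across nodes (all five nodes report the same round-2 root hash in the blocks that follow). A small sketch, assuming the "Key: value" layout shown here; StateInfoParser is an illustrative name:

import java.util.LinkedHashMap;
import java.util.Map;

public class StateInfoParser {
    /** Parses "Key: value" lines into an ordered map; lines without a "key: value" separator are skipped. */
    static Map<String, String> parse(String block) {
        Map<String, String> fields = new LinkedHashMap<>();
        for (String line : block.split("\n")) {
            int colon = line.indexOf(": ");
            if (colon > 0) {
                fields.put(line.substring(0, colon).trim(), line.substring(colon + 2).trim());
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        String block = "Round: 2\nTimestamp: 2025-11-05T05:46:41.786240Z\nRounds non-ancient: 26";
        System.out.println(parse(block)); // {Round=2, Timestamp=2025-11-05T05:46:41.786240Z, Rounds non-ancient=26}
    }
}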
node3 20.125s 2025-11-05 05:46:43.455 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+46+39.682298882Z_seq0_minr1_maxr501_orgn0.pces
node3 20.126s 2025-11-05 05:46:43.456 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1
File: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+46+39.682298882Z_seq0_minr1_maxr501_orgn0.pces
node3 20.126s 2025-11-05 05:46:43.456 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 20.127s 2025-11-05 05:46:43.457 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 20.134s 2025-11-05 05:46:43.464 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
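The "Finished writing state" line above ends with a machine-readable JSON payload followed by the payload class name in brackets. A hedged sketch of one way to lift that JSON out of such lines for scripted checks (StateSavedPayloadExtractor is an illustrative name; the regex only assumes a single un-nested object before the bracketed class name):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StateSavedPayloadExtractor {
    // Matches the JSON object followed by the bracketed payload class name.
    private static final Pattern PAYLOAD = Pattern.compile(
            "(\\{.*\\})\\s*\\[com\\.swirlds\\.logging\\.legacy\\.payload\\.StateSavedToDiskPayload\\]");

    /** Returns the raw JSON payload embedded in a log line, or null if absent. */
    static String extractJson(String logLine) {
        Matcher m = PAYLOAD.matcher(logLine);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String line = "... Finished writing state for round 2 to disk. ... "
                + "{\"round\":2,\"freezeState\":false,\"reason\":\"FIRST_ROUND_AFTER_GENESIS\"} "
                + "[com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]";
        System.out.println(extractJson(line)); // the JSON object, ready for any JSON parser
    }
}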
node4 20.221s 2025-11-05 05:46:43.551 109 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node0 20.222s 2025-11-05 05:46:43.552 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node4 20.224s 2025-11-05 05:46:43.554 110 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2
Timestamp: 2025-11-05T05:46:41.786240Z
Next consensus number: 12
Legacy running event hash: 68ffcdac671d4d5a78889440f60e099e7e5995eaf32bda0e2c98ccbc04b17c9e46243bddb1a3952feac85bc5d6ab85cd
Legacy running event mnemonic: cable-enhance-together-prize
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -799502542
Root hash: ebca762d0a3fe82a26243c78dbe66987f1a6c86b26742d2e446e7a82b2efac42ef3ff8840b89066d1c1e2fb7ddd5c2a8
(root) VirtualMap state / man-shield-elevator-width
node0 20.226s 2025-11-05 05:46:43.556 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2
Timestamp: 2025-11-05T05:46:41.786240Z
Next consensus number: 12
Legacy running event hash: 68ffcdac671d4d5a78889440f60e099e7e5995eaf32bda0e2c98ccbc04b17c9e46243bddb1a3952feac85bc5d6ab85cd
Legacy running event mnemonic: cable-enhance-together-prize
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -799502542
Root hash: ebca762d0a3fe82a26243c78dbe66987f1a6c86b26742d2e446e7a82b2efac42ef3ff8840b89066d1c1e2fb7ddd5c2a8
(root) VirtualMap state / man-shield-elevator-width
node2 20.229s 2025-11-05 05:46:43.559 111 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node2 20.232s 2025-11-05 05:46:43.562 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2
Timestamp: 2025-11-05T05:46:41.786240Z
Next consensus number: 12
Legacy running event hash: 68ffcdac671d4d5a78889440f60e099e7e5995eaf32bda0e2c98ccbc04b17c9e46243bddb1a3952feac85bc5d6ab85cd
Legacy running event mnemonic: cable-enhance-together-prize
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -799502542
Root hash: ebca762d0a3fe82a26243c78dbe66987f1a6c86b26742d2e446e7a82b2efac42ef3ff8840b89066d1c1e2fb7ddd5c2a8
(root) VirtualMap state / man-shield-elevator-width
node1 20.235s 2025-11-05 05:46:43.565 109 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 2
node1 20.238s 2025-11-05 05:46:43.568 110 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2
Timestamp: 2025-11-05T05:46:41.786240Z
Next consensus number: 12
Legacy running event hash: 68ffcdac671d4d5a78889440f60e099e7e5995eaf32bda0e2c98ccbc04b17c9e46243bddb1a3952feac85bc5d6ab85cd
Legacy running event mnemonic: cable-enhance-together-prize
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -799502542
Root hash: ebca762d0a3fe82a26243c78dbe66987f1a6c86b26742d2e446e7a82b2efac42ef3ff8840b89066d1c1e2fb7ddd5c2a8
(root) VirtualMap state / man-shield-elevator-width
node4 20.266s 2025-11-05 05:46:43.596 111 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+46+39.616014744Z_seq0_minr1_maxr501_orgn0.pces
node4 20.267s 2025-11-05 05:46:43.597 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1
File: data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+46+39.616014744Z_seq0_minr1_maxr501_orgn0.pces
node4 20.268s 2025-11-05 05:46:43.598 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 20.269s 2025-11-05 05:46:43.599 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+46+39.677276419Z_seq0_minr1_maxr501_orgn0.pces
node4 20.269s 2025-11-05 05:46:43.599 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 20.270s 2025-11-05 05:46:43.600 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1
File: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+46+39.677276419Z_seq0_minr1_maxr501_orgn0.pces
node0 20.270s 2025-11-05 05:46:43.600 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 20.271s 2025-11-05 05:46:43.601 117 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 20.273s 2025-11-05 05:46:43.603 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+46+39.468093364Z_seq0_minr1_maxr501_orgn0.pces
node1 20.274s 2025-11-05 05:46:43.604 111 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+46+39.483097484Z_seq0_minr1_maxr501_orgn0.pces
node1 20.274s 2025-11-05 05:46:43.604 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1
File: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+46+39.483097484Z_seq0_minr1_maxr501_orgn0.pces
node1 20.274s 2025-11-05 05:46:43.604 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 20.274s 2025-11-05 05:46:43.604 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1
File: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+46+39.468093364Z_seq0_minr1_maxr501_orgn0.pces
node2 20.274s 2025-11-05 05:46:43.604 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 20.275s 2025-11-05 05:46:43.605 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 20.275s 2025-11-05 05:46:43.605 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 20.275s 2025-11-05 05:46:43.605 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 20.278s 2025-11-05 05:46:43.608 118 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 20.281s 2025-11-05 05:46:43.611 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 20.281s 2025-11-05 05:46:43.611 117 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 21.145s 2025-11-05 05:46:44.475 128 INFO PLATFORM_STATUS <platformForkJoinThread-6> StatusStateMachine: Platform spent 1.5 s in CHECKING. Now in ACTIVE
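With node3's transition above, every node has now walked the same lifecycle traced by the PLATFORM_STATUS lines: STARTING_UP -> REPLAYING_EVENTS -> OBSERVING -> CHECKING -> ACTIVE. A sketch for extracting those transitions when analyzing a run offline (the regex is fitted to the exact message shape shown in this log; StatusTransitionParser is an illustrative name):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StatusTransitionParser {
    // Fitted to lines such as: "Platform spent 3.5 s in CHECKING. Now in ACTIVE"
    private static final Pattern TRANSITION =
            Pattern.compile("Platform spent ([0-9.]+) (ms|s) in (\\w+)\\. Now in (\\w+)");

    public static void main(String[] args) {
        String line = "Platform spent 3.5 s in CHECKING. Now in ACTIVE";
        Matcher m = TRANSITION.matcher(line);
        if (m.find()) {
            // Normalize the duration to milliseconds so transitions are comparable.
            double millis = Double.parseDouble(m.group(1)) * (m.group(2).equals("s") ? 1000 : 1);
            System.out.printf("%s -> %s after %.1f ms%n", m.group(3), m.group(4), millis);
        }
    }
}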
node0 37.828s 2025-11-05 05:47:01.158 491 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 39 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 37.852s 2025-11-05 05:47:01.182 496 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 39 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 37.916s 2025-11-05 05:47:01.246 492 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 39 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 37.950s 2025-11-05 05:47:01.280 496 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 39 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 37.970s 2025-11-05 05:47:01.300 492 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 39 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 38.150s 2025-11-05 05:47:01.480 499 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 39 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/39
node2 38.151s 2025-11-05 05:47:01.481 500 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 39
node4 38.156s 2025-11-05 05:47:01.486 495 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 39 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/39
node4 38.157s 2025-11-05 05:47:01.487 496 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 39
node0 38.186s 2025-11-05 05:47:01.516 494 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 39 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/39
node3 38.186s 2025-11-05 05:47:01.516 499 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 39 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/39
node0 38.187s 2025-11-05 05:47:01.517 495 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 39
node3 38.187s 2025-11-05 05:47:01.517 500 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 39
node1 38.220s 2025-11-05 05:47:01.550 495 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 39 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/39
node1 38.221s 2025-11-05 05:47:01.551 496 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 39
node2 38.234s 2025-11-05 05:47:01.564 539 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 39
node4 38.235s 2025-11-05 05:47:01.565 527 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 39
node2 38.236s 2025-11-05 05:47:01.566 540 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 39
Timestamp: 2025-11-05T05:47:00.307087Z
Next consensus number: 1453
Legacy running event hash: 5f67169c2a32e6a1dfc38c74cc5acd30c45177e8e814168a57efda41895383d4c91ac32e32df40c76efb43ca82089781
Legacy running event mnemonic: undo-squeeze-edge-cloud
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -2036993157
Root hash: 5df4e710a6d4eb58b064bb19f9e57919d8420700e1d76075b5a66b1f05d35206986e558579d3401d059c442a81abe38d
(root) VirtualMap state / salon-bulk-cereal-never
node4 38.237s 2025-11-05 05:47:01.567 528 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 39
Timestamp: 2025-11-05T05:47:00.307087Z
Next consensus number: 1453
Legacy running event hash: 5f67169c2a32e6a1dfc38c74cc5acd30c45177e8e814168a57efda41895383d4c91ac32e32df40c76efb43ca82089781
Legacy running event mnemonic: undo-squeeze-edge-cloud
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -2036993157
Root hash: 5df4e710a6d4eb58b064bb19f9e57919d8420700e1d76075b5a66b1f05d35206986e558579d3401d059c442a81abe38d
(root) VirtualMap state / salon-bulk-cereal-never
node2 38.245s 2025-11-05 05:47:01.575 541 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+46+39.468093364Z_seq0_minr1_maxr501_orgn0.pces
node4 38.245s 2025-11-05 05:47:01.575 529 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+46+39.616014744Z_seq0_minr1_maxr501_orgn0.pces
node2 38.246s 2025-11-05 05:47:01.576 542 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 12
File: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+46+39.468093364Z_seq0_minr1_maxr501_orgn0.pces
node2 38.246s 2025-11-05 05:47:01.576 543 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 38.246s 2025-11-05 05:47:01.576 530 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 12
File: data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+46+39.616014744Z_seq0_minr1_maxr501_orgn0.pces
node4 38.246s 2025-11-05 05:47:01.576 531 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 38.247s 2025-11-05 05:47:01.577 544 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 38.247s 2025-11-05 05:47:01.577 532 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 38.248s 2025-11-05 05:47:01.578 545 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 39 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/39 {"round":39,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/39/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 38.248s 2025-11-05 05:47:01.578 533 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 39 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/39 {"round":39,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/39/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 38.276s 2025-11-05 05:47:01.606 526 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 39
node0 38.279s 2025-11-05 05:47:01.609 527 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 39
Timestamp: 2025-11-05T05:47:00.307087Z
Next consensus number: 1453
Legacy running event hash: 5f67169c2a32e6a1dfc38c74cc5acd30c45177e8e814168a57efda41895383d4c91ac32e32df40c76efb43ca82089781
Legacy running event mnemonic: undo-squeeze-edge-cloud
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -2036993157
Root hash: 5df4e710a6d4eb58b064bb19f9e57919d8420700e1d76075b5a66b1f05d35206986e558579d3401d059c442a81abe38d
(root) VirtualMap state / salon-bulk-cereal-never
node3 38.279s 2025-11-05 05:47:01.609 531 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 39
node3 38.281s 2025-11-05 05:47:01.611 532 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 39
Timestamp: 2025-11-05T05:47:00.307087Z
Next consensus number: 1453
Legacy running event hash: 5f67169c2a32e6a1dfc38c74cc5acd30c45177e8e814168a57efda41895383d4c91ac32e32df40c76efb43ca82089781
Legacy running event mnemonic: undo-squeeze-edge-cloud
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -2036993157
Root hash: 5df4e710a6d4eb58b064bb19f9e57919d8420700e1d76075b5a66b1f05d35206986e558579d3401d059c442a81abe38d
(root) VirtualMap state / salon-bulk-cereal-never
node0 38.286s 2025-11-05 05:47:01.616 528 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+46+39.677276419Z_seq0_minr1_maxr501_orgn0.pces
node0 38.287s 2025-11-05 05:47:01.617 529 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 12
File: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+46+39.677276419Z_seq0_minr1_maxr501_orgn0.pces
node0 38.287s 2025-11-05 05:47:01.617 530 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 38.289s 2025-11-05 05:47:01.619 531 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 38.289s 2025-11-05 05:47:01.619 532 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 39 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/39 {"round":39,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/39/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 38.293s 2025-11-05 05:47:01.623 533 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+46+39.682298882Z_seq0_minr1_maxr501_orgn0.pces
node3 38.294s 2025-11-05 05:47:01.624 534 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 12
File: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+46+39.682298882Z_seq0_minr1_maxr501_orgn0.pces
node3 38.295s 2025-11-05 05:47:01.625 535 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 38.296s 2025-11-05 05:47:01.626 536 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 38.297s 2025-11-05 05:47:01.627 537 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 39 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/39 {"round":39,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/39/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 38.302s 2025-11-05 05:47:01.632 527 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 39
node1 38.304s 2025-11-05 05:47:01.634 528 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 39
Timestamp: 2025-11-05T05:47:00.307087Z
Next consensus number: 1453
Legacy running event hash: 5f67169c2a32e6a1dfc38c74cc5acd30c45177e8e814168a57efda41895383d4c91ac32e32df40c76efb43ca82089781
Legacy running event mnemonic: undo-squeeze-edge-cloud
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -2036993157
Root hash: 5df4e710a6d4eb58b064bb19f9e57919d8420700e1d76075b5a66b1f05d35206986e558579d3401d059c442a81abe38d
(root) VirtualMap state / salon-bulk-cereal-never
node1 38.312s 2025-11-05 05:47:01.642 529 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+46+39.483097484Z_seq0_minr1_maxr501_orgn0.pces
node1 38.313s 2025-11-05 05:47:01.643 530 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 12
File: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+46+39.483097484Z_seq0_minr1_maxr501_orgn0.pces
node1 38.313s 2025-11-05 05:47:01.643 531 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 38.315s 2025-11-05 05:47:01.645 532 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 38.315s 2025-11-05 05:47:01.645 533 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 39 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/39 {"round":39,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/39/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 1m 38.068s 2025-11-05 05:48:01.398 1975 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 169 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 1m 38.101s 2025-11-05 05:48:01.431 1956 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 169 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 1m 38.215s 2025-11-05 05:48:01.545 1974 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 169 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 1m 38.215s 2025-11-05 05:48:01.545 1944 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 169 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 1m 38.263s 2025-11-05 05:48:01.593 1962 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 169 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 1m 38.264s 2025-11-05 05:48:01.594 1977 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 169 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/169
node1 1m 38.264s 2025-11-05 05:48:01.594 1978 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 169
node2 1m 38.327s 2025-11-05 05:48:01.657 1965 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 169 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/169
node2 1m 38.328s 2025-11-05 05:48:01.658 1966 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 169
node1 1m 38.349s 2025-11-05 05:48:01.679 2009 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 169
node1 1m 38.351s 2025-11-05 05:48:01.681 2010 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 169
Timestamp: 2025-11-05T05:48:00.109755094Z
Next consensus number: 6249
Legacy running event hash: e61840b9dad8dda43789a40e2a9fefff2f9a945e4ee08a578c35d8085d8f9b5ceb713905639083c14078b431ce81d58f
Legacy running event mnemonic: dial-blame-pink-entire
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1889623320
Root hash: c3184df115ba100ea14d8ec18cb0c268e53524dbb833db100bfde56172e60a570e812cbb5f441a0cc354459d958abf36
(root) VirtualMap state / other-sail-resemble-achieve
node1 1m 38.358s 2025-11-05 05:48:01.688 2011 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+46+39.483097484Z_seq0_minr1_maxr501_orgn0.pces
node1 1m 38.358s 2025-11-05 05:48:01.688 2012 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 140
File: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+46+39.483097484Z_seq0_minr1_maxr501_orgn0.pces
node1 1m 38.358s 2025-11-05 05:48:01.688 2013 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 1m 38.362s 2025-11-05 05:48:01.692 2014 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 1m 38.363s 2025-11-05 05:48:01.693 2015 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 169 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/169 {"round":169,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/169/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 1m 38.410s 2025-11-05 05:48:01.740 1947 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 169 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/169
node4 1m 38.411s 2025-11-05 05:48:01.741 1948 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 169
node2 1m 38.417s 2025-11-05 05:48:01.747 2005 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 169
node2 1m 38.420s 2025-11-05 05:48:01.750 2006 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 169
Timestamp: 2025-11-05T05:48:00.109755094Z
Next consensus number: 6249
Legacy running event hash: e61840b9dad8dda43789a40e2a9fefff2f9a945e4ee08a578c35d8085d8f9b5ceb713905639083c14078b431ce81d58f
Legacy running event mnemonic: dial-blame-pink-entire
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1889623320
Root hash: c3184df115ba100ea14d8ec18cb0c268e53524dbb833db100bfde56172e60a570e812cbb5f441a0cc354459d958abf36
(root) VirtualMap state / other-sail-resemble-achieve
node2 1m 38.429s 2025-11-05 05:48:01.759 2007 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+46+39.468093364Z_seq0_minr1_maxr501_orgn0.pces
node2 1m 38.429s 2025-11-05 05:48:01.759 2008 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 140
File: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+46+39.468093364Z_seq0_minr1_maxr501_orgn0.pces
node2 1m 38.430s 2025-11-05 05:48:01.760 2009 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 1m 38.431s 2025-11-05 05:48:01.761 1978 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 169 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/169
node0 1m 38.432s 2025-11-05 05:48:01.762 1979 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 169
node2 1m 38.435s 2025-11-05 05:48:01.765 2010 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 1m 38.435s 2025-11-05 05:48:01.765 2011 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 169 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/169 {"round":169,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/169/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 1m 38.464s 2025-11-05 05:48:01.794 1959 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 169 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/169
node3 1m 38.466s 2025-11-05 05:48:01.796 1960 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 169
node4 1m 38.496s 2025-11-05 05:48:01.826 1979 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 169
node4 1m 38.498s 2025-11-05 05:48:01.828 1980 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 169
Timestamp: 2025-11-05T05:48:00.109755094Z
Next consensus number: 6249
Legacy running event hash: e61840b9dad8dda43789a40e2a9fefff2f9a945e4ee08a578c35d8085d8f9b5ceb713905639083c14078b431ce81d58f
Legacy running event mnemonic: dial-blame-pink-entire
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1889623320
Root hash: c3184df115ba100ea14d8ec18cb0c268e53524dbb833db100bfde56172e60a570e812cbb5f441a0cc354459d958abf36
(root) VirtualMap state / other-sail-resemble-achieve
node4 1m 38.506s 2025-11-05 05:48:01.836 1981 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+46+39.616014744Z_seq0_minr1_maxr501_orgn0.pces
node4 1m 38.506s 2025-11-05 05:48:01.836 1982 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 140
File: data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+46+39.616014744Z_seq0_minr1_maxr501_orgn0.pces
node4 1m 38.506s 2025-11-05 05:48:01.836 1983 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 1m 38.511s 2025-11-05 05:48:01.841 1984 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 1m 38.511s 2025-11-05 05:48:01.841 1985 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 169 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/169 {"round":169,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/169/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 1m 38.525s 2025-11-05 05:48:01.855 2013 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 169
node0 1m 38.527s 2025-11-05 05:48:01.857 2014 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 169
Timestamp: 2025-11-05T05:48:00.109755094Z
Next consensus number: 6249
Legacy running event hash: e61840b9dad8dda43789a40e2a9fefff2f9a945e4ee08a578c35d8085d8f9b5ceb713905639083c14078b431ce81d58f
Legacy running event mnemonic: dial-blame-pink-entire
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1889623320
Root hash: c3184df115ba100ea14d8ec18cb0c268e53524dbb833db100bfde56172e60a570e812cbb5f441a0cc354459d958abf36
(root) VirtualMap state / other-sail-resemble-achieve
node0 1m 38.535s 2025-11-05 05:48:01.865 2015 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+46+39.677276419Z_seq0_minr1_maxr501_orgn0.pces
node0 1m 38.535s 2025-11-05 05:48:01.865 2016 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 140
File: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+46+39.677276419Z_seq0_minr1_maxr501_orgn0.pces
node0 1m 38.535s 2025-11-05 05:48:01.865 2017 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 1m 38.540s 2025-11-05 05:48:01.870 2018 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 1m 38.540s 2025-11-05 05:48:01.870 2019 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 169 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/169 {"round":169,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/169/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 1m 38.558s 2025-11-05 05:48:01.888 1994 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 169
node3 1m 38.561s 2025-11-05 05:48:01.891 1995 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 169
Timestamp: 2025-11-05T05:48:00.109755094Z
Next consensus number: 6249
Legacy running event hash: e61840b9dad8dda43789a40e2a9fefff2f9a945e4ee08a578c35d8085d8f9b5ceb713905639083c14078b431ce81d58f
Legacy running event mnemonic: dial-blame-pink-entire
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1889623320
Root hash: c3184df115ba100ea14d8ec18cb0c268e53524dbb833db100bfde56172e60a570e812cbb5f441a0cc354459d958abf36
(root) VirtualMap state / other-sail-resemble-achieve
node3 1m 38.570s 2025-11-05 05:48:01.900 2006 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+46+39.682298882Z_seq0_minr1_maxr501_orgn0.pces
node3 1m 38.571s 2025-11-05 05:48:01.901 2007 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 140
File: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+46+39.682298882Z_seq0_minr1_maxr501_orgn0.pces
node3 1m 38.571s 2025-11-05 05:48:01.901 2008 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 1m 38.575s 2025-11-05 05:48:01.905 2009 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 1m 38.576s 2025-11-05 05:48:01.906 2010 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 169 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/169 {"round":169,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/169/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
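Every node that saved round 169 reports the identical root hash (c3184df1…, mnemonic other-sail-resemble-achieve), which is exactly the cross-node agreement this consistency test exercises. Below is a minimal sketch of checking that agreement over a whole log, assuming the merged log is available as swirlds.log and that the "Round:" and "Root hash:" fields appear on their own lines as above; the file name and the class name RootHashCheck are hypothetical, not platform APIs.

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.*;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: collect every root hash reported for each round and flag rounds
// where the nodes disagree. One distinct hash per round means the states match.
public class RootHashCheck {
    public static void main(String[] args) throws Exception {
        Pattern roundLine = Pattern.compile("^Round: (\\d+)");
        Pattern hashLine = Pattern.compile("^Root hash: ([0-9a-f]+)");
        Map<String, Set<String>> hashesByRound = new LinkedHashMap<>();
        String round = null;
        for (String line : Files.readAllLines(Path.of("swirlds.log"))) {
            Matcher r = roundLine.matcher(line);
            if (r.find()) { round = r.group(1); continue; }
            Matcher h = hashLine.matcher(line);
            if (h.find() && round != null) {
                hashesByRound.computeIfAbsent(round, k -> new HashSet<>()).add(h.group(1));
            }
        }
        hashesByRound.forEach((rnd, hashes) -> System.out.println(
                "round " + rnd + ": " + (hashes.size() == 1 ? "consistent" : "MISMATCH " + hashes)));
    }
}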
node1 2m 37.494s 2025-11-05 05:49:00.824 3407 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 298 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 2m 37.508s 2025-11-05 05:49:00.838 3425 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 298 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 2m 37.550s 2025-11-05 05:49:00.880 3389 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 298 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 2m 37.599s 2025-11-05 05:49:00.929 3375 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 298 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 2m 37.635s 2025-11-05 05:49:00.965 3428 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 298 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 2m 37.746s 2025-11-05 05:49:01.076 3410 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 298 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/298
node1 2m 37.747s 2025-11-05 05:49:01.077 3411 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 298
node3 2m 37.766s 2025-11-05 05:49:01.096 3402 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 298 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/298
node3 2m 37.767s 2025-11-05 05:49:01.097 3403 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 298
node0 2m 37.783s 2025-11-05 05:49:01.113 3431 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 298 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/298
node0 2m 37.784s 2025-11-05 05:49:01.114 3432 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 298
node1 2m 37.830s 2025-11-05 05:49:01.160 3450 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 298
node1 2m 37.832s 2025-11-05 05:49:01.162 3451 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 298
Timestamp: 2025-11-05T05:49:00.003853Z
Next consensus number: 11065
Legacy running event hash: 25b2c3fda5033d557d40d4b2c9a68927ac39b6a7d7fed78e84e8807f82d39ef191a7592673a44e6b0c1ea0908297bd3d
Legacy running event mnemonic: rent-lesson-garden-primary
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 33931784
Root hash: d1d3a16a03a1e1d4da9e2b281d998a805ab1df00d7637cc74a36f6fc92a1a357f369367289e6b6a8e0c8d291bf2d7266
(root) VirtualMap state / shield-spot-injury-sheriff
node2 2m 37.837s 2025-11-05 05:49:01.167 3428 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 298 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/298
node2 2m 37.838s 2025-11-05 05:49:01.168 3429 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 298
node1 2m 37.839s 2025-11-05 05:49:01.169 3452 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+46+39.483097484Z_seq0_minr1_maxr501_orgn0.pces
node1 2m 37.840s 2025-11-05 05:49:01.170 3453 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 270
File: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+46+39.483097484Z_seq0_minr1_maxr501_orgn0.pces
node1 2m 37.840s 2025-11-05 05:49:01.170 3454 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 2m 37.848s 2025-11-05 05:49:01.178 3455 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 2m 37.848s 2025-11-05 05:49:01.178 3456 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 298 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/298 {"round":298,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/298/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 2m 37.855s 2025-11-05 05:49:01.185 3434 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 298
node3 2m 37.857s 2025-11-05 05:49:01.187 3435 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 298
Timestamp: 2025-11-05T05:49:00.003853Z
Next consensus number: 11065
Legacy running event hash: 25b2c3fda5033d557d40d4b2c9a68927ac39b6a7d7fed78e84e8807f82d39ef191a7592673a44e6b0c1ea0908297bd3d
Legacy running event mnemonic: rent-lesson-garden-primary
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 33931784
Root hash: d1d3a16a03a1e1d4da9e2b281d998a805ab1df00d7637cc74a36f6fc92a1a357f369367289e6b6a8e0c8d291bf2d7266
(root) VirtualMap state / shield-spot-injury-sheriff
node3 2m 37.864s 2025-11-05 05:49:01.194 3436 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+46+39.682298882Z_seq0_minr1_maxr501_orgn0.pces
node3 2m 37.864s 2025-11-05 05:49:01.194 3437 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 270
File: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+46+39.682298882Z_seq0_minr1_maxr501_orgn0.pces
node3 2m 37.864s 2025-11-05 05:49:01.194 3438 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 2m 37.871s 2025-11-05 05:49:01.201 3463 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 298
node3 2m 37.872s 2025-11-05 05:49:01.202 3439 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 2m 37.873s 2025-11-05 05:49:01.203 3440 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 298 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/298 {"round":298,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/298/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 2m 37.874s 2025-11-05 05:49:01.204 3464 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 298
Timestamp: 2025-11-05T05:49:00.003853Z
Next consensus number: 11065
Legacy running event hash: 25b2c3fda5033d557d40d4b2c9a68927ac39b6a7d7fed78e84e8807f82d39ef191a7592673a44e6b0c1ea0908297bd3d
Legacy running event mnemonic: rent-lesson-garden-primary
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 33931784
Root hash: d1d3a16a03a1e1d4da9e2b281d998a805ab1df00d7637cc74a36f6fc92a1a357f369367289e6b6a8e0c8d291bf2d7266
(root) VirtualMap state / shield-spot-injury-sheriff
node0 2m 37.881s 2025-11-05 05:49:01.211 3465 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+46+39.677276419Z_seq0_minr1_maxr501_orgn0.pces
node0 2m 37.881s 2025-11-05 05:49:01.211 3466 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 270
File: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+46+39.677276419Z_seq0_minr1_maxr501_orgn0.pces
node0 2m 37.881s 2025-11-05 05:49:01.211 3467 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 2m 37.889s 2025-11-05 05:49:01.219 3468 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 2m 37.890s 2025-11-05 05:49:01.220 3469 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 298 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/298 {"round":298,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/298/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 2m 37.895s 2025-11-05 05:49:01.225 3378 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 298 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/298
node4 2m 37.896s 2025-11-05 05:49:01.226 3379 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 298
node2 2m 37.924s 2025-11-05 05:49:01.254 3460 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 298
node2 2m 37.927s 2025-11-05 05:49:01.257 3461 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 298
Timestamp: 2025-11-05T05:49:00.003853Z
Next consensus number: 11065
Legacy running event hash: 25b2c3fda5033d557d40d4b2c9a68927ac39b6a7d7fed78e84e8807f82d39ef191a7592673a44e6b0c1ea0908297bd3d
Legacy running event mnemonic: rent-lesson-garden-primary
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 33931784
Root hash: d1d3a16a03a1e1d4da9e2b281d998a805ab1df00d7637cc74a36f6fc92a1a357f369367289e6b6a8e0c8d291bf2d7266
(root) VirtualMap state / shield-spot-injury-sheriff
node2 2m 37.934s 2025-11-05 05:49:01.264 3462 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+46+39.468093364Z_seq0_minr1_maxr501_orgn0.pces
node2 2m 37.934s 2025-11-05 05:49:01.264 3463 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 270
File: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+46+39.468093364Z_seq0_minr1_maxr501_orgn0.pces
node2 2m 37.935s 2025-11-05 05:49:01.265 3464 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 2m 37.942s 2025-11-05 05:49:01.272 3465 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 2m 37.943s 2025-11-05 05:49:01.273 3466 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 298 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/298 {"round":298,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/298/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 2m 37.973s 2025-11-05 05:49:01.303 3418 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 298
node4 2m 37.975s 2025-11-05 05:49:01.305 3419 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 298
Timestamp: 2025-11-05T05:49:00.003853Z
Next consensus number: 11065
Legacy running event hash: 25b2c3fda5033d557d40d4b2c9a68927ac39b6a7d7fed78e84e8807f82d39ef191a7592673a44e6b0c1ea0908297bd3d
Legacy running event mnemonic: rent-lesson-garden-primary
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 33931784
Root hash: d1d3a16a03a1e1d4da9e2b281d998a805ab1df00d7637cc74a36f6fc92a1a357f369367289e6b6a8e0c8d291bf2d7266
(root) VirtualMap state / shield-spot-injury-sheriff
node4 2m 37.983s 2025-11-05 05:49:01.313 3420 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+46+39.616014744Z_seq0_minr1_maxr501_orgn0.pces
node4 2m 37.983s 2025-11-05 05:49:01.313 3421 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 270
File: data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+46+39.616014744Z_seq0_minr1_maxr501_orgn0.pces
node4 2m 37.983s 2025-11-05 05:49:01.313 3422 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 2m 37.991s 2025-11-05 05:49:01.321 3423 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 2m 37.992s 2025-11-05 05:49:01.322 3424 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 298 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/298 {"round":298,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/298/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
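For both snapshots so far, the spread between a node's "Started writing" and "Finished writing" entries is roughly 100 ms. Below is a minimal sketch of computing that per-node, per-round latency from a merged log, under the same layout assumptions as before; the file name swirlds.log and the class name SnapshotLatency are hypothetical.

import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: pair each "Started writing round N" entry with the matching
// "Finished writing state for round N" entry from the same node and print
// the elapsed wall-clock time.
public class SnapshotLatency {
    private static final DateTimeFormatter TS = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");

    public static void main(String[] args) throws Exception {
        Pattern entry = Pattern.compile(
                "^(node\\d+) .*? (\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3}) .* "
                        + "(Started writing|Finished writing state for) round (\\d+)");
        Map<String, LocalDateTime> started = new HashMap<>();
        for (String line : Files.readAllLines(Path.of("swirlds.log"))) {
            Matcher m = entry.matcher(line);
            if (!m.find()) continue;
            String key = m.group(1) + "/round " + m.group(4);
            LocalDateTime ts = LocalDateTime.parse(m.group(2), TS);
            if (m.group(3).startsWith("Started")) {
                started.put(key, ts);
            } else if (started.containsKey(key)) {
                System.out.println(key + ": "
                        + Duration.between(started.remove(key), ts).toMillis() + " ms");
            }
        }
    }
}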
node1 3m 14.671s 2025-11-05 05:49:38.001 4382 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 1 to 4>> NetworkUtils: Connection broken: 1 -> 4
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-11-05T05:49:37.999137705Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:388)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:432)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
node2 3m 14.672s 2025-11-05 05:49:38.002 4452 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 2 to 4>> NetworkUtils: Connection broken: 2 -> 4
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-11-05T05:49:37.999344256Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:388)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:432)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
node0 3m 14.674s 2025-11-05 05:49:38.004 4405 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 0 to 4>> NetworkUtils: Connection broken: 0 -> 4
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-11-05T05:49:37.999653407Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:388)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:432)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
node3 3m 14.674s 2025-11-05 05:49:38.004 4366 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 3 to 4>> NetworkUtils: Connection broken: 3 -> 4
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-11-05T05:49:38.000079039Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:388)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:432)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
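All four surviving peers report "Connection broken: N -> 4" within a few milliseconds of each other, and every chain above bottoms out in java.net.SocketException: Connection reset on the read path (with a suppressed "Connection or outbound has closed" from the concurrent write path), which points at node 4 going away rather than at a local fault. When triaging bursts like this, the deepest cause is what matters; here is a small self-contained helper in the spirit of how these traces were read (RootCause is a hypothetical name, not a platform class):

// Sketch: walk a Throwable's cause chain to the root, the way the traces
// above bottom out in "Connection reset".
public final class RootCause {
    static Throwable rootCause(Throwable t) {
        Throwable cur = t;
        while (cur.getCause() != null && cur.getCause() != cur) {
            cur = cur.getCause();
        }
        return cur;
    }

    public static void main(String[] args) {
        // Stand-ins mirroring the nesting above: wrapper -> ExecutionException -> SocketException.
        Exception reset = new java.net.SocketException("Connection reset");
        Exception wrapped = new java.util.concurrent.ExecutionException(reset);
        Exception top = new RuntimeException("ParallelExecutionException stand-in", wrapped);
        System.out.println(rootCause(top)); // java.net.SocketException: Connection reset
    }
}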
node2 3m 37.758s 2025-11-05 05:50:01.088 5123 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 437 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 3m 37.833s 2025-11-05 05:50:01.163 4983 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 437 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 3m 37.867s 2025-11-05 05:50:01.197 5060 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 437 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 3m 37.900s 2025-11-05 05:50:01.230 4959 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 437 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 3m 38.037s 2025-11-05 05:50:01.367 4962 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 437 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/437
node3 3m 38.038s 2025-11-05 05:50:01.368 4963 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 437
node1 3m 38.108s 2025-11-05 05:50:01.438 4986 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 437 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/437
node1 3m 38.109s 2025-11-05 05:50:01.439 4987 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 437
node3 3m 38.120s 2025-11-05 05:50:01.450 4994 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 437
node3 3m 38.122s 2025-11-05 05:50:01.452 4995 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 437
Timestamp: 2025-11-05T05:50:00.259215837Z
Next consensus number: 15355
Legacy running event hash: 0d558c62bd4978d60dc6f140bad97756968beb7ecc1288d931da86afc952cf91fad3bab6b38c670c54fec6fc1da89975
Legacy running event mnemonic: primary-inherit-finger-bracket
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -38264314
Root hash: 49bdfe3150105301236c26c07d7c13aa98eb108422a4b0a144a156720e472beb64d42e1ee8506ab6bf2b939e41072d61
(root) VirtualMap state / dash-delay-acoustic-gasp
node3 3m 38.129s 2025-11-05 05:50:01.459 4996 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+46+39.682298882Z_seq0_minr1_maxr501_orgn0.pces
node3 3m 38.129s 2025-11-05 05:50:01.459 4997 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 410
File: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+46+39.682298882Z_seq0_minr1_maxr501_orgn0.pces
node3 3m 38.129s 2025-11-05 05:50:01.459 4998 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 3m 38.140s 2025-11-05 05:50:01.470 4999 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 3m 38.140s 2025-11-05 05:50:01.470 5000 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 437 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/437 {"round":437,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/437/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 3m 38.152s 2025-11-05 05:50:01.482 5136 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 437 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/437
node2 3m 38.153s 2025-11-05 05:50:01.483 5137 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 437
node1 3m 38.185s 2025-11-05 05:50:01.515 5018 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 437
node1 3m 38.186s 2025-11-05 05:50:01.516 5019 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 437
Timestamp: 2025-11-05T05:50:00.259215837Z
Next consensus number: 15355
Legacy running event hash: 0d558c62bd4978d60dc6f140bad97756968beb7ecc1288d931da86afc952cf91fad3bab6b38c670c54fec6fc1da89975
Legacy running event mnemonic: primary-inherit-finger-bracket
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -38264314
Root hash: 49bdfe3150105301236c26c07d7c13aa98eb108422a4b0a144a156720e472beb64d42e1ee8506ab6bf2b939e41072d61
(root) VirtualMap state / dash-delay-acoustic-gasp
node1 3m 38.192s 2025-11-05 05:50:01.522 5020 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+46+39.483097484Z_seq0_minr1_maxr501_orgn0.pces
node1 3m 38.192s 2025-11-05 05:50:01.522 5021 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 410 File: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+46+39.483097484Z_seq0_minr1_maxr501_orgn0.pces
node1 3m 38.193s 2025-11-05 05:50:01.523 5022 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 3m 38.203s 2025-11-05 05:50:01.533 5023 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 3m 38.203s 2025-11-05 05:50:01.533 5024 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 437 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/437 {"round":437,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/437/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 3m 38.226s 2025-11-05 05:50:01.556 5073 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 437 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/437
node0 3m 38.226s 2025-11-05 05:50:01.556 5074 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 437
node2 3m 38.233s 2025-11-05 05:50:01.563 5181 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 437
node2 3m 38.234s 2025-11-05 05:50:01.564 5182 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 437
Timestamp: 2025-11-05T05:50:00.259215837Z
Next consensus number: 15355
Legacy running event hash: 0d558c62bd4978d60dc6f140bad97756968beb7ecc1288d931da86afc952cf91fad3bab6b38c670c54fec6fc1da89975
Legacy running event mnemonic: primary-inherit-finger-bracket
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -38264314
Root hash: 49bdfe3150105301236c26c07d7c13aa98eb108422a4b0a144a156720e472beb64d42e1ee8506ab6bf2b939e41072d61
(root) VirtualMap state / dash-delay-acoustic-gasp
node2 3m 38.242s 2025-11-05 05:50:01.572 5183 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+46+39.468093364Z_seq0_minr1_maxr501_orgn0.pces
node2 3m 38.243s 2025-11-05 05:50:01.573 5184 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 410 File: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+46+39.468093364Z_seq0_minr1_maxr501_orgn0.pces
node2 3m 38.243s 2025-11-05 05:50:01.573 5185 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 3m 38.254s 2025-11-05 05:50:01.584 5186 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 3m 38.254s 2025-11-05 05:50:01.584 5187 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 437 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/437 {"round":437,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/437/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 3m 38.309s 2025-11-05 05:50:01.639 5108 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 437
node0 3m 38.311s 2025-11-05 05:50:01.641 5109 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 437
Timestamp: 2025-11-05T05:50:00.259215837Z
Next consensus number: 15355
Legacy running event hash: 0d558c62bd4978d60dc6f140bad97756968beb7ecc1288d931da86afc952cf91fad3bab6b38c670c54fec6fc1da89975
Legacy running event mnemonic: primary-inherit-finger-bracket
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -38264314
Root hash: 49bdfe3150105301236c26c07d7c13aa98eb108422a4b0a144a156720e472beb64d42e1ee8506ab6bf2b939e41072d61
(root) VirtualMap state / dash-delay-acoustic-gasp
node0 3m 38.318s 2025-11-05 05:50:01.648 5110 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+46+39.677276419Z_seq0_minr1_maxr501_orgn0.pces
node0 3m 38.318s 2025-11-05 05:50:01.648 5111 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 410 File: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+46+39.677276419Z_seq0_minr1_maxr501_orgn0.pces
node0 3m 38.318s 2025-11-05 05:50:01.648 5112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 3m 38.329s 2025-11-05 05:50:01.659 5113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 3m 38.330s 2025-11-05 05:50:01.660 5114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 437 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/437 {"round":437,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/437/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 4m 37.636s 2025-11-05 05:51:00.966 6750 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 575 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 4m 37.650s 2025-11-05 05:51:00.980 6541 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 575 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 4m 37.678s 2025-11-05 05:51:01.008 6877 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 575 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 4m 37.724s 2025-11-05 05:51:01.054 6595 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 575 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 4m 37.848s 2025-11-05 05:51:01.178 6598 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 575 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/575
node1 4m 37.849s 2025-11-05 05:51:01.179 6599 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 575
node0 4m 37.908s 2025-11-05 05:51:01.238 6753 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 575 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/575
node0 4m 37.908s 2025-11-05 05:51:01.238 6754 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 575
node2 4m 37.918s 2025-11-05 05:51:01.248 6880 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 575 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/575
node2 4m 37.919s 2025-11-05 05:51:01.249 6881 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 575
node1 4m 37.926s 2025-11-05 05:51:01.256 6630 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 575
node1 4m 37.928s 2025-11-05 05:51:01.258 6631 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 575
Timestamp: 2025-11-05T05:51:00.090855222Z
Next consensus number: 18635
Legacy running event hash: a7a08fcf165ab52ab96b8f2b7d78595085a3523a2a1b33c4d7dc5cf609cfbc93a3cdc962d7b345f385421d5cdd3fff1f
Legacy running event mnemonic: army-alert-slender-food
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -2141878182
Root hash: 4f9c8e3ec97627679dd02a62d6eb3b4e2ba1baa13eafd9fc0ebb709c852fe64c8766ac8ab342e369aeaf7bc4b9f4ebf6
(root) VirtualMap state / vicious-debate-spend-undo
node1 4m 37.935s 2025-11-05 05:51:01.265 6632 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+46+39.483097484Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+50+29.003908476Z_seq1_minr474_maxr5474_orgn0.pces
node1 4m 37.935s 2025-11-05 05:51:01.265 6633 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 548 File: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+50+29.003908476Z_seq1_minr474_maxr5474_orgn0.pces
node1 4m 37.935s 2025-11-05 05:51:01.265 6634 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 4m 37.937s 2025-11-05 05:51:01.267 6635 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 4m 37.937s 2025-11-05 05:51:01.267 6636 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 575 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/575 {"round":575,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/575/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 4m 37.939s 2025-11-05 05:51:01.269 6637 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2
node3 4m 37.958s 2025-11-05 05:51:01.288 6544 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 575 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/575
node3 4m 37.959s 2025-11-05 05:51:01.289 6545 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 575
node0 4m 37.989s 2025-11-05 05:51:01.319 6793 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 575
node0 4m 37.991s 2025-11-05 05:51:01.321 6794 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 575
Timestamp: 2025-11-05T05:51:00.090855222Z
Next consensus number: 18635
Legacy running event hash: a7a08fcf165ab52ab96b8f2b7d78595085a3523a2a1b33c4d7dc5cf609cfbc93a3cdc962d7b345f385421d5cdd3fff1f
Legacy running event mnemonic: army-alert-slender-food
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -2141878182
Root hash: 4f9c8e3ec97627679dd02a62d6eb3b4e2ba1baa13eafd9fc0ebb709c852fe64c8766ac8ab342e369aeaf7bc4b9f4ebf6
(root) VirtualMap state / vicious-debate-spend-undo
node0 4m 37.997s 2025-11-05 05:51:01.327 6795 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+46+39.677276419Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+50+29.022108715Z_seq1_minr474_maxr5474_orgn0.pces
node0 4m 37.997s 2025-11-05 05:51:01.327 6796 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 548 File: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+50+29.022108715Z_seq1_minr474_maxr5474_orgn0.pces
node0 4m 37.997s 2025-11-05 05:51:01.327 6797 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 4m 37.999s 2025-11-05 05:51:01.329 6798 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 4m 37.999s 2025-11-05 05:51:01.329 6799 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 575 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/575 {"round":575,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/575/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 4m 38.001s 2025-11-05 05:51:01.331 6800 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2
node2 4m 38.008s 2025-11-05 05:51:01.338 6920 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 575
node2 4m 38.009s 2025-11-05 05:51:01.339 6921 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 575
Timestamp: 2025-11-05T05:51:00.090855222Z
Next consensus number: 18635
Legacy running event hash: a7a08fcf165ab52ab96b8f2b7d78595085a3523a2a1b33c4d7dc5cf609cfbc93a3cdc962d7b345f385421d5cdd3fff1f
Legacy running event mnemonic: army-alert-slender-food
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -2141878182
Root hash: 4f9c8e3ec97627679dd02a62d6eb3b4e2ba1baa13eafd9fc0ebb709c852fe64c8766ac8ab342e369aeaf7bc4b9f4ebf6
(root) VirtualMap state / vicious-debate-spend-undo
node2 4m 38.019s 2025-11-05 05:51:01.349 6922 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+46+39.468093364Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+50+28.937540264Z_seq1_minr474_maxr5474_orgn0.pces
node2 4m 38.019s 2025-11-05 05:51:01.349 6923 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 548 File: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+50+28.937540264Z_seq1_minr474_maxr5474_orgn0.pces
node2 4m 38.019s 2025-11-05 05:51:01.349 6924 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 4m 38.020s 2025-11-05 05:51:01.350 6925 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 4m 38.021s 2025-11-05 05:51:01.351 6926 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 575 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/575 {"round":575,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/575/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 4m 38.023s 2025-11-05 05:51:01.353 6927 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2
node3 4m 38.039s 2025-11-05 05:51:01.369 6587 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 575
node3 4m 38.042s 2025-11-05 05:51:01.372 6588 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 575
Timestamp: 2025-11-05T05:51:00.090855222Z
Next consensus number: 18635
Legacy running event hash: a7a08fcf165ab52ab96b8f2b7d78595085a3523a2a1b33c4d7dc5cf609cfbc93a3cdc962d7b345f385421d5cdd3fff1f
Legacy running event mnemonic: army-alert-slender-food
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -2141878182
Root hash: 4f9c8e3ec97627679dd02a62d6eb3b4e2ba1baa13eafd9fc0ebb709c852fe64c8766ac8ab342e369aeaf7bc4b9f4ebf6
(root) VirtualMap state / vicious-debate-spend-undo
node3 4m 38.047s 2025-11-05 05:51:01.377 6589 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+46+39.682298882Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+50+29.071967607Z_seq1_minr474_maxr5474_orgn0.pces
node3 4m 38.047s 2025-11-05 05:51:01.377 6590 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 548 File: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+50+29.071967607Z_seq1_minr474_maxr5474_orgn0.pces
node3 4m 38.047s 2025-11-05 05:51:01.377 6591 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 4m 38.049s 2025-11-05 05:51:01.379 6592 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 4m 38.049s 2025-11-05 05:51:01.379 6593 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 575 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/575 {"round":575,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/575/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 4m 38.051s 2025-11-05 05:51:01.381 6594 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2
node0 5m 37.707s 2025-11-05 05:52:01.037 8458 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 722 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 5m 37.714s 2025-11-05 05:52:01.044 8329 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 722 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 5m 37.788s 2025-11-05 05:52:01.118 8621 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 722 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 5m 37.790s 2025-11-05 05:52:01.120 8263 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 722 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 5m 37.907s 2025-11-05 05:52:01.237 8266 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 722 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/722
node3 5m 37.907s 2025-11-05 05:52:01.237 8267 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 722
node2 5m 37.941s 2025-11-05 05:52:01.271 8624 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 722 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/722
node2 5m 37.942s 2025-11-05 05:52:01.272 8625 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 722
node3 5m 37.994s 2025-11-05 05:52:01.324 8306 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 722
node3 5m 37.996s 2025-11-05 05:52:01.326 8307 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 722
Timestamp: 2025-11-05T05:52:00.239129242Z
Next consensus number: 21922
Legacy running event hash: f3711d42bd46d115a846f899a537e197e281c70bb4cf91afe0c09f4de6695a443465271dbb8a2edbb3ce5d651ba2bea3
Legacy running event mnemonic: music-portion-firm-talent
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -844858512
Root hash: a8e18140cdfcb721d4ace648c00287ff03511bfd0790ed479834e948096887fbb17a1d8048bd7e34e5407bc6389d0249
(root) VirtualMap state / token-submit-dog-brief
node3 5m 38.003s 2025-11-05 05:52:01.333 8308 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+46+39.682298882Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+50+29.071967607Z_seq1_minr474_maxr5474_orgn0.pces
node3 5m 38.004s 2025-11-05 05:52:01.334 8309 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 695 File: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+50+29.071967607Z_seq1_minr474_maxr5474_orgn0.pces
node3 5m 38.004s 2025-11-05 05:52:01.334 8310 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 5m 38.008s 2025-11-05 05:52:01.338 8311 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 5m 38.009s 2025-11-05 05:52:01.339 8312 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 722 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/722 {"round":722,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/722/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 5m 38.010s 2025-11-05 05:52:01.340 8313 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/39
node0 5m 38.012s 2025-11-05 05:52:01.342 8471 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 722 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/722
node0 5m 38.013s 2025-11-05 05:52:01.343 8472 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 722
node2 5m 38.019s 2025-11-05 05:52:01.349 8664 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 722
node2 5m 38.021s 2025-11-05 05:52:01.351 8665 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 722
Timestamp: 2025-11-05T05:52:00.239129242Z
Next consensus number: 21922
Legacy running event hash: f3711d42bd46d115a846f899a537e197e281c70bb4cf91afe0c09f4de6695a443465271dbb8a2edbb3ce5d651ba2bea3
Legacy running event mnemonic: music-portion-firm-talent
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -844858512
Root hash: a8e18140cdfcb721d4ace648c00287ff03511bfd0790ed479834e948096887fbb17a1d8048bd7e34e5407bc6389d0249
(root) VirtualMap state / token-submit-dog-brief
node2 5m 38.029s 2025-11-05 05:52:01.359 8666 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+46+39.468093364Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+50+28.937540264Z_seq1_minr474_maxr5474_orgn0.pces
node2 5m 38.030s 2025-11-05 05:52:01.360 8667 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 695 File: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+50+28.937540264Z_seq1_minr474_maxr5474_orgn0.pces
node2 5m 38.030s 2025-11-05 05:52:01.360 8668 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 5m 38.034s 2025-11-05 05:52:01.364 8669 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 5m 38.035s 2025-11-05 05:52:01.365 8670 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 722 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/722 {"round":722,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/722/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 5m 38.036s 2025-11-05 05:52:01.366 8671 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/39
node1 5m 38.079s 2025-11-05 05:52:01.409 8342 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 722 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/722
node1 5m 38.080s 2025-11-05 05:52:01.410 8343 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 722
node0 5m 38.097s 2025-11-05 05:52:01.427 8506 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 722
node0 5m 38.099s 2025-11-05 05:52:01.429 8507 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 722
Timestamp: 2025-11-05T05:52:00.239129242Z
Next consensus number: 21922
Legacy running event hash: f3711d42bd46d115a846f899a537e197e281c70bb4cf91afe0c09f4de6695a443465271dbb8a2edbb3ce5d651ba2bea3
Legacy running event mnemonic: music-portion-firm-talent
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -844858512
Root hash: a8e18140cdfcb721d4ace648c00287ff03511bfd0790ed479834e948096887fbb17a1d8048bd7e34e5407bc6389d0249
(root) VirtualMap state / token-submit-dog-brief
node0 5m 38.107s 2025-11-05 05:52:01.437 8508 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+46+39.677276419Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+50+29.022108715Z_seq1_minr474_maxr5474_orgn0.pces
node0 5m 38.108s 2025-11-05 05:52:01.438 8509 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 695 File: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+50+29.022108715Z_seq1_minr474_maxr5474_orgn0.pces
node0 5m 38.108s 2025-11-05 05:52:01.438 8510 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 5m 38.112s 2025-11-05 05:52:01.442 8511 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 5m 38.112s 2025-11-05 05:52:01.442 8512 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 722 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/722 {"round":722,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/722/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 5m 38.114s 2025-11-05 05:52:01.444 8513 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/39
node1 5m 38.160s 2025-11-05 05:52:01.490 8377 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 722
node1 5m 38.162s 2025-11-05 05:52:01.492 8378 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 722
Timestamp: 2025-11-05T05:52:00.239129242Z
Next consensus number: 21922
Legacy running event hash: f3711d42bd46d115a846f899a537e197e281c70bb4cf91afe0c09f4de6695a443465271dbb8a2edbb3ce5d651ba2bea3
Legacy running event mnemonic: music-portion-firm-talent
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -844858512
Root hash: a8e18140cdfcb721d4ace648c00287ff03511bfd0790ed479834e948096887fbb17a1d8048bd7e34e5407bc6389d0249
(root) VirtualMap state / token-submit-dog-brief
node1 5m 38.169s 2025-11-05 05:52:01.499 8379 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+46+39.483097484Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+50+29.003908476Z_seq1_minr474_maxr5474_orgn0.pces
node1 5m 38.170s 2025-11-05 05:52:01.500 8380 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 695 File: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+50+29.003908476Z_seq1_minr474_maxr5474_orgn0.pces
node1 5m 38.170s 2025-11-05 05:52:01.500 8381 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 5m 38.174s 2025-11-05 05:52:01.504 8382 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 5m 38.174s 2025-11-05 05:52:01.504 8383 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 722 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/722 {"round":722,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/722/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 5m 38.176s 2025-11-05 05:52:01.506 8384 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/39
node4 5m 55.377s 2025-11-05 05:52:18.707 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node4 5m 55.463s 2025-11-05 05:52:18.793 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node4 5m 55.478s 2025-11-05 05:52:18.808 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 5m 55.586s 2025-11-05 05:52:18.916 4 INFO STARTUP <main> Browser: The following nodes [4] are set to run locally
node4 5m 55.615s 2025-11-05 05:52:18.945 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node4 5m 56.782s 2025-11-05 05:52:20.112 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1167ms
node4 5m 56.791s 2025-11-05 05:52:20.121 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node4 5m 56.794s 2025-11-05 05:52:20.124 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 5m 56.830s 2025-11-05 05:52:20.160 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listening on port: 9999
node4 5m 56.891s 2025-11-05 05:52:20.221 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node4 5m 56.892s 2025-11-05 05:52:20.222 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node4 5m 58.915s 2025-11-05 05:52:22.245 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node4 5m 59.002s 2025-11-05 05:52:22.332 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 59.008s 2025-11-05 05:52:22.338 16 INFO STARTUP <main> StartupStateUtils: The following saved states were found on disk:
- /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/298/SignedState.swh
- /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/169/SignedState.swh
- /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/39/SignedState.swh
- /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2/SignedState.swh
node4 5m 59.008s 2025-11-05 05:52:22.338 17 INFO STARTUP <main> StartupStateUtils: Loading latest state from disk.
node4 5m 59.009s 2025-11-05 05:52:22.339 18 INFO STARTUP <main> StartupStateUtils: Loading signed state from disk: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/298/SignedState.swh
node4 5m 59.017s 2025-11-05 05:52:22.347 19 INFO STATE_TO_DISK <main> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp
node4 5m 59.144s 2025-11-05 05:52:22.474 29 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 5m 59.909s 2025-11-05 05:52:23.239 31 INFO STARTUP <main> StartupStateUtils: Loaded state's hash is the same as when it was saved.
node4 5m 59.914s 2025-11-05 05:52:23.244 32 INFO STARTUP <main> StartupStateUtils: Platform has loaded a saved state {"round":298,"consensusTimestamp":"2025-11-05T05:49:00.003853Z"} [com.swirlds.logging.legacy.payload.SavedStateLoadedPayload]
node4 5m 59.920s 2025-11-05 05:52:23.250 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 59.921s 2025-11-05 05:52:23.251 38 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node4 5m 59.926s 2025-11-05 05:52:23.256 39 INFO STARTUP <main> AddressBookInitializer: Using the loaded state's address book and weight values.
node4 5m 59.934s 2025-11-05 05:52:23.264 40 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 59.937s 2025-11-05 05:52:23.267 41 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 6m 1.014s 2025-11-05 05:52:24.344 42 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26285388]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=147470, randomLong=4793652961190756739, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=74590, randomLong=8357312465388111392, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1096830, data=35, exception=null]
OS Health Check Report - Complete (took 1022 ms)
node4 6m 1.046s 2025-11-05 05:52:24.376 43 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node4 6m 1.176s 2025-11-05 05:52:24.506 44 INFO STARTUP <main> PcesUtilities: Span compaction completed for data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+46+39.616014744Z_seq0_minr1_maxr501_orgn0.pces, new upper bound is 383
node4 6m 1.179s 2025-11-05 05:52:24.509 45 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node4 6m 1.180s 2025-11-05 05:52:24.510 46 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node4 6m 1.262s 2025-11-05 05:52:24.592 47 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "I7jaQQ==", "port": 30124 }, { "ipAddressV4": "CoAAew==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "Ij8KNA==", "port": 30125 }, { "ipAddressV4": "CoAAfQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "Iixvyw==", "port": 30126 }, { "ipAddressV4": "CoAAfA==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "iHcLGg==", "port": 30127 }, { "ipAddressV4": "CoAAfg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "IodvrA==", "port": 30128 }, { "ipAddressV4": "CoAAeg==", "port": 30128 }] }] }
node4 6m 1.287s 2025-11-05 05:52:24.617 48 INFO STARTUP <main> ConsistencyTestingToolState: State initialized with state long 6564806469424552941.
node4 6m 1.288s 2025-11-05 05:52:24.618 49 INFO STARTUP <main> ConsistencyTestingToolState: State initialized with 297 rounds handled.
node4 6m 1.288s 2025-11-05 05:52:24.618 50 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 6m 1.289s 2025-11-05 05:52:24.619 51 INFO STARTUP <main> TransactionHandlingHistory: Log file found. Parsing previous history
node4 6m 1.327s 2025-11-05 05:52:24.657 52 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 298
Timestamp: 2025-11-05T05:49:00.003853Z
Next consensus number: 11065
Legacy running event hash: 25b2c3fda5033d557d40d4b2c9a68927ac39b6a7d7fed78e84e8807f82d39ef191a7592673a44e6b0c1ea0908297bd3d
Legacy running event mnemonic: rent-lesson-garden-primary
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 33931784
Root hash: d1d3a16a03a1e1d4da9e2b281d998a805ab1df00d7637cc74a36f6fc92a1a357f369367289e6b6a8e0c8d291bf2d7266
(root) VirtualMap state / shield-spot-injury-sheriff
node4 6m 1.332s 2025-11-05 05:52:24.662 54 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node4 6m 1.518s 2025-11-05 05:52:24.848 55 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 25b2c3fda5033d557d40d4b2c9a68927ac39b6a7d7fed78e84e8807f82d39ef191a7592673a44e6b0c1ea0908297bd3d
node4 6m 1.527s 2025-11-05 05:52:24.857 56 INFO STARTUP <platformForkJoinThread-4> Shadowgraph: Shadowgraph starting from expiration threshold 270
node4 6m 1.532s 2025-11-05 05:52:24.862 58 INFO STARTUP <<start-node-4>> ConsistencyTestingToolMain: init called in Main for node 4.
node4 6m 1.533s 2025-11-05 05:52:24.863 59 INFO STARTUP <<start-node-4>> SwirldsPlatform: Starting platform 4
node4 6m 1.534s 2025-11-05 05:52:24.864 60 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node4 6m 1.537s 2025-11-05 05:52:24.867 61 INFO STARTUP <<start-node-4>> CycleFinder: No cyclical back pressure detected in wiring model.
node4 6m 1.538s 2025-11-05 05:52:24.868 62 INFO STARTUP <<start-node-4>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node4 6m 1.539s 2025-11-05 05:52:24.869 63 INFO STARTUP <<start-node-4>> InputWireChecks: All input wires have been bound.
node4 6m 1.540s 2025-11-05 05:52:24.870 64 INFO STARTUP <<start-node-4>> SwirldsPlatform: replaying preconsensus event stream starting at 270
node4 6m 1.546s 2025-11-05 05:52:24.876 65 INFO PLATFORM_STATUS <platformForkJoinThread-7> StatusStateMachine: Platform spent 158.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node4 6m 1.807s 2025-11-05 05:52:25.137 66 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:4 H:a9953073ef3f BR:296), num remaining: 4
node4 6m 1.809s 2025-11-05 05:52:25.139 67 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:2 H:a9b6ed6eceed BR:296), num remaining: 3
node4 6m 1.809s 2025-11-05 05:52:25.139 68 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:1 H:0b69e5952e52 BR:296), num remaining: 2
node4 6m 1.810s 2025-11-05 05:52:25.140 69 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:0 H:18ae63a5d5f5 BR:296), num remaining: 1
node4 6m 1.810s 2025-11-05 05:52:25.140 70 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:3 H:8d88165c1e83 BR:296), num remaining: 0
node4 6m 2.419s 2025-11-05 05:52:25.749 769 INFO STARTUP <<start-node-4>> PcesReplayer: Replayed 4,011 preconsensus events with max birth round 383. These events contained 5,540 transactions. 84 rounds reached consensus spanning 36.4 seconds of consensus time. The latest round to reach consensus is round 382. Replay took 877.0 milliseconds.
node4 6m 2.421s 2025-11-05 05:52:25.751 770 INFO STARTUP <<app: appMain 4>> ConsistencyTestingToolMain: run called in Main.
node4 6m 2.423s 2025-11-05 05:52:25.753 771 INFO PLATFORM_STATUS <platformForkJoinThread-5> StatusStateMachine: Platform spent 876.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node4 6m 3.314s 2025-11-05 05:52:26.644 842 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Preparing for reconnect, stopping gossip
node4 6m 3.314s 2025-11-05 05:52:26.644 844 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=382,ancientThreshold=355,expiredThreshold=281] remote ev=EventWindow[latestConsensusRound=780,ancientThreshold=753,expiredThreshold=679]
node4 6m 3.314s 2025-11-05 05:52:26.644 846 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=382,ancientThreshold=355,expiredThreshold=281] remote ev=EventWindow[latestConsensusRound=781,ancientThreshold=754,expiredThreshold=680]
node4 6m 3.314s 2025-11-05 05:52:26.644 845 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=382,ancientThreshold=355,expiredThreshold=281] remote ev=EventWindow[latestConsensusRound=781,ancientThreshold=754,expiredThreshold=680]
node4 6m 3.314s 2025-11-05 05:52:26.644 843 INFO RECONNECT <<platform-core: SyncProtocolWith3 4 to 3>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=382,ancientThreshold=355,expiredThreshold=281] remote ev=EventWindow[latestConsensusRound=780,ancientThreshold=753,expiredThreshold=679]
node4 6m 3.314s 2025-11-05 05:52:26.644 847 INFO PLATFORM_STATUS <platformForkJoinThread-5> StatusStateMachine: Platform spent 890.0 ms in OBSERVING. Now in BEHIND
node4 6m 3.315s 2025-11-05 05:52:26.645 848 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Preparing for reconnect, start clearing queues
node1 6m 3.384s 2025-11-05 05:52:26.714 9037 INFO RECONNECT <<platform-core: SyncProtocolWith4 1 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=781,ancientThreshold=754,expiredThreshold=680] remote ev=EventWindow[latestConsensusRound=382,ancientThreshold=355,expiredThreshold=281]
node0 6m 3.385s 2025-11-05 05:52:26.715 9200 INFO RECONNECT <<platform-core: SyncProtocolWith4 0 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=781,ancientThreshold=754,expiredThreshold=680] remote ev=EventWindow[latestConsensusRound=382,ancientThreshold=355,expiredThreshold=281]
node2 6m 3.385s 2025-11-05 05:52:26.715 9303 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=780,ancientThreshold=753,expiredThreshold=679] remote ev=EventWindow[latestConsensusRound=382,ancientThreshold=355,expiredThreshold=281]
node3 6m 3.385s 2025-11-05 05:52:26.715 8957 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=780,ancientThreshold=753,expiredThreshold=679] remote ev=EventWindow[latestConsensusRound=382,ancientThreshold=355,expiredThreshold=281]
node4 6m 3.467s 2025-11-05 05:52:26.797 849 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Queues have been cleared
node4 6m 3.468s 2025-11-05 05:52:26.798 850 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Waiting for a state to be obtained from a peer
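The SELF_FALLEN_BEHIND and OTHER_FALLEN_BEHIND lines above print both sides' EventWindow fields, and the numbers suggest how the decision falls out: node4's latestConsensusRound (382) is below every peer's ancientThreshold (753 or 754), so every event node4 could offer is already ancient to its peers. Below is a minimal sketch of that comparison, assuming the check is a straight round comparison; the EventWindow record, SyncStatus enum, and class name are illustrative stand-ins, not the platform's API.

```java
// Illustrative stand-ins only; not the platform's actual classes or rule.
public final class FallenBehindCheck {

    /** Mirrors the EventWindow fields printed in the log lines above. */
    record EventWindow(long latestConsensusRound, long ancientThreshold, long expiredThreshold) {}

    enum SyncStatus { IN_SYNC, SELF_FALLEN_BEHIND, OTHER_FALLEN_BEHIND }

    /** Assumed rule: a side is behind once its newest round is ancient to the other side. */
    static SyncStatus compare(EventWindow self, EventWindow remote) {
        if (self.latestConsensusRound() < remote.ancientThreshold()) {
            return SyncStatus.SELF_FALLEN_BEHIND;   // node4's view at 05:52:26.644
        }
        if (remote.latestConsensusRound() < self.ancientThreshold()) {
            return SyncStatus.OTHER_FALLEN_BEHIND;  // nodes 0-3's view of node4
        }
        return SyncStatus.IN_SYNC;
    }

    public static void main(String[] args) {
        EventWindow node4 = new EventWindow(382, 355, 281);  // values from the log
        EventWindow node1 = new EventWindow(781, 754, 680);
        System.out.println(compare(node4, node1));  // SELF_FALLEN_BEHIND
        System.out.println(compare(node1, node4));  // OTHER_FALLEN_BEHIND
    }
}
```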
node1 6m 3.560s 2025-11-05 05:52:26.890 9038 INFO RECONNECT <<platform-core: SyncProtocolWith4 1 to 4>> ReconnectStateTeacher: Starting reconnect in the role of the sender {"receiving":false,"nodeId":1,"otherNodeId":4,"round":781} [com.swirlds.logging.legacy.payload.ReconnectStartPayload]
node1 6m 3.561s 2025-11-05 05:52:26.891 9039 INFO RECONNECT <<platform-core: SyncProtocolWith4 1 to 4>> ReconnectStateTeacher: The following state will be sent to the learner:
Round: 781
Timestamp: 2025-11-05T05:52:25.752538Z
Next consensus number: 23329
Legacy running event hash: 262fbe194af515c78e84ed80d9d2b899968ebf4ade4b61966c0fd188c8618756e699164613a0f98d13d309353c6825db
Legacy running event mnemonic: shine-diagram-gadget-rib
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1508808554
Root hash: d057ba01fd11b8e0dde33828748906e7538122a9461e1feef256de4d86d4e61d616f9d692a9cd0805967d1b0af92e425
(root) VirtualMap state / vivid-say-fuel-tomato
node1 6m 3.562s 2025-11-05 05:52:26.892 9040 INFO RECONNECT <<platform-core: SyncProtocolWith4 1 to 4>> ReconnectStateTeacher: Sending signatures from nodes 0, 1, 2 (signing weight = 37500000000/50000000000) for state hash d057ba01fd11b8e0dde33828748906e7538122a9461e1feef256de4d86d4e61d616f9d692a9cd0805967d1b0af92e425
node1 6m 3.562s 2025-11-05 05:52:26.892 9041 INFO RECONNECT <<platform-core: SyncProtocolWith4 1 to 4>> ReconnectStateTeacher: Starting synchronization in the role of the sender.
node4 6m 3.630s 2025-11-05 05:52:26.960 851 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> ReconnectStatePeerProtocol: Starting reconnect in role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":1,"round":382} [com.swirlds.logging.legacy.payload.ReconnectStartPayload]
node4 6m 3.631s 2025-11-05 05:52:26.961 852 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> ReconnectStateLearner: Receiving signed state signatures
node4 6m 3.633s 2025-11-05 05:52:26.963 853 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> ReconnectStateLearner: Received signatures from nodes 0, 1, 2
node1 6m 3.682s 2025-11-05 05:52:27.012 9068 INFO RECONNECT <<platform-core: SyncProtocolWith4 1 to 4>> TeachingSynchronizer: sending tree rooted at com.swirlds.virtualmap.VirtualMap with route []
node1 6m 3.690s 2025-11-05 05:52:27.020 9069 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@34bee436 start run()
node4 6m 3.834s 2025-11-05 05:52:27.164 880 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> LearningSynchronizer: learner calls receiveTree()
node4 6m 3.834s 2025-11-05 05:52:27.164 881 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> LearningSynchronizer: synchronizing tree
node4 6m 3.835s 2025-11-05 05:52:27.165 882 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> LearningSynchronizer: receiving tree rooted at com.swirlds.virtualmap.VirtualMap with route []
node4 6m 3.841s 2025-11-05 05:52:27.171 883 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@65e008f4 start run()
node4 6m 3.900s 2025-11-05 05:52:27.230 884 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> ReconnectNodeRemover: setPathInformation(): firstLeafPath: 4 -> 4, lastLeafPath: 8 -> 8
node4 6m 3.901s 2025-11-05 05:52:27.231 885 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> ReconnectNodeRemover: setPathInformation(): done
node4 6m 4.061s 2025-11-05 05:52:27.391 886 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread finished the learning loop for the current subtree
node4 6m 4.061s 2025-11-05 05:52:27.391 887 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: call nodeRemover.allNodesReceived()
node4 6m 4.062s 2025-11-05 05:52:27.392 888 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> ReconnectNodeRemover: allNodesReceived()
node4 6m 4.062s 2025-11-05 05:52:27.392 889 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> ReconnectNodeRemover: allNodesReceived(): done
node4 6m 4.062s 2025-11-05 05:52:27.392 890 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: call root.endLearnerReconnect()
node4 6m 4.062s 2025-11-05 05:52:27.392 891 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call reconnectIterator.close()
node4 6m 4.063s 2025-11-05 05:52:27.393 892 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call setHashPrivate()
node4 6m 4.084s 2025-11-05 05:52:27.414 902 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call postInit()
node4 6m 4.085s 2025-11-05 05:52:27.415 904 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: endLearnerReconnect() complete
node4 6m 4.085s 2025-11-05 05:52:27.415 905 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: close() complete
node4 6m 4.086s 2025-11-05 05:52:27.416 906 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread closed input, output, and view for the current subtree
node4 6m 4.087s 2025-11-05 05:52:27.417 907 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@65e008f4 finish run()
node4 6m 4.088s 2025-11-05 05:52:27.418 908 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> LearningSynchronizer: received tree rooted at com.swirlds.virtualmap.VirtualMap with route []
node4 6m 4.089s 2025-11-05 05:52:27.419 909 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> LearningSynchronizer: synchronization complete
node4 6m 4.089s 2025-11-05 05:52:27.419 910 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> LearningSynchronizer: learner calls initialize()
node4 6m 4.089s 2025-11-05 05:52:27.419 911 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> LearningSynchronizer: initializing tree
node4 6m 4.089s 2025-11-05 05:52:27.419 912 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> LearningSynchronizer: initialization complete
node4 6m 4.089s 2025-11-05 05:52:27.419 913 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> LearningSynchronizer: learner calls hash()
node4 6m 4.090s 2025-11-05 05:52:27.420 914 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> LearningSynchronizer: hashing tree
node4 6m 4.090s 2025-11-05 05:52:27.420 915 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> LearningSynchronizer: hashing complete
node4 6m 4.090s 2025-11-05 05:52:27.420 916 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> LearningSynchronizer: learner calls logStatistics()
node4 6m 4.093s 2025-11-05 05:52:27.423 917 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> LearningSynchronizer: Finished synchronization {"timeInSeconds":0.255,"hashTimeInSeconds":0.0,"initializationTimeInSeconds":0.0,"totalNodes":9,"leafNodes":5,"redundantLeafNodes":2,"internalNodes":4,"redundantInternalNodes":0} [com.swirlds.logging.legacy.payload.SynchronizationCompletePayload]
node4 6m 4.094s 2025-11-05 05:52:27.424 918 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> LearningSynchronizer: ReconnectMapMetrics: transfersFromTeacher=9; transfersFromLearner=8; internalHashes=3; internalCleanHashes=0; internalData=0; internalCleanData=0; leafHashes=5; leafCleanHashes=2; leafData=5; leafCleanData=2
node4 6m 4.094s 2025-11-05 05:52:27.424 919 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> LearningSynchronizer: learner is done synchronizing
node4 6m 4.095s 2025-11-05 05:52:27.425 920 INFO STARTUP <<platform-core: SyncProtocolWith1 4 to 1>> ConsistencyTestingToolState: New State Constructed.
node4 6m 4.100s 2025-11-05 05:52:27.430 921 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> ReconnectStateLearner: Reconnect data usage report {"dataMegabytes":0.0058650970458984375} [com.swirlds.logging.legacy.payload.ReconnectDataUsagePayload]
node1 6m 4.116s 2025-11-05 05:52:27.446 9073 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@34bee436 finish run()
node1 6m 4.117s 2025-11-05 05:52:27.447 9074 INFO RECONNECT <<platform-core: SyncProtocolWith4 1 to 4>> TeachingSynchronizer: finished sending tree
node1 6m 4.119s 2025-11-05 05:52:27.449 9077 INFO RECONNECT <<platform-core: SyncProtocolWith4 1 to 4>> ReconnectStateTeacher: Finished synchronization in the role of the sender.
node1 6m 4.172s 2025-11-05 05:52:27.502 9078 INFO RECONNECT <<platform-core: SyncProtocolWith4 1 to 4>> ReconnectStateTeacher: Finished reconnect in the role of the sender. {"receiving":false,"nodeId":1,"otherNodeId":4,"round":781} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload]
node4 6m 4.195s 2025-11-05 05:52:27.525 922 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> ReconnectStatePeerProtocol: Finished reconnect in the role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":1,"round":781} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload]
node4 6m 4.197s 2025-11-05 05:52:27.527 923 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> ReconnectStatePeerProtocol: Information for state received during reconnect:
Round: 781
Timestamp: 2025-11-05T05:52:25.752538Z
Next consensus number: 23329
Legacy running event hash: 262fbe194af515c78e84ed80d9d2b899968ebf4ade4b61966c0fd188c8618756e699164613a0f98d13d309353c6825db
Legacy running event mnemonic: shine-diagram-gadget-rib
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1508808554
Root hash: d057ba01fd11b8e0dde33828748906e7538122a9461e1feef256de4d86d4e61d616f9d692a9cd0805967d1b0af92e425
(root) VirtualMap state / vivid-say-fuel-tomato
node4 6m 4.198s 2025-11-05 05:52:27.528 924 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: A state was obtained from a peer
node4 6m 4.200s 2025-11-05 05:52:27.530 925 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: The state obtained from a peer was validated
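Many of the reconnect lines above carry a machine-readable suffix: a JSON payload followed by its payload class in brackets (ReconnectStartPayload, SynchronizationCompletePayload, ReconnectDataUsagePayload, ReconnectFinishPayload). Below is a hypothetical way to lift those payloads out of a line for offline analysis, assuming only the `{...} [class]` shape visible above; the class name and regex are illustrative, not part of the platform.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical log scanner; the {...} [payload.class] shape is taken from the lines above.
public final class ReconnectPayloadScan {

    // Matches the trailing JSON payload and the bracketed payload class at end of line.
    private static final Pattern PAYLOAD = Pattern.compile("(\\{.*\\}) \\[([\\w.]+)\\]$");

    public static void main(String[] args) {
        String line = "LearningSynchronizer: Finished synchronization "
                + "{\"timeInSeconds\":0.255,\"totalNodes\":9,\"leafNodes\":5} "
                + "[com.swirlds.logging.legacy.payload.SynchronizationCompletePayload]";
        Matcher m = PAYLOAD.matcher(line);
        if (m.find()) {
            System.out.println("payload class: " + m.group(2));
            // Pull one numeric field without a full JSON parser.
            Matcher t = Pattern.compile("\"timeInSeconds\":([0-9.]+)").matcher(m.group(1));
            if (t.find()) {
                System.out.println("synchronization took " + Double.parseDouble(t.group(1)) + " s");
            }
        }
    }
}
```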
node4 6m 4.201s 2025-11-05 05:52:27.531 927 DEBUG RECONNECT <<platform-core: reconnectController>> ReconnectController: `loadState` : reloading state
node4 6m 4.202s 2025-11-05 05:52:27.532 928 INFO STARTUP <<platform-core: reconnectController>> ConsistencyTestingToolState: State initialized with state long -6626601352079018391.
node4 6m 4.202s 2025-11-05 05:52:27.532 929 INFO STARTUP <<platform-core: reconnectController>> ConsistencyTestingToolState: State initialized with 780 rounds handled.
node4 6m 4.202s 2025-11-05 05:52:27.532 930 INFO STARTUP <<platform-core: reconnectController>> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 6m 4.203s 2025-11-05 05:52:27.533 931 INFO STARTUP <<platform-core: reconnectController>> TransactionHandlingHistory: Log file found. Parsing previous history
node4 6m 4.217s 2025-11-05 05:52:27.547 936 INFO STATE_TO_DISK <<platform-core: reconnectController>> DefaultSavedStateController: Signed state from round 781 created, will eventually be written to disk, for reason: RECONNECT
node4 6m 4.217s 2025-11-05 05:52:27.547 937 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 901.0 ms in BEHIND. Now in RECONNECT_COMPLETE
node4 6m 4.218s 2025-11-05 05:52:27.548 938 INFO STARTUP <platformForkJoinThread-7> Shadowgraph: Shadowgraph starting from expiration threshold 754
node4 6m 4.219s 2025-11-05 05:52:27.549 941 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 781 state to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/781
node4 6m 4.221s 2025-11-05 05:52:27.551 942 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for round 781
node4 6m 4.229s 2025-11-05 05:52:27.559 948 INFO EVENT_STREAM <<platform-core: reconnectController>> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 262fbe194af515c78e84ed80d9d2b899968ebf4ade4b61966c0fd188c8618756e699164613a0f98d13d309353c6825db
node4 6m 4.230s 2025-11-05 05:52:27.560 950 INFO STARTUP <platformForkJoinThread-5> PcesFileManager: Due to recent operations on this node, the local preconsensus event stream will have a discontinuity. The last file with the old origin round is 2025-11-05T05+46+39.616014744Z_seq0_minr1_maxr383_orgn0.pces. All future files will have an origin round of 781.
node4 6m 4.231s 2025-11-05 05:52:27.561 951 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Reconnect almost done resuming gossip
node4 6m 4.362s 2025-11-05 05:52:27.692 980 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for round 781
node4 6m 4.365s 2025-11-05 05:52:27.695 981 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 781
Timestamp: 2025-11-05T05:52:25.752538Z
Next consensus number: 23329
Legacy running event hash: 262fbe194af515c78e84ed80d9d2b899968ebf4ade4b61966c0fd188c8618756e699164613a0f98d13d309353c6825db
Legacy running event mnemonic: shine-diagram-gadget-rib
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1508808554
Root hash: d057ba01fd11b8e0dde33828748906e7538122a9461e1feef256de4d86d4e61d616f9d692a9cd0805967d1b0af92e425
(root) VirtualMap state / vivid-say-fuel-tomato
node4 6m 4.399s 2025-11-05 05:52:27.729 982 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+46+39.616014744Z_seq0_minr1_maxr383_orgn0.pces
node4 6m 4.399s 2025-11-05 05:52:27.729 983 WARN STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: No preconsensus event files meeting specified criteria found to copy. Lower bound: 754
node4 6m 4.404s 2025-11-05 05:52:27.734 984 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 781 to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/781 {"round":781,"freezeState":false,"reason":"RECONNECT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/781/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 6m 4.407s 2025-11-05 05:52:27.737 985 INFO PLATFORM_STATUS <platformForkJoinThread-7> StatusStateMachine: Platform spent 189.0 ms in RECONNECT_COMPLETE. Now in CHECKING
node4 6m 4.545s 2025-11-05 05:52:27.875 986 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ]
node4 6m 4.548s 2025-11-05 05:52:27.878 987 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node4 6m 5.374s 2025-11-05 05:52:28.704 988 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:0 H:1d3a1b618440 BR:779), num remaining: 3
node4 6m 5.375s 2025-11-05 05:52:28.705 989 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:2 H:bcc2322e0fca BR:779), num remaining: 2
node4 6m 5.375s 2025-11-05 05:52:28.705 990 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:1 H:ea52681326d0 BR:779), num remaining: 1
node4 6m 5.375s 2025-11-05 05:52:28.705 991 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:3 H:4caf3d82cce9 BR:779), num remaining: 0
node4 6m 9.026s 2025-11-05 05:52:32.356 1113 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 4.6 s in CHECKING. Now in ACTIVE
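The StatusStateMachine lines trace node4's whole recovery: REPLAYING_EVENTS, OBSERVING, BEHIND, RECONNECT_COMPLETE, CHECKING, and finally ACTIVE, each line reporting the time spent in the previous status. Below is a small sketch that tallies those durations from lines in this shape; the parsing is inferred from the log text above rather than from any platform API.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical timeline tally over the "Platform spent <t> in <S>. Now in <T>" lines above.
public final class StatusTimeline {

    private static final Pattern LINE =
            Pattern.compile("Platform spent ([0-9.]+) (ms|s) in (\\w+)\\. Now in (\\w+)");

    public static void main(String[] args) {
        // node4's five status transitions, copied from this log.
        List<String> lines = List.of(
                "Platform spent 876.0 ms in REPLAYING_EVENTS. Now in OBSERVING",
                "Platform spent 890.0 ms in OBSERVING. Now in BEHIND",
                "Platform spent 901.0 ms in BEHIND. Now in RECONNECT_COMPLETE",
                "Platform spent 189.0 ms in RECONNECT_COMPLETE. Now in CHECKING",
                "Platform spent 4.6 s in CHECKING. Now in ACTIVE");
        Map<String, Double> millisByStatus = new LinkedHashMap<>();
        for (String line : lines) {
            Matcher m = LINE.matcher(line);
            if (m.find()) {
                double ms = Double.parseDouble(m.group(1)) * ("s".equals(m.group(2)) ? 1000 : 1);
                millisByStatus.put(m.group(3), ms);
            }
        }
        millisByStatus.forEach((s, ms) -> System.out.printf("%-18s %8.1f ms%n", s, ms));
        System.out.printf("total across these statuses: %.1f ms%n",
                millisByStatus.values().stream().mapToDouble(Double::doubleValue).sum());
    }
}
```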
node3 6m 37.850s 2025-11-05 05:53:01.180 9791 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 858 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 6m 37.864s 2025-11-05 05:53:01.194 9898 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 858 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 6m 37.927s 2025-11-05 05:53:01.257 10042 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 858 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 6m 37.944s 2025-11-05 05:53:01.274 1805 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 858 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 6m 37.996s 2025-11-05 05:53:01.326 10139 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 858 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 6m 38.060s 2025-11-05 05:53:01.390 10142 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 858 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/858
node2 6m 38.061s 2025-11-05 05:53:01.391 10143 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 858
node1 6m 38.107s 2025-11-05 05:53:01.437 9901 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 858 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/858
node1 6m 38.107s 2025-11-05 05:53:01.437 9902 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for round 858
node3 6m 38.130s 2025-11-05 05:53:01.460 9794 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 858 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/858
node3 6m 38.131s 2025-11-05 05:53:01.461 9795 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 858
node4 6m 38.132s 2025-11-05 05:53:01.462 1808 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 858 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/858
node4 6m 38.132s 2025-11-05 05:53:01.462 1809 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for round 858
node2 6m 38.138s 2025-11-05 05:53:01.468 10174 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 858
node2 6m 38.140s 2025-11-05 05:53:01.470 10175 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 858
Timestamp: 2025-11-05T05:53:00.307063Z
Next consensus number: 25937
Legacy running event hash: 502e8e31ac70f3d3662350c1571b9372a0e7e1dc587f9657a3dfce67638b09674c5bb8f8eab1ac1f49292895058e3961
Legacy running event mnemonic: alarm-delay-improve-mad
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 953261718
Root hash: 21349d4e70fe1ef4e30986c65834fc91e79ac04109b1fc36bfd23c5004e2e2e4c54bf4e745e6183268610be1bf045e0c
(root) VirtualMap state / charge-film-cabin-teach
node2 6m 38.147s 2025-11-05 05:53:01.477 10176 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+46+39.468093364Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+50+28.937540264Z_seq1_minr474_maxr5474_orgn0.pces
node2 6m 38.150s 2025-11-05 05:53:01.480 10177 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 831
File: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+50+28.937540264Z_seq1_minr474_maxr5474_orgn0.pces
node2 6m 38.150s 2025-11-05 05:53:01.480 10178 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 6m 38.156s 2025-11-05 05:53:01.486 10179 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 6m 38.157s 2025-11-05 05:53:01.487 10180 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 858 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/858 {"round":858,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/858/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 6m 38.158s 2025-11-05 05:53:01.488 10181 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/169
node0 6m 38.177s 2025-11-05 05:53:01.507 10055 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 858 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/858
node0 6m 38.178s 2025-11-05 05:53:01.508 10056 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 858
node1 6m 38.185s 2025-11-05 05:53:01.515 9933 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for round 858
node1 6m 38.187s 2025-11-05 05:53:01.517 9934 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 858
Timestamp: 2025-11-05T05:53:00.307063Z
Next consensus number: 25937
Legacy running event hash: 502e8e31ac70f3d3662350c1571b9372a0e7e1dc587f9657a3dfce67638b09674c5bb8f8eab1ac1f49292895058e3961
Legacy running event mnemonic: alarm-delay-improve-mad
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 953261718
Root hash: 21349d4e70fe1ef4e30986c65834fc91e79ac04109b1fc36bfd23c5004e2e2e4c54bf4e745e6183268610be1bf045e0c
(root) VirtualMap state / charge-film-cabin-teach
node1 6m 38.194s 2025-11-05 05:53:01.524 9935 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+46+39.483097484Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+50+29.003908476Z_seq1_minr474_maxr5474_orgn0.pces
node1 6m 38.194s 2025-11-05 05:53:01.524 9936 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 831
File: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+50+29.003908476Z_seq1_minr474_maxr5474_orgn0.pces
node1 6m 38.194s 2025-11-05 05:53:01.524 9937 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 6m 38.201s 2025-11-05 05:53:01.531 9938 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 6m 38.201s 2025-11-05 05:53:01.531 9939 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 858 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/858 {"round":858,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/858/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 6m 38.203s 2025-11-05 05:53:01.533 9940 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/169
node3 6m 38.212s 2025-11-05 05:53:01.542 9834 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 858
node3 6m 38.214s 2025-11-05 05:53:01.544 9835 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 858
Timestamp: 2025-11-05T05:53:00.307063Z
Next consensus number: 25937
Legacy running event hash: 502e8e31ac70f3d3662350c1571b9372a0e7e1dc587f9657a3dfce67638b09674c5bb8f8eab1ac1f49292895058e3961
Legacy running event mnemonic: alarm-delay-improve-mad
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 953261718
Root hash: 21349d4e70fe1ef4e30986c65834fc91e79ac04109b1fc36bfd23c5004e2e2e4c54bf4e745e6183268610be1bf045e0c
(root) VirtualMap state / charge-film-cabin-teach
node3 6m 38.220s 2025-11-05 05:53:01.550 9836 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+50+29.071967607Z_seq1_minr474_maxr5474_orgn0.pces
Last file: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+46+39.682298882Z_seq0_minr1_maxr501_orgn0.pces
node3 6m 38.223s 2025-11-05 05:53:01.553 9837 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 831
File: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+50+29.071967607Z_seq1_minr474_maxr5474_orgn0.pces
node3 6m 38.223s 2025-11-05 05:53:01.553 9838 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 6m 38.230s 2025-11-05 05:53:01.560 9839 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 6m 38.231s 2025-11-05 05:53:01.561 9840 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 858 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/858 {"round":858,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/858/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 6m 38.232s 2025-11-05 05:53:01.562 9841 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/169
node4 6m 38.263s 2025-11-05 05:53:01.593 1850 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for round 858
node4 6m 38.265s 2025-11-05 05:53:01.595 1851 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 858
Timestamp: 2025-11-05T05:53:00.307063Z
Next consensus number: 25937
Legacy running event hash: 502e8e31ac70f3d3662350c1571b9372a0e7e1dc587f9657a3dfce67638b09674c5bb8f8eab1ac1f49292895058e3961
Legacy running event mnemonic: alarm-delay-improve-mad
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 953261718
Root hash: 21349d4e70fe1ef4e30986c65834fc91e79ac04109b1fc36bfd23c5004e2e2e4c54bf4e745e6183268610be1bf045e0c
(root) VirtualMap state / charge-film-cabin-teach
node0 6m 38.266s 2025-11-05 05:53:01.596 10087 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 858
node0 6m 38.268s 2025-11-05 05:53:01.598 10088 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 858
Timestamp: 2025-11-05T05:53:00.307063Z
Next consensus number: 25937
Legacy running event hash: 502e8e31ac70f3d3662350c1571b9372a0e7e1dc587f9657a3dfce67638b09674c5bb8f8eab1ac1f49292895058e3961
Legacy running event mnemonic: alarm-delay-improve-mad
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 953261718
Root hash: 21349d4e70fe1ef4e30986c65834fc91e79ac04109b1fc36bfd23c5004e2e2e4c54bf4e745e6183268610be1bf045e0c
(root) VirtualMap state / charge-film-cabin-teach
node4 6m 38.273s 2025-11-05 05:53:01.603 1852 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+52+28.136892619Z_seq1_minr754_maxr1254_orgn781.pces
Last file: data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+46+39.616014744Z_seq0_minr1_maxr383_orgn0.pces
node4 6m 38.273s 2025-11-05 05:53:01.603 1853 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 831
File: data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+52+28.136892619Z_seq1_minr754_maxr1254_orgn781.pces
node4 6m 38.273s 2025-11-05 05:53:01.603 1854 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 6m 38.276s 2025-11-05 05:53:01.606 10089 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+46+39.677276419Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+50+29.022108715Z_seq1_minr474_maxr5474_orgn0.pces
node4 6m 38.277s 2025-11-05 05:53:01.607 1855 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 6m 38.278s 2025-11-05 05:53:01.608 1856 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 858 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/858 {"round":858,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/858/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 6m 38.279s 2025-11-05 05:53:01.609 10090 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 831
File: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+50+29.022108715Z_seq1_minr474_maxr5474_orgn0.pces
node0 6m 38.279s 2025-11-05 05:53:01.609 10091 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 6m 38.280s 2025-11-05 05:53:01.610 1857 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2
node0 6m 38.285s 2025-11-05 05:53:01.615 10092 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 6m 38.286s 2025-11-05 05:53:01.616 10093 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 858 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/858 {"round":858,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/858/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 6m 38.287s 2025-11-05 05:53:01.617 10094 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/169
node1 7m 37.994s 2025-11-05 05:54:01.324 11378 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 989 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 7m 38.011s 2025-11-05 05:54:01.341 11299 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 989 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 7m 38.015s 2025-11-05 05:54:01.345 3275 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 989 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 7m 38.069s 2025-11-05 05:54:01.399 11629 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 989 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 7m 38.078s 2025-11-05 05:54:01.408 11534 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 989 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 7m 38.169s 2025-11-05 05:54:01.499 11539 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 989 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/989
node0 7m 38.170s 2025-11-05 05:54:01.500 11540 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 989
node3 7m 38.235s 2025-11-05 05:54:01.565 11302 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 989 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/989
node3 7m 38.236s 2025-11-05 05:54:01.566 11303 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 989
node1 7m 38.240s 2025-11-05 05:54:01.570 11381 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 989 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/989
node1 7m 38.240s 2025-11-05 05:54:01.570 11382 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/59 for round 989
node0 7m 38.251s 2025-11-05 05:54:01.581 11583 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 989
node0 7m 38.254s 2025-11-05 05:54:01.584 11584 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 989
Timestamp: 2025-11-05T05:54:00.307176Z
Next consensus number: 30761
Legacy running event hash: af43fa196d1769a6e909d1566ca471690ea9292aca4641d95faf6f4c558691043320933248364658c99d8bc95d31bd55
Legacy running event mnemonic: bless-rude-pepper-crumble
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -388730970
Root hash: 8924a00b1a6b255eec1097290070591392cd53dab1feb9e5b582c891991d97f1946540f03ab1c4522589000d8cb2ad39
(root) VirtualMap state / swear-wrong-estate-profit
node0 7m 38.260s 2025-11-05 05:54:01.590 11585 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+46+39.677276419Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+50+29.022108715Z_seq1_minr474_maxr5474_orgn0.pces
node0 7m 38.260s 2025-11-05 05:54:01.590 11586 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 962
File: data/saved/preconsensus-events/0/2025/11/05/2025-11-05T05+50+29.022108715Z_seq1_minr474_maxr5474_orgn0.pces
node0 7m 38.260s 2025-11-05 05:54:01.590 11587 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 7m 38.267s 2025-11-05 05:54:01.597 3278 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 989 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/989
node4 7m 38.268s 2025-11-05 05:54:01.598 3279 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/17 for round 989
node0 7m 38.270s 2025-11-05 05:54:01.600 11588 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 7m 38.270s 2025-11-05 05:54:01.600 11589 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 989 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/989 {"round":989,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/989/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 7m 38.272s 2025-11-05 05:54:01.602 11590 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/298
node2 7m 38.307s 2025-11-05 05:54:01.637 11632 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 989 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/989
node2 7m 38.307s 2025-11-05 05:54:01.637 11633 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 989
node1 7m 38.318s 2025-11-05 05:54:01.648 11415 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/59 for round 989
node1 7m 38.319s 2025-11-05 05:54:01.649 11416 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 989
Timestamp: 2025-11-05T05:54:00.307176Z
Next consensus number: 30761
Legacy running event hash: af43fa196d1769a6e909d1566ca471690ea9292aca4641d95faf6f4c558691043320933248364658c99d8bc95d31bd55
Legacy running event mnemonic: bless-rude-pepper-crumble
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -388730970
Root hash: 8924a00b1a6b255eec1097290070591392cd53dab1feb9e5b582c891991d97f1946540f03ab1c4522589000d8cb2ad39
(root) VirtualMap state / swear-wrong-estate-profit
node3 7m 38.320s 2025-11-05 05:54:01.650 11336 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 989
node3 7m 38.322s 2025-11-05 05:54:01.652 11337 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 989
Timestamp: 2025-11-05T05:54:00.307176Z
Next consensus number: 30761
Legacy running event hash: af43fa196d1769a6e909d1566ca471690ea9292aca4641d95faf6f4c558691043320933248364658c99d8bc95d31bd55
Legacy running event mnemonic: bless-rude-pepper-crumble
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -388730970
Root hash: 8924a00b1a6b255eec1097290070591392cd53dab1feb9e5b582c891991d97f1946540f03ab1c4522589000d8cb2ad39
(root) VirtualMap state / swear-wrong-estate-profit
node1 7m 38.326s 2025-11-05 05:54:01.656 11417 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+46+39.483097484Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+50+29.003908476Z_seq1_minr474_maxr5474_orgn0.pces
node1 7m 38.326s 2025-11-05 05:54:01.656 11418 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 962
File: data/saved/preconsensus-events/1/2025/11/05/2025-11-05T05+50+29.003908476Z_seq1_minr474_maxr5474_orgn0.pces
node1 7m 38.326s 2025-11-05 05:54:01.656 11419 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 7m 38.330s 2025-11-05 05:54:01.660 11338 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+50+29.071967607Z_seq1_minr474_maxr5474_orgn0.pces
Last file: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+46+39.682298882Z_seq0_minr1_maxr501_orgn0.pces
node3 7m 38.330s 2025-11-05 05:54:01.660 11339 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 962
File: data/saved/preconsensus-events/3/2025/11/05/2025-11-05T05+50+29.071967607Z_seq1_minr474_maxr5474_orgn0.pces
node3 7m 38.330s 2025-11-05 05:54:01.660 11340 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 7m 38.336s 2025-11-05 05:54:01.666 11420 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 7m 38.336s 2025-11-05 05:54:01.666 11421 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 989 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/989 {"round":989,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/989/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 7m 38.338s 2025-11-05 05:54:01.668 11422 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/298
node3 7m 38.340s 2025-11-05 05:54:01.670 11341 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 7m 38.341s 2025-11-05 05:54:01.671 11342 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 989 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/989 {"round":989,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/989/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 7m 38.342s 2025-11-05 05:54:01.672 11343 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/298
node2 7m 38.393s 2025-11-05 05:54:01.723 11666 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 989
node2 7m 38.395s 2025-11-05 05:54:01.725 11667 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 989
Timestamp: 2025-11-05T05:54:00.307176Z
Next consensus number: 30761
Legacy running event hash: af43fa196d1769a6e909d1566ca471690ea9292aca4641d95faf6f4c558691043320933248364658c99d8bc95d31bd55
Legacy running event mnemonic: bless-rude-pepper-crumble
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -388730970
Root hash: 8924a00b1a6b255eec1097290070591392cd53dab1feb9e5b582c891991d97f1946540f03ab1c4522589000d8cb2ad39
(root) VirtualMap state / swear-wrong-estate-profit
node4 7m 38.399s 2025-11-05 05:54:01.729 3332 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/17 for round 989
node2 7m 38.401s 2025-11-05 05:54:01.731 11668 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+46+39.468093364Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+50+28.937540264Z_seq1_minr474_maxr5474_orgn0.pces
node2 7m 38.401s 2025-11-05 05:54:01.731 11669 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 962
File: data/saved/preconsensus-events/2/2025/11/05/2025-11-05T05+50+28.937540264Z_seq1_minr474_maxr5474_orgn0.pces
node4 7m 38.401s 2025-11-05 05:54:01.731 3333 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 989
Timestamp: 2025-11-05T05:54:00.307176Z
Next consensus number: 30761
Legacy running event hash: af43fa196d1769a6e909d1566ca471690ea9292aca4641d95faf6f4c558691043320933248364658c99d8bc95d31bd55
Legacy running event mnemonic: bless-rude-pepper-crumble
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -388730970
Root hash: 8924a00b1a6b255eec1097290070591392cd53dab1feb9e5b582c891991d97f1946540f03ab1c4522589000d8cb2ad39
(root) VirtualMap state / swear-wrong-estate-profit
node2 7m 38.402s 2025-11-05 05:54:01.732 11670 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 7m 38.408s 2025-11-05 05:54:01.738 3334 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+52+28.136892619Z_seq1_minr754_maxr1254_orgn781.pces
Last file: data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+46+39.616014744Z_seq0_minr1_maxr383_orgn0.pces
node4 7m 38.408s 2025-11-05 05:54:01.738 3335 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 962
File: data/saved/preconsensus-events/4/2025/11/05/2025-11-05T05+52+28.136892619Z_seq1_minr754_maxr1254_orgn781.pces
node4 7m 38.409s 2025-11-05 05:54:01.739 3336 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 7m 38.411s 2025-11-05 05:54:01.741 11685 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 7m 38.412s 2025-11-05 05:54:01.742 11686 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 989 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/989 {"round":989,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/989/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 7m 38.413s 2025-11-05 05:54:01.743 11687 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/298
node4 7m 38.414s 2025-11-05 05:54:01.744 3337 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 7m 38.415s 2025-11-05 05:54:01.745 3338 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 989 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/989 {"round":989,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/989/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 7m 38.416s 2025-11-05 05:54:01.746 3339 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/39
node3 7m 58.346s 2025-11-05 05:54:21.676 11823 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 3 to 4>> NetworkUtils: Connection broken: 3 -> 4
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-11-05T05:54:21.675680556Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:388)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:432)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
node3 7m 58.448s 2025-11-05 05:54:21.778 11824 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith1 3 to 1>> NetworkUtils: Connection broken: 3 <- 1
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-11-05T05:54:21.774157668Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:388)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:432)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
node3 7m 58.482s 2025-11-05 05:54:21.812 11825 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith0 3 to 0>> NetworkUtils: Connection broken: 3 <- 0
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-11-05T05:54:21.809498058Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:388)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:432)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
node3 7m 58.562s 2025-11-05 05:54:21.892 11826 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith2 3 to 2>> NetworkUtils: Connection broken: 3 <- 2
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-11-05T05:54:21.892041089Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:70)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:388)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:432)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
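All four traces share one mechanism: the frames show CachedPoolParallelExecutor.doParallelWithHandler running the protocol's read loop and write loop as two tasks on a thread pool, and once the socket dies both tasks fail, with one failure becoming the cause chain and the other attached via addSuppressed. The sketch below reproduces that exception shape in plain java.util.concurrent; it is a hypothetical analogue for illustration, not the Swirlds implementation, and the names ParallelPairSketch, PairExecutionException, and doParallel are invented.

import java.time.Instant;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical stand-in for ParallelExecutionException; its message mirrors
// the "Time thrown: ..." prefix seen in the log lines above.
class PairExecutionException extends Exception {
    PairExecutionException(Throwable cause) {
        super("Time thrown: " + Instant.now(), cause);
    }
}

public final class ParallelPairSketch {

    private static final ExecutorService POOL = Executors.newCachedThreadPool();

    // Submits a reader and a writer task in parallel and waits for both.
    // When both fail (typical once the underlying socket dies), the first
    // observed failure becomes the cause chain and the second is attached
    // with addSuppressed(), which is exactly the Suppressed/Caused-by
    // layout printed in the traces above.
    static void doParallel(Callable<Void> reader, Callable<Void> writer)
            throws PairExecutionException {
        final Future<Void> readTask = POOL.submit(reader);
        final Future<Void> writeTask = POOL.submit(writer);

        PairExecutionException failure = null;
        for (Future<Void> task : List.of(readTask, writeTask)) {
            try {
                task.get(); // a failed task surfaces here as ExecutionException
            } catch (ExecutionException e) {
                if (failure == null) {
                    failure = new PairExecutionException(e); // primary cause
                } else {
                    failure.addSuppressed(e); // sibling's failure, kept visible
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve interrupt status
                if (failure == null) {
                    failure = new PairExecutionException(e);
                } else {
                    failure.addSuppressed(e);
                }
            }
        }
        if (failure != null) {
            throw failure;
        }
    }
}

Under this shape, the failure worth chasing is always the deepest Caused by: here java.net.SocketException: Connection reset inside readMessages, i.e. the remote peer closed the TCP connection. The suppressed writer failure ("Connection or outbound has closed") is a symptom of the same dead socket, not an independent fault.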