Node ID | Elapsed Time | Timestamp | Seq | Log Level | Log Marker | Thread | Class | Message
node0 0.000ns 2025-10-08 17:09:31.072 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node0 92.000ms 2025-10-08 17:09:31.164 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node0 108.000ms 2025-10-08 17:09:31.180 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node0 226.000ms 2025-10-08 17:09:31.298 4 INFO STARTUP <main> Browser: The following nodes [0] are set to run locally
node0 233.000ms 2025-10-08 17:09:31.305 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node0 245.000ms 2025-10-08 17:09:31.317 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 558.000ms 2025-10-08 17:09:31.630 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node4 650.000ms 2025-10-08 17:09:31.722 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node0 664.000ms 2025-10-08 17:09:31.736 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node0 665.000ms 2025-10-08 17:09:31.737 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node4 666.000ms 2025-10-08 17:09:31.738 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 748.000ms 2025-10-08 17:09:31.820 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node4 785.000ms 2025-10-08 17:09:31.857 4 INFO STARTUP <main> Browser: The following nodes [4] are set to run locally
node4 793.000ms 2025-10-08 17:09:31.865 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node4 806.000ms 2025-10-08 17:09:31.878 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node2 839.000ms 2025-10-08 17:09:31.911 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node2 855.000ms 2025-10-08 17:09:31.927 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 967.000ms 2025-10-08 17:09:32.039 4 INFO STARTUP <main> Browser: The following nodes [2] are set to run locally
node2 973.000ms 2025-10-08 17:09:32.045 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node2 986.000ms 2025-10-08 17:09:32.058 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 1.256s 2025-10-08 17:09:32.328 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node4 1.257s 2025-10-08 17:09:32.329 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node2 1.409s 2025-10-08 17:09:32.481 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node2 1.410s 2025-10-08 17:09:32.482 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node0 1.536s 2025-10-08 17:09:32.608 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 870ms
node0 1.547s 2025-10-08 17:09:32.619 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node0 1.551s 2025-10-08 17:09:32.623 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node0 1.593s 2025-10-08 17:09:32.665 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node0 1.655s 2025-10-08 17:09:32.727 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node0 1.656s 2025-10-08 17:09:32.728 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node1 1.963s 2025-10-08 17:09:33.035 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node1 2.073s 2025-10-08 17:09:33.145 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node1 2.093s 2025-10-08 17:09:33.165 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node1 2.246s 2025-10-08 17:09:33.318 4 INFO STARTUP <main> Browser: The following nodes [1] are set to run locally
node1 2.253s 2025-10-08 17:09:33.325 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node1 2.267s 2025-10-08 17:09:33.339 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node2 2.291s 2025-10-08 17:09:33.363 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 881ms
node2 2.300s 2025-10-08 17:09:33.372 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node2 2.303s 2025-10-08 17:09:33.375 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 2.342s 2025-10-08 17:09:33.414 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node4 2.393s 2025-10-08 17:09:33.465 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1135ms
node4 2.403s 2025-10-08 17:09:33.475 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node4 2.407s 2025-10-08 17:09:33.479 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 2.414s 2025-10-08 17:09:33.486 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node2 2.415s 2025-10-08 17:09:33.487 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node4 2.452s 2025-10-08 17:09:33.524 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node4 2.519s 2025-10-08 17:09:33.591 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node4 2.520s 2025-10-08 17:09:33.592 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node3 2.530s 2025-10-08 17:09:33.602 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node3 2.626s 2025-10-08 17:09:33.698 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node3 2.642s 2025-10-08 17:09:33.714 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node1 2.722s 2025-10-08 17:09:33.794 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node1 2.723s 2025-10-08 17:09:33.795 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node3 2.766s 2025-10-08 17:09:33.838 4 INFO STARTUP <main> Browser: The following nodes [3] are set to run locally
node3 2.774s 2025-10-08 17:09:33.846 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node3 2.788s 2025-10-08 17:09:33.860 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node3 3.256s 2025-10-08 17:09:34.328 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node3 3.257s 2025-10-08 17:09:34.329 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node0 3.666s 2025-10-08 17:09:34.738 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node0 3.746s 2025-10-08 17:09:34.818 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 3.748s 2025-10-08 17:09:34.820 21 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node0 3.749s 2025-10-08 17:09:34.821 22 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node1 3.827s 2025-10-08 17:09:34.899 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1102ms
node1 3.836s 2025-10-08 17:09:34.908 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node1 3.840s 2025-10-08 17:09:34.912 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node1 3.885s 2025-10-08 17:09:34.957 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node1 3.953s 2025-10-08 17:09:35.025 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node1 3.954s 2025-10-08 17:09:35.026 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node3 4.243s 2025-10-08 17:09:35.315 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 986ms
node3 4.252s 2025-10-08 17:09:35.324 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node3 4.256s 2025-10-08 17:09:35.328 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node3 4.296s 2025-10-08 17:09:35.368 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node3 4.357s 2025-10-08 17:09:35.429 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node3 4.358s 2025-10-08 17:09:35.430 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node2 4.472s 2025-10-08 17:09:35.544 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node4 4.505s 2025-10-08 17:09:35.577 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node0 4.508s 2025-10-08 17:09:35.580 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 4.510s 2025-10-08 17:09:35.582 32 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node0 4.515s 2025-10-08 17:09:35.587 33 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node0 4.526s 2025-10-08 17:09:35.598 34 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 4.528s 2025-10-08 17:09:35.600 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 4.565s 2025-10-08 17:09:35.637 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 4.569s 2025-10-08 17:09:35.641 21 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node2 4.570s 2025-10-08 17:09:35.642 22 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 4.588s 2025-10-08 17:09:35.660 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 4.590s 2025-10-08 17:09:35.662 21 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node4 4.591s 2025-10-08 17:09:35.663 22 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node2 5.353s 2025-10-08 17:09:36.425 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 5.355s 2025-10-08 17:09:36.427 32 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node2 5.360s 2025-10-08 17:09:36.432 33 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node2 5.370s 2025-10-08 17:09:36.442 34 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 5.372s 2025-10-08 17:09:36.444 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5.393s 2025-10-08 17:09:36.465 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5.395s 2025-10-08 17:09:36.467 32 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node4 5.401s 2025-10-08 17:09:36.473 33 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node4 5.413s 2025-10-08 17:09:36.485 34 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5.415s 2025-10-08 17:09:36.487 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 5.626s 2025-10-08 17:09:36.698 36 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26399230]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=130629, randomLong=-5679396418975678968, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=9460, randomLong=-4296585529488174881, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1122928, data=35, exception=null]
OS Health Check Report - Complete (took 1021 ms)
node0 5.657s 2025-10-08 17:09:36.729 37 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node0 5.664s 2025-10-08 17:09:36.736 38 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node0 5.667s 2025-10-08 17:09:36.739 39 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node0 5.748s 2025-10-08 17:09:36.820 40 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Ih7XpQ==", "port": 30124 }, { "ipAddressV4": "CoAAFQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "Ijfjtw==", "port": 30125 }, { "ipAddressV4": "CoAAEw==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "iHFp6Q==", "port": 30126 }, { "ipAddressV4": "CoAAEg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IqsCkA==", "port": 30127 }, { "ipAddressV4": "CoAADw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "aMW9Aw==", "port": 30128 }, { "ipAddressV4": "CoAACw==", "port": 30128 }] }] }
node0 5.770s 2025-10-08 17:09:36.842 41 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/0/ConsistencyTestLog.csv
node0 5.770s 2025-10-08 17:09:36.842 42 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node0 5.785s 2025-10-08 17:09:36.857 43 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 15c3434f2c6c307d961071b99a6d75d2913d49b399053546d5a91d65ca8564ed71fa78726db2d07d6dfdba79d15b3798
(root) ConsistencyTestingToolState / reveal-hazard-mirror-autumn
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate
  1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
  2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
node0 6.012s 2025-10-08 17:09:37.084 45 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node0 6.016s 2025-10-08 17:09:37.088 46 INFO STARTUP <platformForkJoinThread-1> Shadowgraph: Shadowgraph starting from expiration threshold 1
node0 6.021s 2025-10-08 17:09:37.093 47 INFO STARTUP <<start-node-0>> ConsistencyTestingToolMain: init called in Main for node 0.
node0 6.022s 2025-10-08 17:09:37.094 48 INFO STARTUP <<start-node-0>> SwirldsPlatform: Starting platform 0
node0 6.023s 2025-10-08 17:09:37.095 49 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node0 6.026s 2025-10-08 17:09:37.098 50 INFO STARTUP <<start-node-0>> CycleFinder: No cyclical back pressure detected in wiring model.
node0 6.027s 2025-10-08 17:09:37.099 51 INFO STARTUP <<start-node-0>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node0 6.028s 2025-10-08 17:09:37.100 52 INFO STARTUP <<start-node-0>> InputWireChecks: All input wires have been bound.
node0 6.029s 2025-10-08 17:09:37.101 53 WARN STARTUP <<start-node-0>> PcesFileTracker: No preconsensus event files available
node0 6.029s 2025-10-08 17:09:37.101 54 INFO STARTUP <<start-node-0>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node0 6.031s 2025-10-08 17:09:37.103 55 INFO STARTUP <<start-node-0>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node0 6.032s 2025-10-08 17:09:37.104 56 INFO STARTUP <<app: appMain 0>> ConsistencyTestingToolMain: run called in Main.
node0 6.036s 2025-10-08 17:09:37.108 57 INFO PLATFORM_STATUS <platformForkJoinThread-6> StatusStateMachine: Platform spent 195.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node1 6.040s 2025-10-08 17:09:37.112 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node0 6.041s 2025-10-08 17:09:37.113 58 INFO PLATFORM_STATUS <platformForkJoinThread-6> StatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node1 6.139s 2025-10-08 17:09:37.211 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 6.142s 2025-10-08 17:09:37.214 21 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node1 6.143s 2025-10-08 17:09:37.215 22 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node3 6.378s 2025-10-08 17:09:37.450 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node3 6.459s 2025-10-08 17:09:37.531 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 6.461s 2025-10-08 17:09:37.533 21 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node3 6.462s 2025-10-08 17:09:37.534 22 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node2 6.484s 2025-10-08 17:09:37.556 36 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26256531]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=159740, randomLong=-8512134415293555060, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=9800, randomLong=1800687023030000414, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1288449, data=35, exception=null]
OS Health Check Report - Complete (took 1026 ms)
node2 6.514s 2025-10-08 17:09:37.586 37 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node2 6.521s 2025-10-08 17:09:37.593 38 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node2 6.524s 2025-10-08 17:09:37.596 39 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node4 6.530s 2025-10-08 17:09:37.602 36 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26253956]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=187150, randomLong=-1188283333108021388, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=9729, randomLong=-2355049751338088068, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1510040, data=35, exception=null]
OS Health Check Report - Complete (took 1025 ms)
node4 6.563s 2025-10-08 17:09:37.635 37 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node4 6.571s 2025-10-08 17:09:37.643 38 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node4 6.574s 2025-10-08 17:09:37.646 39 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node2 6.601s 2025-10-08 17:09:37.673 40 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Ih7XpQ==", "port": 30124 }, { "ipAddressV4": "CoAAFQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "Ijfjtw==", "port": 30125 }, { "ipAddressV4": "CoAAEw==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "iHFp6Q==", "port": 30126 }, { "ipAddressV4": "CoAAEg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IqsCkA==", "port": 30127 }, { "ipAddressV4": "CoAADw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "aMW9Aw==", "port": 30128 }, { "ipAddressV4": "CoAACw==", "port": 30128 }] }] }
node2 6.621s 2025-10-08 17:09:37.693 41 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/2/ConsistencyTestLog.csv
node2 6.622s 2025-10-08 17:09:37.694 42 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node2 6.635s 2025-10-08 17:09:37.707 43 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 15c3434f2c6c307d961071b99a6d75d2913d49b399053546d5a91d65ca8564ed71fa78726db2d07d6dfdba79d15b3798
(root) ConsistencyTestingToolState / reveal-hazard-mirror-autumn
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate
  1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
  2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
node4 6.665s 2025-10-08 17:09:37.737 40 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Ih7XpQ==", "port": 30124 }, { "ipAddressV4": "CoAAFQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "Ijfjtw==", "port": 30125 }, { "ipAddressV4": "CoAAEw==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "iHFp6Q==", "port": 30126 }, { "ipAddressV4": "CoAAEg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IqsCkA==", "port": 30127 }, { "ipAddressV4": "CoAADw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "aMW9Aw==", "port": 30128 }, { "ipAddressV4": "CoAACw==", "port": 30128 }] }] }
node4 6.698s 2025-10-08 17:09:37.770 41 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 6.699s 2025-10-08 17:09:37.771 42 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node4 6.720s 2025-10-08 17:09:37.792 43 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 15c3434f2c6c307d961071b99a6d75d2913d49b399053546d5a91d65ca8564ed71fa78726db2d07d6dfdba79d15b3798
(root) ConsistencyTestingToolState / reveal-hazard-mirror-autumn
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate
  1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
  2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
node2 6.850s 2025-10-08 17:09:37.922 45 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node2 6.855s 2025-10-08 17:09:37.927 46 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node2 6.861s 2025-10-08 17:09:37.933 47 INFO STARTUP <<start-node-2>> ConsistencyTestingToolMain: init called in Main for node 2.
node2 6.861s 2025-10-08 17:09:37.933 48 INFO STARTUP <<start-node-2>> SwirldsPlatform: Starting platform 2
node2 6.863s 2025-10-08 17:09:37.935 49 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node2 6.866s 2025-10-08 17:09:37.938 50 INFO STARTUP <<start-node-2>> CycleFinder: No cyclical back pressure detected in wiring model.
node2 6.867s 2025-10-08 17:09:37.939 51 INFO STARTUP <<start-node-2>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node2 6.868s 2025-10-08 17:09:37.940 52 INFO STARTUP <<start-node-2>> InputWireChecks: All input wires have been bound.
node2 6.870s 2025-10-08 17:09:37.942 53 WARN STARTUP <<start-node-2>> PcesFileTracker: No preconsensus event files available
node2 6.871s 2025-10-08 17:09:37.943 54 INFO STARTUP <<start-node-2>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node2 6.872s 2025-10-08 17:09:37.944 55 INFO STARTUP <<start-node-2>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node2 6.873s 2025-10-08 17:09:37.945 56 INFO STARTUP <<app: appMain 2>> ConsistencyTestingToolMain: run called in Main.
node2 6.875s 2025-10-08 17:09:37.947 57 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 186.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node2 6.880s 2025-10-08 17:09:37.952 58 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node4 6.957s 2025-10-08 17:09:38.029 45 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node4 6.964s 2025-10-08 17:09:38.036 46 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node4 6.971s 2025-10-08 17:09:38.043 47 INFO STARTUP <<start-node-4>> ConsistencyTestingToolMain: init called in Main for node 4.
node4 6.971s 2025-10-08 17:09:38.043 48 INFO STARTUP <<start-node-4>> SwirldsPlatform: Starting platform 4
node4 6.972s 2025-10-08 17:09:38.044 49 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node4 6.976s 2025-10-08 17:09:38.048 50 INFO STARTUP <<start-node-4>> CycleFinder: No cyclical back pressure detected in wiring model.
node4 6.977s 2025-10-08 17:09:38.049 51 INFO STARTUP <<start-node-4>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node4 6.977s 2025-10-08 17:09:38.049 52 INFO STARTUP <<start-node-4>> InputWireChecks: All input wires have been bound.
node4 6.979s 2025-10-08 17:09:38.051 53 WARN STARTUP <<start-node-4>> PcesFileTracker: No preconsensus event files available
node4 6.979s 2025-10-08 17:09:38.051 54 INFO STARTUP <<start-node-4>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node4 6.981s 2025-10-08 17:09:38.053 55 INFO STARTUP <<start-node-4>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node4 6.982s 2025-10-08 17:09:38.054 56 INFO STARTUP <<app: appMain 4>> ConsistencyTestingToolMain: run called in Main.
node4 6.984s 2025-10-08 17:09:38.056 57 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 186.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node4 6.988s 2025-10-08 17:09:38.060 58 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 3.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node1 7.016s 2025-10-08 17:09:38.088 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 7.018s 2025-10-08 17:09:38.090 32 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node1 7.026s 2025-10-08 17:09:38.098 33 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node1 7.040s 2025-10-08 17:09:38.112 34 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 7.042s 2025-10-08 17:09:38.114 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 7.308s 2025-10-08 17:09:38.380 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 7.310s 2025-10-08 17:09:38.382 32 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node3 7.315s 2025-10-08 17:09:38.387 33 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node3 7.325s 2025-10-08 17:09:38.397 34 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 7.327s 2025-10-08 17:09:38.399 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 8.176s 2025-10-08 17:09:39.248 36 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26330969]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=246340, randomLong=6489068894666805480, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=53160, randomLong=-3318368198702935174, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1524260, data=35, exception=null]
OS Health Check Report - Complete (took 1029 ms)
node1 8.217s 2025-10-08 17:09:39.289 37 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node1 8.227s 2025-10-08 17:09:39.299 38 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node1 8.231s 2025-10-08 17:09:39.303 39 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node1 8.331s 2025-10-08 17:09:39.403 40 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Ih7XpQ==", "port": 30124 }, { "ipAddressV4": "CoAAFQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "Ijfjtw==", "port": 30125 }, { "ipAddressV4": "CoAAEw==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "iHFp6Q==", "port": 30126 }, { "ipAddressV4": "CoAAEg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IqsCkA==", "port": 30127 }, { "ipAddressV4": "CoAADw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "aMW9Aw==", "port": 30128 }, { "ipAddressV4": "CoAACw==", "port": 30128 }] }] }
node1 8.358s 2025-10-08 17:09:39.430 41 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/1/ConsistencyTestLog.csv
node1 8.358s 2025-10-08 17:09:39.430 42 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node1 8.375s 2025-10-08 17:09:39.447 43 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 15c3434f2c6c307d961071b99a6d75d2913d49b399053546d5a91d65ca8564ed71fa78726db2d07d6dfdba79d15b3798
(root) ConsistencyTestingToolState / reveal-hazard-mirror-autumn
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate
  1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
  2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
node3 8.438s 2025-10-08 17:09:39.510 36 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26242310]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=217490, randomLong=6861635022219821488, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=12750, randomLong=-5771704551271315586, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1480380, data=35, exception=null]
OS Health Check Report - Complete (took 1024 ms)
node3 8.470s 2025-10-08 17:09:39.542 37 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node3 8.480s 2025-10-08 17:09:39.552 38 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node3 8.483s 2025-10-08 17:09:39.555 39 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node3 8.568s 2025-10-08 17:09:39.640 40 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Ih7XpQ==", "port": 30124 }, { "ipAddressV4": "CoAAFQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "Ijfjtw==", "port": 30125 }, { "ipAddressV4": "CoAAEw==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "iHFp6Q==", "port": 30126 }, { "ipAddressV4": "CoAAEg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IqsCkA==", "port": 30127 }, { "ipAddressV4": "CoAADw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "aMW9Aw==", "port": 30128 }, { "ipAddressV4": "CoAACw==", "port": 30128 }] }] }
node3 8.593s 2025-10-08 17:09:39.665 41 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/3/ConsistencyTestLog.csv
node3 8.594s 2025-10-08 17:09:39.666 42 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node3 8.610s 2025-10-08 17:09:39.682 43 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 15c3434f2c6c307d961071b99a6d75d2913d49b399053546d5a91d65ca8564ed71fa78726db2d07d6dfdba79d15b3798
(root) ConsistencyTestingToolState / reveal-hazard-mirror-autumn
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 method-topple-elite-gate
  1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
  2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
node1 8.619s 2025-10-08 17:09:39.691 45 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node1 8.625s 2025-10-08 17:09:39.697 46 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node1 8.630s 2025-10-08 17:09:39.702 47 INFO STARTUP <<start-node-1>> ConsistencyTestingToolMain: init called in Main for node 1.
node1 8.631s 2025-10-08 17:09:39.703 48 INFO STARTUP <<start-node-1>> SwirldsPlatform: Starting platform 1
node1 8.632s 2025-10-08 17:09:39.704 49 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node1 8.636s 2025-10-08 17:09:39.708 50 INFO STARTUP <<start-node-1>> CycleFinder: No cyclical back pressure detected in wiring model.
node1 8.638s 2025-10-08 17:09:39.710 51 INFO STARTUP <<start-node-1>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node1 8.638s 2025-10-08 17:09:39.710 52 INFO STARTUP <<start-node-1>> InputWireChecks: All input wires have been bound.
node1 8.640s 2025-10-08 17:09:39.712 53 WARN STARTUP <<start-node-1>> PcesFileTracker: No preconsensus event files available
node1 8.640s 2025-10-08 17:09:39.712 54 INFO STARTUP <<start-node-1>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node1 8.643s 2025-10-08 17:09:39.715 55 INFO STARTUP <<start-node-1>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node1 8.644s 2025-10-08 17:09:39.716 56 INFO STARTUP <<app: appMain 1>> ConsistencyTestingToolMain: run called in Main.
node1 8.646s 2025-10-08 17:09:39.718 57 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 207.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node1 8.652s 2025-10-08 17:09:39.724 58 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING
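The StatusStateMachine lines above give a compact per-node timeline (STARTING_UP -> REPLAYING_EVENTS -> OBSERVING so far). A small, hypothetical helper like the one below can pull those transitions out of a log of this shape; the regular expression assumes only the message text visible here ("Platform spent X in OLD. Now in NEW") and is not part of the platform tooling.

```python
import re
from collections import defaultdict

# Matches StatusStateMachine transition messages as they appear in this log.
STATUS_RE = re.compile(
    r"^(?P<node>node\d+)\s.*StatusStateMachine: Platform spent "
    r"(?P<spent>[\d.]+ (?:ms|s)) in (?P<old>[A-Z_]+)\. Now in (?P<new>[A-Z_]+)"
)

def status_timeline(lines):
    """Group (old_status, new_status, time_spent) transitions by node id."""
    timeline = defaultdict(list)
    for line in lines:
        m = STATUS_RE.match(line)
        if m:
            timeline[m["node"]].append((m["old"], m["new"], m["spent"]))
    return dict(timeline)

# Two lines copied from this log:
sample = [
    "node1 8.646s 2025-10-08 17:09:39.718 57 INFO PLATFORM_STATUS <platformForkJoinThread-4> "
    "StatusStateMachine: Platform spent 207.0 ms in STARTING_UP. Now in REPLAYING_EVENTS",
    "node1 8.652s 2025-10-08 17:09:39.724 58 INFO PLATFORM_STATUS <platformForkJoinThread-4> "
    "StatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING",
]
print(status_timeline(sample))
# {'node1': [('STARTING_UP', 'REPLAYING_EVENTS', '207.0 ms'),
#            ('REPLAYING_EVENTS', 'OBSERVING', '4.0 ms')]}
```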
node3 8.853s 2025-10-08 17:09:39.925 45 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node3 8.859s 2025-10-08 17:09:39.931 46 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node3 8.864s 2025-10-08 17:09:39.936 47 INFO STARTUP <<start-node-3>> ConsistencyTestingToolMain: init called in Main for node 3.
node3 8.864s 2025-10-08 17:09:39.936 48 INFO STARTUP <<start-node-3>> SwirldsPlatform: Starting platform 3
node3 8.865s 2025-10-08 17:09:39.937 49 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node3 8.868s 2025-10-08 17:09:39.940 50 INFO STARTUP <<start-node-3>> CycleFinder: No cyclical back pressure detected in wiring model.
node3 8.869s 2025-10-08 17:09:39.941 51 INFO STARTUP <<start-node-3>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node3 8.870s 2025-10-08 17:09:39.942 52 INFO STARTUP <<start-node-3>> InputWireChecks: All input wires have been bound.
node3 8.871s 2025-10-08 17:09:39.943 53 WARN STARTUP <<start-node-3>> PcesFileTracker: No preconsensus event files available
node3 8.872s 2025-10-08 17:09:39.944 54 INFO STARTUP <<start-node-3>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node3 8.874s 2025-10-08 17:09:39.946 55 INFO STARTUP <<start-node-3>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node3 8.875s 2025-10-08 17:09:39.947 56 INFO STARTUP <<app: appMain 3>> ConsistencyTestingToolMain: run called in Main.
node3 8.876s 2025-10-08 17:09:39.948 57 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 205.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node3 8.881s 2025-10-08 17:09:39.953 58 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 3.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node0 9.033s 2025-10-08 17:09:40.105 59 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting0.csv' ]
node0 9.035s 2025-10-08 17:09:40.107 60 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node2 9.875s 2025-10-08 17:09:40.947 59 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting2.csv' ]
node2 9.879s 2025-10-08 17:09:40.951 60 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node4 9.986s 2025-10-08 17:09:41.058 59 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ]
node4 9.989s 2025-10-08 17:09:41.061 60 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node1 11.642s 2025-10-08 17:09:42.714 59 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting1.csv' ]
node1 11.645s 2025-10-08 17:09:42.717 60 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node3 11.880s 2025-10-08 17:09:42.952 59 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting3.csv' ]
node3 11.883s 2025-10-08 17:09:42.955 60 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node0 16.130s 2025-10-08 17:09:47.202 61 INFO PLATFORM_STATUS <platformForkJoinThread-6> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node2 16.969s 2025-10-08 17:09:48.041 61 INFO PLATFORM_STATUS <platformForkJoinThread-5> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node4 17.079s 2025-10-08 17:09:48.151 61 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node1 18.740s 2025-10-08 17:09:49.812 61 INFO PLATFORM_STATUS <platformForkJoinThread-7> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node3 18.970s 2025-10-08 17:09:50.042 61 INFO PLATFORM_STATUS <platformForkJoinThread-5> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
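The second column of every line appears to be the time since that node's own start, written as "92.000ms", "16.130s" or "1m 29.755s", which is why the five nodes cross the OBSERVING-to-CHECKING boundary at different elapsed values above even though the wall-clock gap is small. The converter below is a throwaway sketch for turning that column into seconds when comparing nodes; it accepts only the formats visible in this log.

```python
import re

# Elapsed-time column formats visible in this log: "92.000ms", "16.130s", "1m 29.755s".
ELAPSED_RE = re.compile(r"^(?:(?P<minutes>\d+)m )?(?P<value>[\d.]+)(?P<unit>ms|s)$")

def elapsed_seconds(text: str) -> float:
    """Convert an elapsed-time column value into seconds."""
    m = ELAPSED_RE.match(text.strip())
    if m is None:
        raise ValueError(f"unrecognised elapsed time: {text!r}")
    seconds = float(m["value"]) / (1000.0 if m["unit"] == "ms" else 1.0)
    return (int(m["minutes"]) if m["minutes"] else 0) * 60 + seconds

print(elapsed_seconds("16.130s"))     # 16.13
print(elapsed_seconds("1m 29.755s"))  # 89.755
```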
node0 20.512s 2025-10-08 17:09:51.584 62 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 4.4 s in CHECKING. Now in ACTIVE
node0 20.514s 2025-10-08 17:09:51.586 64 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node3 20.535s 2025-10-08 17:09:51.607 63 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node2 20.557s 2025-10-08 17:09:51.629 63 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 3.6 s in CHECKING. Now in ACTIVE
node2 20.557s 2025-10-08 17:09:51.629 64 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node1 20.582s 2025-10-08 17:09:51.654 62 INFO PLATFORM_STATUS <platformForkJoinThread-6> StatusStateMachine: Platform spent 1.8 s in CHECKING. Now in ACTIVE
node1 20.586s 2025-10-08 17:09:51.658 64 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node4 20.620s 2025-10-08 17:09:51.692 62 INFO PLATFORM_STATUS <platformForkJoinThread-6> StatusStateMachine: Platform spent 3.5 s in CHECKING. Now in ACTIVE
node4 20.622s 2025-10-08 17:09:51.694 64 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node2 20.695s 2025-10-08 17:09:51.767 83 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1
node2 20.697s 2025-10-08 17:09:51.769 84 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node1 20.716s 2025-10-08 17:09:51.788 83 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1
node1 20.718s 2025-10-08 17:09:51.790 84 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node3 20.743s 2025-10-08 17:09:51.815 82 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1
node3 20.745s 2025-10-08 17:09:51.817 83 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node4 20.770s 2025-10-08 17:09:51.842 83 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1
node0 20.771s 2025-10-08 17:09:51.843 83 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1
node4 20.772s 2025-10-08 17:09:51.844 84 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node0 20.773s 2025-10-08 17:09:51.845 84 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node2 20.938s 2025-10-08 17:09:52.010 119 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node2 20.942s 2025-10-08 17:09:52.014 120 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1
Timestamp: 2025-10-08T17:09:48.161183756Z
Next consensus number: 1
Legacy running event hash: 0396ec12547177baf089dd4c3724ce39afdfd3e22f786f76699b48b94182e25a936beb39bcee46eedf7ae507c141aa46
Legacy running event mnemonic: index-lava-mixed-warrior
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1450302654
Root hash: e464074c368e28e3cf0a91601f6dee20cf8b11062d6e75bd9c4a89107de15351217bc1636878507b45a36e39c2bf4e3c
(root) ConsistencyTestingToolState / inform-trumpet-spike-broccoli
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 cargo-huge-ceiling-engine
  1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
  2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
  3 StringLeaf 1931016930446315563 /3 almost-atom-novel-view
  4 StringLeaf 1 /4 wreck-whale-old-bottom
node3 20.945s 2025-10-08 17:09:52.017 114 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 2.0 s in CHECKING. Now in ACTIVE
node1 20.965s 2025-10-08 17:09:52.037 119 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node1 20.968s 2025-10-08 17:09:52.040 120 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1
Timestamp: 2025-10-08T17:09:48.161183756Z
Next consensus number: 1
Legacy running event hash: 0396ec12547177baf089dd4c3724ce39afdfd3e22f786f76699b48b94182e25a936beb39bcee46eedf7ae507c141aa46
Legacy running event mnemonic: index-lava-mixed-warrior
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1450302654
Root hash: e464074c368e28e3cf0a91601f6dee20cf8b11062d6e75bd9c4a89107de15351217bc1636878507b45a36e39c2bf4e3c
(root) ConsistencyTestingToolState / inform-trumpet-spike-broccoli
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 cargo-huge-ceiling-engine
  1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
  2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
  3 StringLeaf 1931016930446315563 /3 almost-atom-novel-view
  4 StringLeaf 1 /4 wreck-whale-old-bottom
node2 20.980s 2025-10-08 17:09:52.052 123 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+09+47.471354396Z_seq0_minr1_maxr501_orgn0.pces
node2 20.980s 2025-10-08 17:09:52.052 124 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+09+47.471354396Z_seq0_minr1_maxr501_orgn0.pces
node2 20.981s 2025-10-08 17:09:52.053 125 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 20.982s 2025-10-08 17:09:52.054 126 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 20.988s 2025-10-08 17:09:52.060 127 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 20.995s 2025-10-08 17:09:52.067 121 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node0 20.998s 2025-10-08 17:09:52.070 121 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node3 20.999s 2025-10-08 17:09:52.071 122 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1
Timestamp: 2025-10-08T17:09:48.161183756Z
Next consensus number: 1
Legacy running event hash: 0396ec12547177baf089dd4c3724ce39afdfd3e22f786f76699b48b94182e25a936beb39bcee46eedf7ae507c141aa46
Legacy running event mnemonic: index-lava-mixed-warrior
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1450302654
Root hash: e464074c368e28e3cf0a91601f6dee20cf8b11062d6e75bd9c4a89107de15351217bc1636878507b45a36e39c2bf4e3c
(root) ConsistencyTestingToolState / inform-trumpet-spike-broccoli
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 cargo-huge-ceiling-engine
  1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
  2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
  3 StringLeaf 1931016930446315563 /3 almost-atom-novel-view
  4 StringLeaf 1 /4 wreck-whale-old-bottom
node0 21.001s 2025-10-08 17:09:52.073 122 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1
Timestamp: 2025-10-08T17:09:48.161183756Z
Next consensus number: 1
Legacy running event hash: 0396ec12547177baf089dd4c3724ce39afdfd3e22f786f76699b48b94182e25a936beb39bcee46eedf7ae507c141aa46
Legacy running event mnemonic: index-lava-mixed-warrior
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1450302654
Root hash: e464074c368e28e3cf0a91601f6dee20cf8b11062d6e75bd9c4a89107de15351217bc1636878507b45a36e39c2bf4e3c
(root) ConsistencyTestingToolState / inform-trumpet-spike-broccoli
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 cargo-huge-ceiling-engine
  1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
  2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
  3 StringLeaf 1931016930446315563 /3 almost-atom-novel-view
  4 StringLeaf 1 /4 wreck-whale-old-bottom
node1 21.009s 2025-10-08 17:09:52.081 121 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+09+47.352109665Z_seq0_minr1_maxr501_orgn0.pces
node1 21.010s 2025-10-08 17:09:52.082 123 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+09+47.352109665Z_seq0_minr1_maxr501_orgn0.pces
node1 21.010s 2025-10-08 17:09:52.082 124 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 21.011s 2025-10-08 17:09:52.083 126 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 21.017s 2025-10-08 17:09:52.089 127 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 21.019s 2025-10-08 17:09:52.091 119 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for round 1
node4 21.023s 2025-10-08 17:09:52.095 120 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1
Timestamp: 2025-10-08T17:09:48.161183756Z
Next consensus number: 1
Legacy running event hash: 0396ec12547177baf089dd4c3724ce39afdfd3e22f786f76699b48b94182e25a936beb39bcee46eedf7ae507c141aa46
Legacy running event mnemonic: index-lava-mixed-warrior
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1450302654
Root hash: e464074c368e28e3cf0a91601f6dee20cf8b11062d6e75bd9c4a89107de15351217bc1636878507b45a36e39c2bf4e3c
(root) ConsistencyTestingToolState / inform-trumpet-spike-broccoli
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 cargo-huge-ceiling-engine
  1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
  2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
  3 StringLeaf 1931016930446315563 /3 almost-atom-novel-view
  4 StringLeaf 1 /4 wreck-whale-old-bottom
node3 21.034s 2025-10-08 17:09:52.106 123 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+09+47.613526995Z_seq0_minr1_maxr501_orgn0.pces
node3 21.035s 2025-10-08 17:09:52.107 124 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+09+47.613526995Z_seq0_minr1_maxr501_orgn0.pces
node3 21.035s 2025-10-08 17:09:52.107 125 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 21.036s 2025-10-08 17:09:52.108 126 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 21.038s 2025-10-08 17:09:52.110 123 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+09+47.236371345Z_seq0_minr1_maxr501_orgn0.pces
node0 21.039s 2025-10-08 17:09:52.111 124 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+09+47.236371345Z_seq0_minr1_maxr501_orgn0.pces
node0 21.039s 2025-10-08 17:09:52.111 125 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 21.040s 2025-10-08 17:09:52.112 126 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 21.042s 2025-10-08 17:09:52.114 127 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 21.045s 2025-10-08 17:09:52.117 127 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 21.063s 2025-10-08 17:09:52.135 123 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+09+47.403893333Z_seq0_minr1_maxr501_orgn0.pces
node4 21.063s 2025-10-08 17:09:52.135 124 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+09+47.403893333Z_seq0_minr1_maxr501_orgn0.pces
node4 21.064s 2025-10-08 17:09:52.136 125 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 21.065s 2025-10-08 17:09:52.137 126 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 21.072s 2025-10-08 17:09:52.144 127 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
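All five "Information for state written to disk" blocks above report the same round-1 root hash (e464074c...bf4e3c), which is the expected picture for a healthy network in this consistency test. A quick cross-check of that property can be scripted as below; the helper is hypothetical and assumes only the "Round:" and "Root hash:" fields shown in those blocks.

```python
import re
from collections import defaultdict

STATE_RE = re.compile(r"Round: (?P<round>\d+).*?Root hash: (?P<hash>[0-9a-f]+)", re.S)

def root_hashes_by_round(dumps_by_node):
    """Map round number -> set of distinct root hashes reported across nodes.
    A healthy network shows exactly one hash per round."""
    seen = defaultdict(set)
    for node, text in dumps_by_node.items():
        for m in STATE_RE.finditer(text):
            seen[int(m["round"])].add(m["hash"])
    return dict(seen)

# Hash copied from the round-1 blocks above; all five nodes report the same value.
H1 = "e464074c368e28e3cf0a91601f6dee20cf8b11062d6e75bd9c4a89107de15351217bc1636878507b45a36e39c2bf4e3c"
dumps = {
    "node0": f"Round: 1 ... Root hash: {H1} (root) ...",
    "node2": f"Round: 1 ... Root hash: {H1} (root) ...",
}
print(root_hashes_by_round(dumps))
# -> {1: {<the round-1 hash>}}: a single hash per round, i.e. no divergence
```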
node0 30.058s 2025-10-08 17:10:01.130 340 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 24 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 30.162s 2025-10-08 17:10:01.234 340 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 24 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 30.242s 2025-10-08 17:10:01.314 344 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 24 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 30.256s 2025-10-08 17:10:01.328 346 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 24 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 30.272s 2025-10-08 17:10:01.344 346 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 24 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 30.382s 2025-10-08 17:10:01.454 348 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 24 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/24
node1 30.384s 2025-10-08 17:10:01.456 349 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 24
node3 30.438s 2025-10-08 17:10:01.510 348 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 24 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/24
node3 30.439s 2025-10-08 17:10:01.511 349 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 24
node0 30.471s 2025-10-08 17:10:01.543 352 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 24 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/24
node0 30.472s 2025-10-08 17:10:01.544 353 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 24
node2 30.508s 2025-10-08 17:10:01.580 356 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 24 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/24
node2 30.509s 2025-10-08 17:10:01.581 357 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 24
node1 30.520s 2025-10-08 17:10:01.592 400 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 24
node1 30.523s 2025-10-08 17:10:01.595 401 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 24
Timestamp: 2025-10-08T17:10:00.102656Z
Next consensus number: 790
Legacy running event hash: b0104eedec2ef2f2ad8947ed5bc4b3b6d91e3e2d739afbbdbb652971004c788401fbb58168cfa492b59329f605cf7946
Legacy running event mnemonic: wrap-tank-kid-peace
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 281955040
Root hash: 6015e96d847d6fef183f6fd8ef7bf3c8dab3741ef87e465ff515f9a7c7d58b142d660833bc5d7ce47c57c2c40973db2e
(root) ConsistencyTestingToolState / actor-sponsor-idea-worth
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 coil-what-shoulder-exit
  1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
  2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
  3 StringLeaf -465351902112457667 /3 explain-purpose-kidney-mind
  4 StringLeaf 24 /4 senior-strong-climb-unaware
node3 30.528s 2025-10-08 17:10:01.600 398 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 24
node3 30.531s 2025-10-08 17:10:01.603 399 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 24
Timestamp: 2025-10-08T17:10:00.102656Z
Next consensus number: 790
Legacy running event hash: b0104eedec2ef2f2ad8947ed5bc4b3b6d91e3e2d739afbbdbb652971004c788401fbb58168cfa492b59329f605cf7946
Legacy running event mnemonic: wrap-tank-kid-peace
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 281955040
Root hash: 6015e96d847d6fef183f6fd8ef7bf3c8dab3741ef87e465ff515f9a7c7d58b142d660833bc5d7ce47c57c2c40973db2e
(root) ConsistencyTestingToolState / actor-sponsor-idea-worth
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 coil-what-shoulder-exit
  1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
  2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
  3 StringLeaf -465351902112457667 /3 explain-purpose-kidney-mind
  4 StringLeaf 24 /4 senior-strong-climb-unaware
node1 30.533s 2025-10-08 17:10:01.605 402 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+09+47.352109665Z_seq0_minr1_maxr501_orgn0.pces
node1 30.533s 2025-10-08 17:10:01.605 403 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+09+47.352109665Z_seq0_minr1_maxr501_orgn0.pces
node1 30.533s 2025-10-08 17:10:01.605 404 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 30.534s 2025-10-08 17:10:01.606 405 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 30.535s 2025-10-08 17:10:01.607 406 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 24 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/24 {"round":24,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/24/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 30.542s 2025-10-08 17:10:01.614 400 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+09+47.613526995Z_seq0_minr1_maxr501_orgn0.pces
node3 30.543s 2025-10-08 17:10:01.615 401 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+09+47.613526995Z_seq0_minr1_maxr501_orgn0.pces
node3 30.543s 2025-10-08 17:10:01.615 402 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 30.544s 2025-10-08 17:10:01.616 403 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 30.545s 2025-10-08 17:10:01.617 404 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 24 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/24 {"round":24,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/24/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 30.556s 2025-10-08 17:10:01.628 384 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 24
node0 30.559s 2025-10-08 17:10:01.631 385 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 24
Timestamp: 2025-10-08T17:10:00.102656Z
Next consensus number: 790
Legacy running event hash: b0104eedec2ef2f2ad8947ed5bc4b3b6d91e3e2d739afbbdbb652971004c788401fbb58168cfa492b59329f605cf7946
Legacy running event mnemonic: wrap-tank-kid-peace
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 281955040
Root hash: 6015e96d847d6fef183f6fd8ef7bf3c8dab3741ef87e465ff515f9a7c7d58b142d660833bc5d7ce47c57c2c40973db2e
(root) ConsistencyTestingToolState / actor-sponsor-idea-worth
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 coil-what-shoulder-exit
  1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
  2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
  3 StringLeaf -465351902112457667 /3 explain-purpose-kidney-mind
  4 StringLeaf 24 /4 senior-strong-climb-unaware
node0 30.567s 2025-10-08 17:10:01.639 386 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+09+47.236371345Z_seq0_minr1_maxr501_orgn0.pces
node0 30.568s 2025-10-08 17:10:01.640 387 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+09+47.236371345Z_seq0_minr1_maxr501_orgn0.pces
node0 30.568s 2025-10-08 17:10:01.640 388 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 30.569s 2025-10-08 17:10:01.641 389 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 30.570s 2025-10-08 17:10:01.642 390 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 24 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/24 {"round":24,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/24/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 30.594s 2025-10-08 17:10:01.666 352 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 24 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/24
node4 30.595s 2025-10-08 17:10:01.667 353 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 24
node2 30.598s 2025-10-08 17:10:01.670 388 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 24
node2 30.601s 2025-10-08 17:10:01.673 389 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 24
Timestamp: 2025-10-08T17:10:00.102656Z
Next consensus number: 790
Legacy running event hash: b0104eedec2ef2f2ad8947ed5bc4b3b6d91e3e2d739afbbdbb652971004c788401fbb58168cfa492b59329f605cf7946
Legacy running event mnemonic: wrap-tank-kid-peace
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 281955040
Root hash: 6015e96d847d6fef183f6fd8ef7bf3c8dab3741ef87e465ff515f9a7c7d58b142d660833bc5d7ce47c57c2c40973db2e
(root) ConsistencyTestingToolState / actor-sponsor-idea-worth
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 coil-what-shoulder-exit
  1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
  2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
  3 StringLeaf -465351902112457667 /3 explain-purpose-kidney-mind
  4 StringLeaf 24 /4 senior-strong-climb-unaware
node2 30.611s 2025-10-08 17:10:01.683 392 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+09+47.471354396Z_seq0_minr1_maxr501_orgn0.pces
node2 30.611s 2025-10-08 17:10:01.683 393 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+09+47.471354396Z_seq0_minr1_maxr501_orgn0.pces
node2 30.612s 2025-10-08 17:10:01.684 394 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 30.613s 2025-10-08 17:10:01.685 395 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 30.613s 2025-10-08 17:10:01.685 396 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 24 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/24 {"round":24,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/24/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 30.685s 2025-10-08 17:10:01.757 386 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for round 24
node4 30.689s 2025-10-08 17:10:01.761 387 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 24
Timestamp: 2025-10-08T17:10:00.102656Z
Next consensus number: 790
Legacy running event hash: b0104eedec2ef2f2ad8947ed5bc4b3b6d91e3e2d739afbbdbb652971004c788401fbb58168cfa492b59329f605cf7946
Legacy running event mnemonic: wrap-tank-kid-peace
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 281955040
Root hash: 6015e96d847d6fef183f6fd8ef7bf3c8dab3741ef87e465ff515f9a7c7d58b142d660833bc5d7ce47c57c2c40973db2e
(root) ConsistencyTestingToolState / actor-sponsor-idea-worth
  0 SingletonNode PlatformStateService.PLATFORM_STATE /0 coil-what-shoulder-exit
  1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
  2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
  3 StringLeaf -465351902112457667 /3 explain-purpose-kidney-mind
  4 StringLeaf 24 /4 senior-strong-climb-unaware
node4 30.700s 2025-10-08 17:10:01.772 388 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+09+47.403893333Z_seq0_minr1_maxr501_orgn0.pces
node4 30.701s 2025-10-08 17:10:01.773 389 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+09+47.403893333Z_seq0_minr1_maxr501_orgn0.pces
node4 30.701s 2025-10-08 17:10:01.773 390 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 30.702s 2025-10-08 17:10:01.774 391 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 30.703s 2025-10-08 17:10:01.775 392 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 24 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/24 {"round":24,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/24/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 1m 29.755s 2025-10-08 17:11:00.827 1767 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 150 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 1m 29.756s 2025-10-08 17:11:00.828 1795 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 150 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 1m 29.796s 2025-10-08 17:11:00.868 1787 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 150 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 1m 29.888s 2025-10-08 17:11:00.960 1761 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 150 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 1m 29.904s 2025-10-08 17:11:00.976 1785 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 150 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 1m 29.947s 2025-10-08 17:11:01.019 1764 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 150 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/150
node3 1m 29.948s 2025-10-08 17:11:01.020 1765 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 150
node1 1m 29.975s 2025-10-08 17:11:01.047 1788 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 150 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/150
node1 1m 29.976s 2025-10-08 17:11:01.048 1789 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 150
node3 1m 30.030s 2025-10-08 17:11:01.102 1796 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 150
node3 1m 30.032s 2025-10-08 17:11:01.104 1797 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 150
Timestamp: 2025-10-08T17:11:00.020099Z
Next consensus number: 5608
Legacy running event hash: c5ba8377ee6ec178a89e6de4f84ee67f5137b760060d13cdd93bf7aafff252543cb9f0f79590de0c3108f6a0cec9cdfa
Legacy running event mnemonic: emotion-muffin-crane-speed
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 860905927
Root hash: 15e7e8f09512c9988c2353f703817f0261ee91aa0d1a599a18d229fa342e1d557babcc62e5b12bd65b6715296cf89795
(root) ConsistencyTestingToolState  /  rude-tilt-fruit-snack
    0 SingletonNode PlatformStateService.PLATFORM_STATE  /0  believe-spare-base-trap
    1 SingletonNode RosterService.ROSTER_STATE  /1  tumble-install-behave-ticket
    2 VirtualMap RosterService.ROSTERS  /2  soap-hamster-shed-draft
    3 StringLeaf 3675143359951095532  /3  dinner-scorpion-physical-teach
    4 StringLeaf 150  /4  dish-fringe-rotate-already
node3 1m 30.042s 2025-10-08 17:11:01.114 1798 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+09+47.613526995Z_seq0_minr1_maxr501_orgn0.pces
node3 1m 30.042s 2025-10-08 17:11:01.114 1799 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 123 File: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+09+47.613526995Z_seq0_minr1_maxr501_orgn0.pces
node3 1m 30.042s 2025-10-08 17:11:01.114 1800 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 1m 30.046s 2025-10-08 17:11:01.118 1801 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 1m 30.047s 2025-10-08 17:11:01.119 1802 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 150 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/150 {"round":150,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/150/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 1m 30.058s 2025-10-08 17:11:01.130 1828 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 150
node1 1m 30.060s 2025-10-08 17:11:01.132 1829 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 150
Timestamp: 2025-10-08T17:11:00.020099Z
Next consensus number: 5608
Legacy running event hash: c5ba8377ee6ec178a89e6de4f84ee67f5137b760060d13cdd93bf7aafff252543cb9f0f79590de0c3108f6a0cec9cdfa
Legacy running event mnemonic: emotion-muffin-crane-speed
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 860905927
Root hash: 15e7e8f09512c9988c2353f703817f0261ee91aa0d1a599a18d229fa342e1d557babcc62e5b12bd65b6715296cf89795
(root) ConsistencyTestingToolState  /  rude-tilt-fruit-snack
    0 SingletonNode PlatformStateService.PLATFORM_STATE  /0  believe-spare-base-trap
    1 SingletonNode RosterService.ROSTER_STATE  /1  tumble-install-behave-ticket
    2 VirtualMap RosterService.ROSTERS  /2  soap-hamster-shed-draft
    3 StringLeaf 3675143359951095532  /3  dinner-scorpion-physical-teach
    4 StringLeaf 150  /4  dish-fringe-rotate-already
node1 1m 30.070s 2025-10-08 17:11:01.142 1830 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+09+47.352109665Z_seq0_minr1_maxr501_orgn0.pces
node1 1m 30.070s 2025-10-08 17:11:01.142 1831 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 123 File: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+09+47.352109665Z_seq0_minr1_maxr501_orgn0.pces
node1 1m 30.070s 2025-10-08 17:11:01.142 1832 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 1m 30.074s 2025-10-08 17:11:01.146 1833 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 1m 30.075s 2025-10-08 17:11:01.147 1834 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 150 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/150 {"round":150,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/150/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 1m 30.108s 2025-10-08 17:11:01.180 1780 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 150 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/150
node0 1m 30.109s 2025-10-08 17:11:01.181 1781 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 150
node2 1m 30.121s 2025-10-08 17:11:01.193 1808 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 150 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/150
node2 1m 30.122s 2025-10-08 17:11:01.194 1809 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 150
node0 1m 30.184s 2025-10-08 17:11:01.256 1812 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 150
node0 1m 30.186s 2025-10-08 17:11:01.258 1813 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 150
Timestamp: 2025-10-08T17:11:00.020099Z
Next consensus number: 5608
Legacy running event hash: c5ba8377ee6ec178a89e6de4f84ee67f5137b760060d13cdd93bf7aafff252543cb9f0f79590de0c3108f6a0cec9cdfa
Legacy running event mnemonic: emotion-muffin-crane-speed
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 860905927
Root hash: 15e7e8f09512c9988c2353f703817f0261ee91aa0d1a599a18d229fa342e1d557babcc62e5b12bd65b6715296cf89795
(root) ConsistencyTestingToolState  /  rude-tilt-fruit-snack
    0 SingletonNode PlatformStateService.PLATFORM_STATE  /0  believe-spare-base-trap
    1 SingletonNode RosterService.ROSTER_STATE  /1  tumble-install-behave-ticket
    2 VirtualMap RosterService.ROSTERS  /2  soap-hamster-shed-draft
    3 StringLeaf 3675143359951095532  /3  dinner-scorpion-physical-teach
    4 StringLeaf 150  /4  dish-fringe-rotate-already
node0 1m 30.195s 2025-10-08 17:11:01.267 1815 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+09+47.236371345Z_seq0_minr1_maxr501_orgn0.pces
node0 1m 30.195s 2025-10-08 17:11:01.267 1816 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 123 File: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+09+47.236371345Z_seq0_minr1_maxr501_orgn0.pces
node0 1m 30.195s 2025-10-08 17:11:01.267 1817 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 1m 30.198s 2025-10-08 17:11:01.270 1790 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 150 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/150
node0 1m 30.199s 2025-10-08 17:11:01.271 1820 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 1m 30.199s 2025-10-08 17:11:01.271 1840 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 150
node4 1m 30.199s 2025-10-08 17:11:01.271 1791 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 150
node0 1m 30.200s 2025-10-08 17:11:01.272 1821 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 150 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/150 {"round":150,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/150/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 1m 30.202s 2025-10-08 17:11:01.274 1841 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 150
Timestamp: 2025-10-08T17:11:00.020099Z
Next consensus number: 5608
Legacy running event hash: c5ba8377ee6ec178a89e6de4f84ee67f5137b760060d13cdd93bf7aafff252543cb9f0f79590de0c3108f6a0cec9cdfa
Legacy running event mnemonic: emotion-muffin-crane-speed
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 860905927
Root hash: 15e7e8f09512c9988c2353f703817f0261ee91aa0d1a599a18d229fa342e1d557babcc62e5b12bd65b6715296cf89795
(root) ConsistencyTestingToolState  /  rude-tilt-fruit-snack
    0 SingletonNode PlatformStateService.PLATFORM_STATE  /0  believe-spare-base-trap
    1 SingletonNode RosterService.ROSTER_STATE  /1  tumble-install-behave-ticket
    2 VirtualMap RosterService.ROSTERS  /2  soap-hamster-shed-draft
    3 StringLeaf 3675143359951095532  /3  dinner-scorpion-physical-teach
    4 StringLeaf 150  /4  dish-fringe-rotate-already
node2 1m 30.210s 2025-10-08 17:11:01.282 1842 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+09+47.471354396Z_seq0_minr1_maxr501_orgn0.pces
node2 1m 30.210s 2025-10-08 17:11:01.282 1843 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 123 File: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+09+47.471354396Z_seq0_minr1_maxr501_orgn0.pces
node2 1m 30.210s 2025-10-08 17:11:01.282 1844 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 1m 30.214s 2025-10-08 17:11:01.286 1845 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 1m 30.215s 2025-10-08 17:11:01.287 1846 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 150 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/150 {"round":150,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/150/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 1m 30.282s 2025-10-08 17:11:01.354 1825 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for round 150
node4 1m 30.284s 2025-10-08 17:11:01.356 1826 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 150
Timestamp: 2025-10-08T17:11:00.020099Z
Next consensus number: 5608
Legacy running event hash: c5ba8377ee6ec178a89e6de4f84ee67f5137b760060d13cdd93bf7aafff252543cb9f0f79590de0c3108f6a0cec9cdfa
Legacy running event mnemonic: emotion-muffin-crane-speed
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 860905927
Root hash: 15e7e8f09512c9988c2353f703817f0261ee91aa0d1a599a18d229fa342e1d557babcc62e5b12bd65b6715296cf89795
(root) ConsistencyTestingToolState  /  rude-tilt-fruit-snack
    0 SingletonNode PlatformStateService.PLATFORM_STATE  /0  believe-spare-base-trap
    1 SingletonNode RosterService.ROSTER_STATE  /1  tumble-install-behave-ticket
    2 VirtualMap RosterService.ROSTERS  /2  soap-hamster-shed-draft
    3 StringLeaf 3675143359951095532  /3  dinner-scorpion-physical-teach
    4 StringLeaf 150  /4  dish-fringe-rotate-already
node4 1m 30.294s 2025-10-08 17:11:01.366 1827 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+09+47.403893333Z_seq0_minr1_maxr501_orgn0.pces
node4 1m 30.294s 2025-10-08 17:11:01.366 1828 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 123 File: data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+09+47.403893333Z_seq0_minr1_maxr501_orgn0.pces
node4 1m 30.294s 2025-10-08 17:11:01.366 1829 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 1m 30.298s 2025-10-08 17:11:01.370 1830 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 1m 30.299s 2025-10-08 17:11:01.371 1831 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 150 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/150 {"round":150,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/150/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 2m 30.285s 2025-10-08 17:12:01.357 3210 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 275 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 2m 30.384s 2025-10-08 17:12:01.456 3186 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 275 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 2m 30.427s 2025-10-08 17:12:01.499 3180 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 275 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 2m 30.446s 2025-10-08 17:12:01.518 3194 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 275 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 2m 30.454s 2025-10-08 17:12:01.526 3228 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 275 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 2m 30.613s 2025-10-08 17:12:01.685 3189 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 275 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/275
node4 2m 30.614s 2025-10-08 17:12:01.686 3190 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 275
node3 2m 30.618s 2025-10-08 17:12:01.690 3183 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 275 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/275
node3 2m 30.619s 2025-10-08 17:12:01.691 3184 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 275
node1 2m 30.669s 2025-10-08 17:12:01.741 3213 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 275 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/275
node1 2m 30.670s 2025-10-08 17:12:01.742 3214 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 275
node0 2m 30.694s 2025-10-08 17:12:01.766 3197 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 275 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/275
node0 2m 30.695s 2025-10-08 17:12:01.767 3198 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 275
node4 2m 30.702s 2025-10-08 17:12:01.774 3221 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 275
node3 2m 30.705s 2025-10-08 17:12:01.777 3223 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 275
node4 2m 30.705s 2025-10-08 17:12:01.777 3222 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 275
Timestamp: 2025-10-08T17:12:00.490748Z
Next consensus number: 10445
Legacy running event hash: 20fb655c87b812e9c26ea497114ad4ab9e643ac72e851844c06f2aeff06dd6ca4a4e2c91f62312758ddd2ff825f78a64
Legacy running event mnemonic: buyer-process-animal-frog
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1901345333
Root hash: 42276951541649b95d1f6f87424d826eb074a29414f99cd40961eb3f6b1b8bdd3a6431d6e0f88d6e78241b993e626726
(root) ConsistencyTestingToolState  /  eager-clay-lizard-damp
    0 SingletonNode PlatformStateService.PLATFORM_STATE  /0  spare-soul-bullet-fossil
    1 SingletonNode RosterService.ROSTER_STATE  /1  tumble-install-behave-ticket
    2 VirtualMap RosterService.ROSTERS  /2  soap-hamster-shed-draft
    3 StringLeaf -2265026351569228580  /3  avocado-evolve-dragon-below
    4 StringLeaf 275  /4  salon-table-fitness-unusual
node3 2m 30.707s 2025-10-08 17:12:01.779 3224 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 275
Timestamp: 2025-10-08T17:12:00.490748Z
Next consensus number: 10445
Legacy running event hash: 20fb655c87b812e9c26ea497114ad4ab9e643ac72e851844c06f2aeff06dd6ca4a4e2c91f62312758ddd2ff825f78a64
Legacy running event mnemonic: buyer-process-animal-frog
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1901345333
Root hash: 42276951541649b95d1f6f87424d826eb074a29414f99cd40961eb3f6b1b8bdd3a6431d6e0f88d6e78241b993e626726
(root) ConsistencyTestingToolState  /  eager-clay-lizard-damp
    0 SingletonNode PlatformStateService.PLATFORM_STATE  /0  spare-soul-bullet-fossil
    1 SingletonNode RosterService.ROSTER_STATE  /1  tumble-install-behave-ticket
    2 VirtualMap RosterService.ROSTERS  /2  soap-hamster-shed-draft
    3 StringLeaf -2265026351569228580  /3  avocado-evolve-dragon-below
    4 StringLeaf 275  /4  salon-table-fitness-unusual
node3 2m 30.714s 2025-10-08 17:12:01.786 3225 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+09+47.613526995Z_seq0_minr1_maxr501_orgn0.pces
node3 2m 30.715s 2025-10-08 17:12:01.787 3226 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 246 File: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+09+47.613526995Z_seq0_minr1_maxr501_orgn0.pces
node3 2m 30.715s 2025-10-08 17:12:01.787 3227 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 2m 30.716s 2025-10-08 17:12:01.788 3223 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+09+47.403893333Z_seq0_minr1_maxr501_orgn0.pces
node4 2m 30.716s 2025-10-08 17:12:01.788 3224 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 246 File: data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+09+47.403893333Z_seq0_minr1_maxr501_orgn0.pces
node4 2m 30.716s 2025-10-08 17:12:01.788 3225 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 2m 30.723s 2025-10-08 17:12:01.795 3228 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 2m 30.724s 2025-10-08 17:12:01.796 3229 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 275 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/275 {"round":275,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/275/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 2m 30.724s 2025-10-08 17:12:01.796 3226 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 2m 30.724s 2025-10-08 17:12:01.796 3227 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 275 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/275 {"round":275,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/275/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 2m 30.736s 2025-10-08 17:12:01.808 3231 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 275 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/275
node2 2m 30.737s 2025-10-08 17:12:01.809 3232 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 275
node1 2m 30.751s 2025-10-08 17:12:01.823 3245 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 275
node1 2m 30.753s 2025-10-08 17:12:01.825 3246 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 275
Timestamp: 2025-10-08T17:12:00.490748Z
Next consensus number: 10445
Legacy running event hash: 20fb655c87b812e9c26ea497114ad4ab9e643ac72e851844c06f2aeff06dd6ca4a4e2c91f62312758ddd2ff825f78a64
Legacy running event mnemonic: buyer-process-animal-frog
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1901345333
Root hash: 42276951541649b95d1f6f87424d826eb074a29414f99cd40961eb3f6b1b8bdd3a6431d6e0f88d6e78241b993e626726
(root) ConsistencyTestingToolState  /  eager-clay-lizard-damp
    0 SingletonNode PlatformStateService.PLATFORM_STATE  /0  spare-soul-bullet-fossil
    1 SingletonNode RosterService.ROSTER_STATE  /1  tumble-install-behave-ticket
    2 VirtualMap RosterService.ROSTERS  /2  soap-hamster-shed-draft
    3 StringLeaf -2265026351569228580  /3  avocado-evolve-dragon-below
    4 StringLeaf 275  /4  salon-table-fitness-unusual
node1 2m 30.760s 2025-10-08 17:12:01.832 3247 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+09+47.352109665Z_seq0_minr1_maxr501_orgn0.pces
node1 2m 30.760s 2025-10-08 17:12:01.832 3248 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 246 File: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+09+47.352109665Z_seq0_minr1_maxr501_orgn0.pces
node1 2m 30.760s 2025-10-08 17:12:01.832 3249 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 2m 30.768s 2025-10-08 17:12:01.840 3250 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 2m 30.769s 2025-10-08 17:12:01.841 3229 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 275
node1 2m 30.769s 2025-10-08 17:12:01.841 3251 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 275 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/275 {"round":275,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/275/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 2m 30.771s 2025-10-08 17:12:01.843 3230 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 275
Timestamp: 2025-10-08T17:12:00.490748Z
Next consensus number: 10445
Legacy running event hash: 20fb655c87b812e9c26ea497114ad4ab9e643ac72e851844c06f2aeff06dd6ca4a4e2c91f62312758ddd2ff825f78a64
Legacy running event mnemonic: buyer-process-animal-frog
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1901345333
Root hash: 42276951541649b95d1f6f87424d826eb074a29414f99cd40961eb3f6b1b8bdd3a6431d6e0f88d6e78241b993e626726
(root) ConsistencyTestingToolState  /  eager-clay-lizard-damp
    0 SingletonNode PlatformStateService.PLATFORM_STATE  /0  spare-soul-bullet-fossil
    1 SingletonNode RosterService.ROSTER_STATE  /1  tumble-install-behave-ticket
    2 VirtualMap RosterService.ROSTERS  /2  soap-hamster-shed-draft
    3 StringLeaf -2265026351569228580  /3  avocado-evolve-dragon-below
    4 StringLeaf 275  /4  salon-table-fitness-unusual
node0 2m 30.779s 2025-10-08 17:12:01.851 3231 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+09+47.236371345Z_seq0_minr1_maxr501_orgn0.pces
node0 2m 30.779s 2025-10-08 17:12:01.851 3232 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 246 File: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+09+47.236371345Z_seq0_minr1_maxr501_orgn0.pces
node0 2m 30.779s 2025-10-08 17:12:01.851 3233 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 2m 30.787s 2025-10-08 17:12:01.859 3234 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 2m 30.787s 2025-10-08 17:12:01.859 3235 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 275 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/275 {"round":275,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/275/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 2m 30.814s 2025-10-08 17:12:01.886 3263 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for round 275
node2 2m 30.816s 2025-10-08 17:12:01.888 3264 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 275
Timestamp: 2025-10-08T17:12:00.490748Z
Next consensus number: 10445
Legacy running event hash: 20fb655c87b812e9c26ea497114ad4ab9e643ac72e851844c06f2aeff06dd6ca4a4e2c91f62312758ddd2ff825f78a64
Legacy running event mnemonic: buyer-process-animal-frog
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1901345333
Root hash: 42276951541649b95d1f6f87424d826eb074a29414f99cd40961eb3f6b1b8bdd3a6431d6e0f88d6e78241b993e626726
(root) ConsistencyTestingToolState  /  eager-clay-lizard-damp
    0 SingletonNode PlatformStateService.PLATFORM_STATE  /0  spare-soul-bullet-fossil
    1 SingletonNode RosterService.ROSTER_STATE  /1  tumble-install-behave-ticket
    2 VirtualMap RosterService.ROSTERS  /2  soap-hamster-shed-draft
    3 StringLeaf -2265026351569228580  /3  avocado-evolve-dragon-below
    4 StringLeaf 275  /4  salon-table-fitness-unusual
node2 2m 30.824s 2025-10-08 17:12:01.896 3265 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+09+47.471354396Z_seq0_minr1_maxr501_orgn0.pces
node2 2m 30.824s 2025-10-08 17:12:01.896 3266 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 246 File: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+09+47.471354396Z_seq0_minr1_maxr501_orgn0.pces
node2 2m 30.824s 2025-10-08 17:12:01.896 3267 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 2m 30.832s 2025-10-08 17:12:01.904 3268 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 2m 30.832s 2025-10-08 17:12:01.904 3269 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 275 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/275 {"round":275,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/275/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 3m 29.826s 2025-10-08 17:13:00.898 4716 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 405 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 3m 29.865s 2025-10-08 17:13:00.937 4730 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 405 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 3m 29.895s 2025-10-08 17:13:00.967 4690 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 405 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 3m 29.947s 2025-10-08 17:13:01.019 4648 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 405 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 3m 30.093s 2025-10-08 17:13:01.165 4651 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 405 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/405
node3 3m 30.094s 2025-10-08 17:13:01.166 4652 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 405
node1 3m 30.099s 2025-10-08 17:13:01.171 4729 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 405 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/405
node1 3m 30.100s 2025-10-08 17:13:01.172 4730 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 405
node0 3m 30.137s 2025-10-08 17:13:01.209 4693 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 405 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/405
node0 3m 30.138s 2025-10-08 17:13:01.210 4694 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 405
node2 3m 30.164s 2025-10-08 17:13:01.236 4743 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 405 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/405
node2 3m 30.165s 2025-10-08 17:13:01.237 4744 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 405
node3 3m 30.174s 2025-10-08 17:13:01.246 4683 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 405
node3 3m 30.176s 2025-10-08 17:13:01.248 4684 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 405
Timestamp: 2025-10-08T17:13:00.090819492Z
Next consensus number: 14781
Legacy running event hash: 602cd879586be437d59e8185fed1071979c22a24bb8863b344ac06de292da82882d7151b703856e10d1be5a0c96f1b63
Legacy running event mnemonic: airport-valley-aspect-junior
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 106497779
Root hash: b54eefcd9e32778c47b480f4f0f840afd191884ea02243ce12a20bdd234fd945031285bb378c4a07b6904d0218daafeb
(root) ConsistencyTestingToolState  /  film-all-degree-various
    0 SingletonNode PlatformStateService.PLATFORM_STATE  /0  feel-cupboard-anxiety-warm
    1 SingletonNode RosterService.ROSTER_STATE  /1  tumble-install-behave-ticket
    2 VirtualMap RosterService.ROSTERS  /2  soap-hamster-shed-draft
    3 StringLeaf -2881668016996462167  /3  judge-inherit-this-pig
    4 StringLeaf 405  /4  arrow-fossil-faculty-lounge
node1 3m 30.182s 2025-10-08 17:13:01.254 4761 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 405
node1 3m 30.184s 2025-10-08 17:13:01.256 4762 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 405
Timestamp: 2025-10-08T17:13:00.090819492Z
Next consensus number: 14781
Legacy running event hash: 602cd879586be437d59e8185fed1071979c22a24bb8863b344ac06de292da82882d7151b703856e10d1be5a0c96f1b63
Legacy running event mnemonic: airport-valley-aspect-junior
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 106497779
Root hash: b54eefcd9e32778c47b480f4f0f840afd191884ea02243ce12a20bdd234fd945031285bb378c4a07b6904d0218daafeb
(root) ConsistencyTestingToolState  /  film-all-degree-various
    0 SingletonNode PlatformStateService.PLATFORM_STATE  /0  feel-cupboard-anxiety-warm
    1 SingletonNode RosterService.ROSTER_STATE  /1  tumble-install-behave-ticket
    2 VirtualMap RosterService.ROSTERS  /2  soap-hamster-shed-draft
    3 StringLeaf -2881668016996462167  /3  judge-inherit-this-pig
    4 StringLeaf 405  /4  arrow-fossil-faculty-lounge
node3 3m 30.185s 2025-10-08 17:13:01.257 4685 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+09+47.613526995Z_seq0_minr1_maxr501_orgn0.pces
node3 3m 30.185s 2025-10-08 17:13:01.257 4686 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 378 File: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+09+47.613526995Z_seq0_minr1_maxr501_orgn0.pces
node3 3m 30.186s 2025-10-08 17:13:01.258 4687 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 3m 30.192s 2025-10-08 17:13:01.264 4763 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+09+47.352109665Z_seq0_minr1_maxr501_orgn0.pces
node1 3m 30.192s 2025-10-08 17:13:01.264 4764 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 378 File: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+09+47.352109665Z_seq0_minr1_maxr501_orgn0.pces
node1 3m 30.192s 2025-10-08 17:13:01.264 4765 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 3m 30.196s 2025-10-08 17:13:01.268 4688 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 3m 30.197s 2025-10-08 17:13:01.269 4689 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 405 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/405 {"round":405,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/405/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 3m 30.202s 2025-10-08 17:13:01.274 4766 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 3m 30.203s 2025-10-08 17:13:01.275 4767 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 405 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/405 {"round":405,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/405/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 3m 30.219s 2025-10-08 17:13:01.291 4725 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 405
node0 3m 30.220s 2025-10-08 17:13:01.292 4726 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 405
Timestamp: 2025-10-08T17:13:00.090819492Z
Next consensus number: 14781
Legacy running event hash: 602cd879586be437d59e8185fed1071979c22a24bb8863b344ac06de292da82882d7151b703856e10d1be5a0c96f1b63
Legacy running event mnemonic: airport-valley-aspect-junior
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 106497779
Root hash: b54eefcd9e32778c47b480f4f0f840afd191884ea02243ce12a20bdd234fd945031285bb378c4a07b6904d0218daafeb
(root) ConsistencyTestingToolState  /  film-all-degree-various
    0 SingletonNode PlatformStateService.PLATFORM_STATE  /0  feel-cupboard-anxiety-warm
    1 SingletonNode RosterService.ROSTER_STATE  /1  tumble-install-behave-ticket
    2 VirtualMap RosterService.ROSTERS  /2  soap-hamster-shed-draft
    3 StringLeaf -2881668016996462167  /3  judge-inherit-this-pig
    4 StringLeaf 405  /4  arrow-fossil-faculty-lounge
node0 3m 30.229s 2025-10-08 17:13:01.301 4735 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+09+47.236371345Z_seq0_minr1_maxr501_orgn0.pces
node0 3m 30.229s 2025-10-08 17:13:01.301 4736 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 378 File: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+09+47.236371345Z_seq0_minr1_maxr501_orgn0.pces
node0 3m 30.229s 2025-10-08 17:13:01.301 4737 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 3m 30.239s 2025-10-08 17:13:01.311 4738 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 3m 30.240s 2025-10-08 17:13:01.312 4739 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 405 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/405 {"round":405,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/405/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 3m 30.240s 2025-10-08 17:13:01.312 4778 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for round 405
node2 3m 30.242s 2025-10-08 17:13:01.314 4779 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 405
Timestamp: 2025-10-08T17:13:00.090819492Z
Next consensus number: 14781
Legacy running event hash: 602cd879586be437d59e8185fed1071979c22a24bb8863b344ac06de292da82882d7151b703856e10d1be5a0c96f1b63
Legacy running event mnemonic: airport-valley-aspect-junior
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 106497779
Root hash: b54eefcd9e32778c47b480f4f0f840afd191884ea02243ce12a20bdd234fd945031285bb378c4a07b6904d0218daafeb
(root) ConsistencyTestingToolState  /  film-all-degree-various
    0 SingletonNode PlatformStateService.PLATFORM_STATE  /0  feel-cupboard-anxiety-warm
    1 SingletonNode RosterService.ROSTER_STATE  /1  tumble-install-behave-ticket
    2 VirtualMap RosterService.ROSTERS  /2  soap-hamster-shed-draft
    3 StringLeaf -2881668016996462167  /3  judge-inherit-this-pig
    4 StringLeaf 405  /4  arrow-fossil-faculty-lounge
node2 3m 30.249s 2025-10-08 17:13:01.321 4780 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+09+47.471354396Z_seq0_minr1_maxr501_orgn0.pces
node2 3m 30.250s 2025-10-08 17:13:01.322 4781 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 378 File: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+09+47.471354396Z_seq0_minr1_maxr501_orgn0.pces
node2 3m 30.250s 2025-10-08 17:13:01.322 4782 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 3m 30.260s 2025-10-08 17:13:01.332 4783 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 3m 30.260s 2025-10-08 17:13:01.332 4784 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 405 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/405 {"round":405,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/405/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 4m 30.166s 2025-10-08 17:14:01.238 6451 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 544 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 4m 30.197s 2025-10-08 17:14:01.269 6217 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 544 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 4m 30.206s 2025-10-08 17:14:01.278 6339 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 544 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 4m 30.233s 2025-10-08 17:14:01.305 6263 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 544 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 4m 30.376s 2025-10-08 17:14:01.448 6220 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 544 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/544
node3 4m 30.377s 2025-10-08 17:14:01.449 6221 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 544
node1 4m 30.378s 2025-10-08 17:14:01.450 6342 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 544 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/544
node1 4m 30.379s 2025-10-08 17:14:01.451 6343 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 544
node2 4m 30.421s 2025-10-08 17:14:01.493 6454 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 544 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/544
node2 4m 30.422s 2025-10-08 17:14:01.494 6455 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 544
node0 4m 30.446s 2025-10-08 17:14:01.518 6266 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 544 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/544
node0 4m 30.447s 2025-10-08 17:14:01.519 6267 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 544
node3 4m 30.452s 2025-10-08 17:14:01.524 6252 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 544
node3 4m 30.454s 2025-10-08 17:14:01.526 6253 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 544
Timestamp: 2025-10-08T17:14:00.321881Z
Next consensus number: 18122
Legacy running event hash: 9abae36d495b3269c05a81e282d198f96ebbc0a3483ad9838b1244bd998cbd995bec27d6beb40d66ab2c6001f301f948
Legacy running event mnemonic: potato-mystery-clock-essay
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 533988377
Root hash: 47b22443c53722d53f4dc11d24e094d7a58b7c0af631e812c3ffed6c458c0894fd3d5e48f8fb515ff2b4b62bf2c3ab7f
(root) ConsistencyTestingToolState  /  wagon-mask-float-follow
    0 SingletonNode PlatformStateService.PLATFORM_STATE  /0  dust-actor-mango-youth
    1 SingletonNode RosterService.ROSTER_STATE  /1  tumble-install-behave-ticket
    2 VirtualMap RosterService.ROSTERS  /2  soap-hamster-shed-draft
    3 StringLeaf 6093425675277220283  /3  smooth-found-mix-father
    4 StringLeaf 544  /4  bamboo-panic-drip-fork
node3 4m 30.461s 2025-10-08 17:14:01.533 6254 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+09+47.613526995Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+13+42.625019077Z_seq1_minr473_maxr5473_orgn0.pces
node3 4m 30.461s 2025-10-08 17:14:01.533 6255 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 517 File: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+13+42.625019077Z_seq1_minr473_maxr5473_orgn0.pces
node3 4m 30.461s 2025-10-08 17:14:01.533 6256 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 4m 30.462s 2025-10-08 17:14:01.534 6382 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 544
node3 4m 30.462s 2025-10-08 17:14:01.534 6257 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 4m 30.462s 2025-10-08 17:14:01.534 6258 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 544 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/544 {"round":544,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/544/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 4m 30.464s 2025-10-08 17:14:01.536 6383 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 544 Timestamp: 2025-10-08T17:14:00.321881Z Next consensus number: 18122 Legacy running event hash: 9abae36d495b3269c05a81e282d198f96ebbc0a3483ad9838b1244bd998cbd995bec27d6beb40d66ab2c6001f301f948 Legacy running event mnemonic: potato-mystery-clock-essay Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 533988377 Root hash: 47b22443c53722d53f4dc11d24e094d7a58b7c0af631e812c3ffed6c458c0894fd3d5e48f8fb515ff2b4b62bf2c3ab7f (root) ConsistencyTestingToolState / wagon-mask-float-follow 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 dust-actor-mango-youth 1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket 2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft 3 StringLeaf 6093425675277220283 /3 smooth-found-mix-father 4 StringLeaf 544 /4 bamboo-panic-drip-fork
node3 4m 30.464s 2025-10-08 17:14:01.536 6259 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1
node1 4m 30.471s 2025-10-08 17:14:01.543 6384 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+09+47.352109665Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+13+42.628497894Z_seq1_minr474_maxr5474_orgn0.pces
node1 4m 30.472s 2025-10-08 17:14:01.544 6385 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 517 File: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+13+42.628497894Z_seq1_minr474_maxr5474_orgn0.pces
node1 4m 30.472s 2025-10-08 17:14:01.544 6386 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 4m 30.473s 2025-10-08 17:14:01.545 6387 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 4m 30.473s 2025-10-08 17:14:01.545 6388 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 544 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/544 {"round":544,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/544/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 4m 30.475s 2025-10-08 17:14:01.547 6389 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1
node2 4m 30.502s 2025-10-08 17:14:01.574 6494 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 544
node2 4m 30.504s 2025-10-08 17:14:01.576 6495 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 544 Timestamp: 2025-10-08T17:14:00.321881Z Next consensus number: 18122 Legacy running event hash: 9abae36d495b3269c05a81e282d198f96ebbc0a3483ad9838b1244bd998cbd995bec27d6beb40d66ab2c6001f301f948 Legacy running event mnemonic: potato-mystery-clock-essay Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 533988377 Root hash: 47b22443c53722d53f4dc11d24e094d7a58b7c0af631e812c3ffed6c458c0894fd3d5e48f8fb515ff2b4b62bf2c3ab7f (root) ConsistencyTestingToolState / wagon-mask-float-follow 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 dust-actor-mango-youth 1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket 2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft 3 StringLeaf 6093425675277220283 /3 smooth-found-mix-father 4 StringLeaf 544 /4 bamboo-panic-drip-fork
node2 4m 30.511s 2025-10-08 17:14:01.583 6496 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+13+42.535796006Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+09+47.471354396Z_seq0_minr1_maxr501_orgn0.pces
node2 4m 30.511s 2025-10-08 17:14:01.583 6497 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 517 File: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+13+42.535796006Z_seq1_minr474_maxr5474_orgn0.pces
node2 4m 30.511s 2025-10-08 17:14:01.583 6498 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 4m 30.512s 2025-10-08 17:14:01.584 6499 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 4m 30.512s 2025-10-08 17:14:01.584 6500 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 544 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/544 {"round":544,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/544/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 4m 30.514s 2025-10-08 17:14:01.586 6501 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1
node0 4m 30.523s 2025-10-08 17:14:01.595 6298 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for round 544
node0 4m 30.525s 2025-10-08 17:14:01.597 6299 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 544 Timestamp: 2025-10-08T17:14:00.321881Z Next consensus number: 18122 Legacy running event hash: 9abae36d495b3269c05a81e282d198f96ebbc0a3483ad9838b1244bd998cbd995bec27d6beb40d66ab2c6001f301f948 Legacy running event mnemonic: potato-mystery-clock-essay Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 533988377 Root hash: 47b22443c53722d53f4dc11d24e094d7a58b7c0af631e812c3ffed6c458c0894fd3d5e48f8fb515ff2b4b62bf2c3ab7f (root) ConsistencyTestingToolState / wagon-mask-float-follow 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 dust-actor-mango-youth 1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket 2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft 3 StringLeaf 6093425675277220283 /3 smooth-found-mix-father 4 StringLeaf 544 /4 bamboo-panic-drip-fork
node0 4m 30.531s 2025-10-08 17:14:01.603 6300 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+09+47.236371345Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+13+42.655220797Z_seq1_minr474_maxr5474_orgn0.pces
node0 4m 30.531s 2025-10-08 17:14:01.603 6301 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 517 File: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+13+42.655220797Z_seq1_minr474_maxr5474_orgn0.pces
node0 4m 30.531s 2025-10-08 17:14:01.603 6302 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 4m 30.532s 2025-10-08 17:14:01.604 6303 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 4m 30.533s 2025-10-08 17:14:01.605 6304 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 544 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/544 {"round":544,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/544/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 4m 30.534s 2025-10-08 17:14:01.606 6305 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1
node3 5m 30.007s 2025-10-08 17:15:01.079 7846 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 682 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 5m 30.084s 2025-10-08 17:15:01.156 7868 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 682 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 5m 30.125s 2025-10-08 17:15:01.197 7906 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 682 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 5m 30.216s 2025-10-08 17:15:01.288 8040 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 682 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 5m 30.291s 2025-10-08 17:15:01.363 8043 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 682 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/682
node2 5m 30.292s 2025-10-08 17:15:01.364 8044 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 682
node1 5m 30.342s 2025-10-08 17:15:01.414 7909 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 682 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/682
node1 5m 30.343s 2025-10-08 17:15:01.415 7910 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 682
node2 5m 30.366s 2025-10-08 17:15:01.438 8075 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 682
node2 5m 30.368s 2025-10-08 17:15:01.440 8076 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 682 Timestamp: 2025-10-08T17:15:00.262953Z Next consensus number: 21435 Legacy running event hash: eb865cc6af95c20e8b647fe8a191d22377569bcf58bf6c54e334e1f63d76ff28232a78a584cbc18936bdb947f90ec92c Legacy running event mnemonic: monster-occur-aspect-salad Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -311518638 Root hash: e476ee38ed6d0fc2e04dfe66bf38db2d04a91311d5d5f4907c0dfee3f9651312234b7eea790e7afb47d3982c420e062e (root) ConsistencyTestingToolState / image-decorate-fame-weapon 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 job-suit-steak-salt 1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket 2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft 3 StringLeaf -6638342393483481435 /3 library-chapter-return-produce 4 StringLeaf 682 /4 season-giant-super-energy
node2 5m 30.374s 2025-10-08 17:15:01.446 8077 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+13+42.535796006Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+09+47.471354396Z_seq0_minr1_maxr501_orgn0.pces
node2 5m 30.374s 2025-10-08 17:15:01.446 8078 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 655 File: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+13+42.535796006Z_seq1_minr474_maxr5474_orgn0.pces
node2 5m 30.374s 2025-10-08 17:15:01.446 8079 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 5m 30.378s 2025-10-08 17:15:01.450 8080 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 5m 30.378s 2025-10-08 17:15:01.450 8081 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 682 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/682 {"round":682,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/682/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 5m 30.379s 2025-10-08 17:15:01.451 8082 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/24
node0 5m 30.412s 2025-10-08 17:15:01.484 7871 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 682 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/682
node0 5m 30.412s 2025-10-08 17:15:01.484 7872 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 682
node1 5m 30.425s 2025-10-08 17:15:01.497 7941 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 682
node1 5m 30.427s 2025-10-08 17:15:01.499 7942 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 682 Timestamp: 2025-10-08T17:15:00.262953Z Next consensus number: 21435 Legacy running event hash: eb865cc6af95c20e8b647fe8a191d22377569bcf58bf6c54e334e1f63d76ff28232a78a584cbc18936bdb947f90ec92c Legacy running event mnemonic: monster-occur-aspect-salad Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -311518638 Root hash: e476ee38ed6d0fc2e04dfe66bf38db2d04a91311d5d5f4907c0dfee3f9651312234b7eea790e7afb47d3982c420e062e (root) ConsistencyTestingToolState / image-decorate-fame-weapon 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 job-suit-steak-salt 1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket 2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft 3 StringLeaf -6638342393483481435 /3 library-chapter-return-produce 4 StringLeaf 682 /4 season-giant-super-energy
node3 5m 30.432s 2025-10-08 17:15:01.504 7859 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 682 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/682
node3 5m 30.433s 2025-10-08 17:15:01.505 7860 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 682
node1 5m 30.436s 2025-10-08 17:15:01.508 7943 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+09+47.352109665Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+13+42.628497894Z_seq1_minr474_maxr5474_orgn0.pces
node1 5m 30.436s 2025-10-08 17:15:01.508 7944 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 655 File: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+13+42.628497894Z_seq1_minr474_maxr5474_orgn0.pces
node1 5m 30.436s 2025-10-08 17:15:01.508 7945 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 5m 30.439s 2025-10-08 17:15:01.511 7948 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 5m 30.440s 2025-10-08 17:15:01.512 7955 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 682 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/682 {"round":682,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/682/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 5m 30.441s 2025-10-08 17:15:01.513 7956 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/24
node0 5m 30.489s 2025-10-08 17:15:01.561 7911 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 682
node0 5m 30.491s 2025-10-08 17:15:01.563 7912 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 682 Timestamp: 2025-10-08T17:15:00.262953Z Next consensus number: 21435 Legacy running event hash: eb865cc6af95c20e8b647fe8a191d22377569bcf58bf6c54e334e1f63d76ff28232a78a584cbc18936bdb947f90ec92c Legacy running event mnemonic: monster-occur-aspect-salad Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -311518638 Root hash: e476ee38ed6d0fc2e04dfe66bf38db2d04a91311d5d5f4907c0dfee3f9651312234b7eea790e7afb47d3982c420e062e (root) ConsistencyTestingToolState / image-decorate-fame-weapon 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 job-suit-steak-salt 1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket 2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft 3 StringLeaf -6638342393483481435 /3 library-chapter-return-produce 4 StringLeaf 682 /4 season-giant-super-energy
node0 5m 30.498s 2025-10-08 17:15:01.570 7913 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+09+47.236371345Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+13+42.655220797Z_seq1_minr474_maxr5474_orgn0.pces
node0 5m 30.498s 2025-10-08 17:15:01.570 7914 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 655 File: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+13+42.655220797Z_seq1_minr474_maxr5474_orgn0.pces
node0 5m 30.498s 2025-10-08 17:15:01.570 7915 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 5m 30.501s 2025-10-08 17:15:01.573 7916 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 5m 30.502s 2025-10-08 17:15:01.574 7917 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 682 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/682 {"round":682,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/682/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 5m 30.503s 2025-10-08 17:15:01.575 7918 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/24
node3 5m 30.511s 2025-10-08 17:15:01.583 7902 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for round 682
node3 5m 30.513s 2025-10-08 17:15:01.585 7903 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 682 Timestamp: 2025-10-08T17:15:00.262953Z Next consensus number: 21435 Legacy running event hash: eb865cc6af95c20e8b647fe8a191d22377569bcf58bf6c54e334e1f63d76ff28232a78a584cbc18936bdb947f90ec92c Legacy running event mnemonic: monster-occur-aspect-salad Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -311518638 Root hash: e476ee38ed6d0fc2e04dfe66bf38db2d04a91311d5d5f4907c0dfee3f9651312234b7eea790e7afb47d3982c420e062e (root) ConsistencyTestingToolState / image-decorate-fame-weapon 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 job-suit-steak-salt 1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket 2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft 3 StringLeaf -6638342393483481435 /3 library-chapter-return-produce 4 StringLeaf 682 /4 season-giant-super-energy
node3 5m 30.520s 2025-10-08 17:15:01.592 7904 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+09+47.613526995Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+13+42.625019077Z_seq1_minr473_maxr5473_orgn0.pces
node3 5m 30.520s 2025-10-08 17:15:01.592 7905 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 655 File: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+13+42.625019077Z_seq1_minr473_maxr5473_orgn0.pces
node3 5m 30.521s 2025-10-08 17:15:01.593 7906 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 5m 30.524s 2025-10-08 17:15:01.596 7907 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 5m 30.524s 2025-10-08 17:15:01.596 7908 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 682 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/682 {"round":682,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/682/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 5m 30.526s 2025-10-08 17:15:01.598 7909 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/24
node4 5m 52.363s 2025-10-08 17:15:23.435 1 INFO STARTUP <main> StaticPlatformBuilder:
////////////////////// // Node is Starting // //////////////////////
node4 5m 52.455s 2025-10-08 17:15:23.527 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node4 5m 52.471s 2025-10-08 17:15:23.543 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 5m 52.594s 2025-10-08 17:15:23.666 4 INFO STARTUP <main> Browser: The following nodes [4] are set to run locally
node4 5m 52.601s 2025-10-08 17:15:23.673 5 INFO STARTUP <main> ConsistencyTestingToolMain: Registering ConsistencyTestingToolState with ConstructableRegistry
node4 5m 52.614s 2025-10-08 17:15:23.686 6 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 5m 53.053s 2025-10-08 17:15:24.125 9 INFO STARTUP <main> ConsistencyTestingToolMain: ConsistencyTestingToolState is registered with ConstructableRegistry
node4 5m 53.054s 2025-10-08 17:15:24.126 10 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node4 5m 54.209s 2025-10-08 17:15:25.281 11 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1155ms
node4 5m 54.218s 2025-10-08 17:15:25.290 12 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node4 5m 54.222s 2025-10-08 17:15:25.294 13 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 5m 54.264s 2025-10-08 17:15:25.336 14 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node4 5m 54.328s 2025-10-08 17:15:25.400 15 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node4 5m 54.329s 2025-10-08 17:15:25.401 16 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node4 5m 56.579s 2025-10-08 17:15:27.651 17 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node4 5m 56.678s 2025-10-08 17:15:27.750 20 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 56.686s 2025-10-08 17:15:27.758 21 INFO STARTUP <main> StartupStateUtils: The following saved states were found on disk:
- /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/275/SignedState.swh - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/150/SignedState.swh - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/24/SignedState.swh - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1/SignedState.swh
node4 5m 56.687s 2025-10-08 17:15:27.759 22 INFO STARTUP <main> StartupStateUtils: Loading latest state from disk.
node4 5m 56.687s 2025-10-08 17:15:27.759 23 INFO STARTUP <main> StartupStateUtils: Loading signed state from disk: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/275/SignedState.swh
node4 5m 56.691s 2025-10-08 17:15:27.763 24 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 5m 56.696s 2025-10-08 17:15:27.768 25 INFO STATE_TO_DISK <main> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp
node4 5m 56.836s 2025-10-08 17:15:27.908 36 INFO STARTUP <main> StartupStateUtils: Loaded state's hash is the same as when it was saved.
node4 5m 56.839s 2025-10-08 17:15:27.911 37 INFO STARTUP <main> StartupStateUtils: Platform has loaded a saved state {"round":275,"consensusTimestamp":"2025-10-08T17:12:00.490748Z"} [com.swirlds.logging.legacy.payload.SavedStateLoadedPayload]
node4 5m 56.842s 2025-10-08 17:15:27.914 40 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 56.842s 2025-10-08 17:15:27.914 42 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node4 5m 56.845s 2025-10-08 17:15:27.917 44 INFO STARTUP <main> AddressBookInitializer: Using the loaded state's address book and weight values.
node4 5m 56.851s 2025-10-08 17:15:27.923 45 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 56.853s 2025-10-08 17:15:27.925 46 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 57.884s 2025-10-08 17:15:28.956 47 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26003251] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=219630, randomLong=-7329204088135076871, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=7600, randomLong=4507718379081955111, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1194080, data=35, exception=null] OS Health Check Report - Complete (took 1018 ms)
node4 5m 57.909s 2025-10-08 17:15:28.981 48 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node4 5m 58.032s 2025-10-08 17:15:29.104 49 INFO STARTUP <main> PcesUtilities: Span compaction completed for data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+09+47.403893333Z_seq0_minr1_maxr501_orgn0.pces, new upper bound is 364
node4 5m 58.034s 2025-10-08 17:15:29.106 50 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node4 5m 58.037s 2025-10-08 17:15:29.109 51 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node4 5m 58.107s 2025-10-08 17:15:29.179 52 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAK05TS8KZeb1MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTEwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTEwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBoP9dI3K1PRLRK7h90D9eNCfgzuHTyJi70yDEs90XJXlE6jmgf1NE2av83VAhQHLxu8Ehc/55M9Ayx9IQc0zJLSS+IrRM9QwqoG8ZvNdRgNw+je3V/8rAK/mHId+cPnnyDplCyskyi5kWCv6kTULIewFH8/KVZwhe0/hB2+N6ujWixURrxjjGLHA6b2gPoGAb/nxiVOn+L0cWcOzcyiYShxagj0FBWV7AxKx65Ynzfe7eF0gOzBUA+IM10OM5KXJejk53Xz5KpEyGe8htO/bXFlpLdm3UzrYiIhY0oKPYKECAC1s+VAZA6i+MV0nDpqDgxHRRXD8O2arauPhEI6iVT9f05AtzElrs7U95HbpQUuP1sxkaQw+bLdMOQHHMVCgMgw2g0eDdVDAMJD7wjZ+Bs6kDc/EJELb0l1uy2GEnOZMiHkK4K1r4IyZ/ed6QpyIRKfBCNyT5IIpMoVpzRYxVXgjgFdudd8iErKyvSXHThU6nu92c+vSd+FLBFHPpb6ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdga5NYtV48uDCd4vIsmpGWpKuUHtDVDlCvzHc2ij8DxAR6OFp+hIRNEBXkzg1KS5qP8Wba5ptmGoV4f89HemP+AL3Azde+HjpYRtffdfTdQwmMbw7xJg2lKkEo11gDo5+zPZnVbfb3FsZ+IXKji0QshQBfg+ddTkFG3TJG1ttq3ZDw94RxFQivVnkj1p+Ogel/DuBNRWQobFVe5VrmJqbuwwN8AdrPae1dMrkZatF91On5+cpVLGfk96fYUhDohDt6KKQ6DdhvFk5rhd0vsHGMQq2gAW2+Or6ZVsKkHKx8CPINpJVKAdpE0tItI+loMO02jf9oRI/8cThWP1vNAeWnr0D6m275EZf/4qem/DdJ0FJIVou3P7tsq7eSdueDnj5RmcbW/vOBtvlXpD3SqsVRn6sltZ0sk24p+6ZMzopevCZEMf/nL3OzGvSadisXb39H9DgwkNLlefju1QLgHWf0TGfeNHluDgVDhU8+/1/KUGtr2SnZ5EVO1l59FWHALj", "gossipEndpoint": [{ "ipAddressV4": "Ih7XpQ==", "port": 30124 }, { "ipAddressV4": "CoAAFQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIHWg7e2Q/smQwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMjAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKr5WsBepS3+y/0/yfBjzMWje7zianEz7sszrNWV3cGu2KUlR7v2+9wp/EtX1+BdcGlTTojgFs5nEBN4lM76Cp6JjFH461yN8GSkIkpe8GZnb1w4KEjZj5UYMbq+qOUI6QmwmgLeO8RHAsS6lCP1AyGFalb2ZVJ09DcYDxCRXeFj4BqvNbtD5r5DTCtpVT4ax3eb3pzNSGsjQUG9zhyp/WcsAmwmzKdMl72tk6qF8tlAWXyzwiCujWHS0Kln0C5pyEjeFNsG299toC4pgT8juxijgseTeIFRnNHmGSeSmXpAkEELlwLKR8HOnqeiS5UXNqdbxNemx/EpJSc5rTB6kzLX24dIuRsgyIIFWx73goOzmaHUolN4xmenifoMYlSNNM07WrsvmjRC5OLc/uGhdWqhZGBCH6AJB8Cmw84QLXVdHE6LiueP1oMd7g++N4X880wJkuh0ebfV3i7etUIn0jLlM50AkRucG9kwZDJ/M4LY7FT2F85R1/o2FaB/537ARQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQB5lTkqYw0hEW+BJTFsQ8jEHfIDNRJ0kNbVuibfP+u7kzlJy15lCEi+Qw6E3d8hA1QBX3xJMxNBlrtYPrdG26hh/tOwo5Np/OfxQC5jo0Q7n7hu7aLxZRUB/q7AfdDbOun4Za6rJhT3+EsFocyARWp8bYSk3YILBMkP+2VYDRkgQidzKgKtO5yv21Y9sEgziSprc+dQb/tqn5aQZLWavFwCLwnB3t4r4qwLHkkH00Jw51uOvLeM49/t333V5Caa7wmWzMcE+KSWW0QWFRxeJrodSyjPdmDi4D8lKN5WJHSAU5L2yWIODUyWD/cvsAapTv7xXk9ja/Ssb9DpMQnM1xh0hYaESajNeL1QbGuZgPxAwrw981h7kprR2P2iMGRVGA6u4ezxmhW3s7D+yJ3+Yxs/x2J/sw65Z16mRYXRWYWHQmhgaVQjIviiAkVB6CWZo1kHl/eYaVedQzKlrTpbr3JtmwGwhYEOnrkzsC63h8/AG9gRtIAIGWGqTPWbn2pEm8M=", "gossipEndpoint": [{ "ipAddressV4": "Ijfjtw==", "port": 30125 }, { "ipAddressV4": "CoAAEw==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJg3GRFp5bT9MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCl5ut2dCleDmgEneRYpAKa9Pe2qnXzgF+BEIuTfizG2OcPQi/ltv+6HxSrJXtuWNaiX/G4iP7iBzWj2ysaAYwfYj0ezTSMLRqM9hXzVgLtW0LJEF6a8vUXPsJt4GEJkUKiYCCO1MP1NLd3y/3SVJrFhwJSPqKYm2pQNg84WfPDWSkzSneOIO4Z0uWDXgs+vzSNyChWOxVieFQhLjcELtyj6narmLox+Jdo/SxUzPuktuFB3ebNgUqWPkjljgZpl00BTmbRIVHgHfDVulo2PBpXd0VplIDgdPr5zMKdTrKCuDKey8Mft72RkPKMe9LZVZ/21+rXVEh+olvvUCySsP2RkWPUJJD90c8wKo01rZsjAOXscJKQcBYlam5XXO4ZBRYzEdxuivbkPwsOoQ83swCR3alPvwfbg11Va+zXE6sRbUM9LqkYo/M3Hwg8tSIXu8oah6csputanz867dzWwyVJEPzmiXZ6ncVDQO31QlB7RndWCqKTjOQpnpblUMsrE9MCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAnUA8+kz7L+eSOm/iVvUNYF10PKO2nZtxWWL7R1vwK/2Up765PwqxKb0eSEM4bjgvZq1GuGXs9X/Y7dos42yntXvgeUY+/2JzCnw4J5tzxytZ+IKX6DR67NjDzDzVZQfptjLQrb8E7yzml0uxsqrhNPWl57Bmfe66Kg2lD11jImeeEhExlRggFukoiUWVwRNU21Q1jMUWrg2ZwfP+6fFTgRt0WR+X5zkyYPbvI6/yv7reYGjPDuZTOFhbwG8LUTQxdttDswPjnQ606kMyninL+aNelSdV/UIII7lpr/dTvgQAnrlBaGXvdy6brh3wWEwia0FZFZcKEs6M+jZ3MrFxvlTfUIdI3jRq12L10cCDi2VhORg4JmvlM+Tk6kJeSku30ZLAVo3S7GbTdvkuesOxz3UwnF7yfOA1KYOPvhv1oLxGV5z05glsn1OBKnXMdzsKFbAYYHj81bgBni2WLuIpv3oXlai2uc4y9m8LvWAQ+h/ivyog34Ai3Pvr5ZZOFgjy", "gossipEndpoint": [{ "ipAddressV4": "iHFp6Q==", "port": 30126 }, { "ipAddressV4": "CoAAEg==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": "MIIDpzCCAg+gAwIBAgIJAN7hww13zBZEMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDK/bVyv0ZUeJZ4cIOImM+wmqtYjCw4jPAC549WQPPV1vG0lzSpgV+nRKqmWBexhLlKN3bsvrfNCUpKSq8meFyCtdppT1dhUOmEZcoNhLZzqxXb2HYYqRPv82tR+tbh+27WFsBOOqYrYTvr72ECD7qDOuw/Xob6KImaw/b/SIAPecMoYy25fkgYkJSETwd8HUpwssYH/JTLBF8eGjjTTMuu14ARQKeH8BXSs+jjV1+3IItXERS8ryUGDjqc5vC8ZW1kDVQbb91IDxRjqZbFyhuasocCqTAcZuiEgE8Wilwp2g1vbAUnHnvKNfiaEAHoEV6vF4lelaWhOnN2U5tnox/ns6PiDqIbOfs0pmXxjAK0vxc6oZM3TwdRtzo6cSb/AYfQdnmQzkra980kHN12r3f7PK2PzGBuVUPT7fLGA4S3vQDYO4rqcgTc/OLobtqLtdBusOFjZscfIfUW4GVWJUI1j+fwvHacxWLmyZwlQ5Q47UtrtjWpFru7CTn5S477lqMCAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAdW6AWDhT0eOJw+0O6MYngmCgkXfFsgBC/B1plaE596hHo58FHxzCNiLFdvfRj37rxujvqsDAkADWUmOzLzLHYMXu302HzDqAMNY6FZJc32y4ZDsIQpaUOAuiNHAHwFXuPRInVpCqztfJMgw4RhOhcCTEsoIJsqoIN1t4M0pEVAv6x3nJwFKZqSNOZrQ7sOW32FjwWS3kHwRsCTtqdk5n2KxU6wr/fggV3QsSPRMYro8sUfwu93mqggtswwWqfeKlsz5WiaR9aqLnb8z1R6HLvA0bcoPWzjgn8RdP+9we4z06iZ5vdBuNpwBjrCKUELWISyAoekLGGxyS8pPqYiSBRNUoaPITSuUjcCBbJ9EFvm72QgCBesbwF71KPabTPbMPhLmf+uAi+zmeu8ZeVvT6DrX9OHSkIvIEQFry9BrqOT3ce6KBHSO1HpXIetj5Wcd3WHXtz9ulBL9ikWC8eh7/+we51ucmLvFzNKznElhT2Dp+czXUVNEUjp3u/66pyRA4", "gossipEndpoint": [{ "ipAddressV4": "IqsCkA==", "port": 30127 }, { "ipAddressV4": "CoAADw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIId2RXuetTjnAwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKw8NsPZVKyXW3smNAUcoN/JOyhyJah6Y2gPOCek6hcOaLjg6DvZxLzinpLKwScrgRIIiuU/9hUsy6EiOtCuX3lT4ByKA5XdN1YQAXL1TS00vUf7BR1b+PW58gaJtepZctHvyklNxFeVT/vq2zWQa1sXeTz0OReJXO+8WWO1AjciYyv963zZ6jvk5yhWGukC5NJ2pJTjFh3+2+PivnLevNNnW6bZkdgSl3MThr78nsWlUQykvx+FNIgLlrq+4fCIFzXMeJRRXqo0MJlsxBfQaSu1arqMGE5BQWBEr51EH9UKNP3MSEntr1h1WMeJ3tZXFJ/z1xzKxsVII8HNtjynv1MoEGY4AQDUWI0CRUBv0uAmBUljGJVyqgHIPO6OGznJdkQsmSOSpqdVeNSWEd85Nh0Niorvpat9g5vUJ9zWKPWmgiww+RQiqq8jA2BiD/3n+I8UIMbMShiJUQBmnveAEWLMeV6TIuIBllGQ0a5oB+PQOZh0g+TvsjnH8/c1h1MSHQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQBYPcTDNQjigDZ2iY/nA9BhiyNJDHQBecLoADQ+5mxGrGclUcvd002ovbVhoGlcMkVyG/Am9/Y3t6zpO4pMQy7Jecfx8U8+VtKEPFkpfslCmEHT9EpnCGcS5t1KB2KcqnZ57f9uUZLBtMPem1FO6wcT71TXr4/+hGNuLpLmtKkQWARnVURR1/MHhcJ35BH1/x1VIeNc+PBpTE/blszn69Gwqt/HlpNTnYfqS0vhTCfOO07dxn8n904xb1nZ0l0k7pCQdRTTn+dYxmvQ7MlTRK5iOlOKIiRAbD8Fwyfo93cvDj6V56c7Gz+knyjz0iARMsptumqevmfiJ324HzV/12SukJokFDl9G6MjG99ucg3CQaIxa5kRVN1b5D+QMeowfj0lKTchGm/BbuO4zousIE+aWCGfy41CccTP11JW2quwjsVGufb5fvCfDXop5f9i1+95oHd2JyLmHDnJBWEqBHret4Dx8P8GyQhyZB6jkZpVwJy/ruPyrAL3kcKqUZecaOg=", "gossipEndpoint": [{ "ipAddressV4": "aMW9Aw==", "port": 30128 }, { "ipAddressV4": "CoAACw==", "port": 30128 }] }] }
node4 5m 58.125s 2025-10-08 17:15:29.197 53 INFO STARTUP <main> ConsistencyTestingToolState: State initialized with state long -2265026351569228580.
node4 5m 58.126s 2025-10-08 17:15:29.198 54 INFO STARTUP <main> ConsistencyTestingToolState: State initialized with 275 rounds handled.
node4 5m 58.126s 2025-10-08 17:15:29.198 55 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 5m 58.126s 2025-10-08 17:15:29.198 56 INFO STARTUP <main> TransactionHandlingHistory: Log file found. Parsing previous history
node4 5m 58.886s 2025-10-08 17:15:29.958 57 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 275 Timestamp: 2025-10-08T17:12:00.490748Z Next consensus number: 10445 Legacy running event hash: 20fb655c87b812e9c26ea497114ad4ab9e643ac72e851844c06f2aeff06dd6ca4a4e2c91f62312758ddd2ff825f78a64 Legacy running event mnemonic: buyer-process-animal-frog Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1901345333 Root hash: 42276951541649b95d1f6f87424d826eb074a29414f99cd40961eb3f6b1b8bdd3a6431d6e0f88d6e78241b993e626726 (root) ConsistencyTestingToolState / eager-clay-lizard-damp 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 spare-soul-bullet-fossil 1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket 2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft 3 StringLeaf -2265026351569228580 /3 avocado-evolve-dragon-below 4 StringLeaf 275 /4 salon-table-fitness-unusual
node4 5m 59.145s 2025-10-08 17:15:30.217 59 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 20fb655c87b812e9c26ea497114ad4ab9e643ac72e851844c06f2aeff06dd6ca4a4e2c91f62312758ddd2ff825f78a64
node4 5m 59.161s 2025-10-08 17:15:30.233 60 INFO STARTUP <platformForkJoinThread-5> Shadowgraph: Shadowgraph starting from expiration threshold 246
node4 5m 59.168s 2025-10-08 17:15:30.240 62 INFO STARTUP <<start-node-4>> ConsistencyTestingToolMain: init called in Main for node 4.
node4 5m 59.168s 2025-10-08 17:15:30.240 63 INFO STARTUP <<start-node-4>> SwirldsPlatform: Starting platform 4
node4 5m 59.170s 2025-10-08 17:15:30.242 64 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node4 5m 59.173s 2025-10-08 17:15:30.245 65 INFO STARTUP <<start-node-4>> CycleFinder: No cyclical back pressure detected in wiring model.
node4 5m 59.174s 2025-10-08 17:15:30.246 66 INFO STARTUP <<start-node-4>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node4 5m 59.175s 2025-10-08 17:15:30.247 67 INFO STARTUP <<start-node-4>> InputWireChecks: All input wires have been bound.
node4 5m 59.177s 2025-10-08 17:15:30.249 68 INFO STARTUP <<start-node-4>> SwirldsPlatform: replaying preconsensus event stream starting at 246
node4 5m 59.183s 2025-10-08 17:15:30.255 69 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 205.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node4 5m 59.497s 2025-10-08 17:15:30.569 70 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:4 H:31133b5e89e1 BR:273), num remaining: 4
node4 5m 59.499s 2025-10-08 17:15:30.571 71 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:1 H:7a119b9f2cfd BR:273), num remaining: 3
node4 5m 59.499s 2025-10-08 17:15:30.571 72 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:3 H:63907adb6d0a BR:273), num remaining: 2
node4 5m 59.500s 2025-10-08 17:15:30.572 73 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:0 H:3cafd4832b14 BR:273), num remaining: 1
node4 5m 59.501s 2025-10-08 17:15:30.573 74 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:2 H:685a0a5d9c85 BR:273), num remaining: 0
node4 6.003m 2025-10-08 17:15:31.246 703 INFO STARTUP <<start-node-4>> PcesReplayer: Replayed 4,453 preconsensus events with max birth round 364. These events contained 6,187 transactions. 88 rounds reached consensus spanning 41.3 seconds of consensus time. The latest round to reach consensus is round 363. Replay took 996.0 milliseconds.
node4 6.003m 2025-10-08 17:15:31.248 707 INFO STARTUP <<app: appMain 4>> ConsistencyTestingToolMain: run called in Main.
node4 6.003m 2025-10-08 17:15:31.250 751 INFO PLATFORM_STATUS <platformForkJoinThread-8> StatusStateMachine: Platform spent 992.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node4 6m 1.056s 2025-10-08 17:15:32.128 876 INFO RECONNECT <<platform-core: SyncProtocolWith3 4 to 3>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262] remote ev=EventWindow[latestConsensusRound=753,ancientThreshold=726,expiredThreshold=652]
node4 6m 1.056s 2025-10-08 17:15:32.128 877 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262] remote ev=EventWindow[latestConsensusRound=753,ancientThreshold=726,expiredThreshold=652]
node1 6m 1.126s 2025-10-08 17:15:32.198 8740 INFO RECONNECT <<platform-core: SyncProtocolWith4 1 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=753,ancientThreshold=726,expiredThreshold=652] remote ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262]
node2 6m 1.126s 2025-10-08 17:15:32.198 8864 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=753,ancientThreshold=726,expiredThreshold=652] remote ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262]
node3 6m 1.126s 2025-10-08 17:15:32.198 8728 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=753,ancientThreshold=726,expiredThreshold=652] remote ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262]
node0 6m 1.127s 2025-10-08 17:15:32.199 8744 INFO RECONNECT <<platform-core: SyncProtocolWith4 0 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=753,ancientThreshold=726,expiredThreshold=652] remote ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262]
node4 6m 1.195s 2025-10-08 17:15:32.267 878 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262] remote ev=EventWindow[latestConsensusRound=753,ancientThreshold=726,expiredThreshold=652]
node4 6m 1.196s 2025-10-08 17:15:32.268 879 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262] remote ev=EventWindow[latestConsensusRound=753,ancientThreshold=726,expiredThreshold=652]
node4 6m 1.202s 2025-10-08 17:15:32.274 880 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 1.0 s in OBSERVING. Now in BEHIND
node4 6m 1.203s 2025-10-08 17:15:32.275 881 INFO RECONNECT <platformForkJoinThread-8> ReconnectController: Starting ReconnectController
node4 6m 1.204s 2025-10-08 17:15:32.276 882 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectPlatformHelperImpl: Preparing for reconnect, stopping gossip
node1 6m 1.273s 2025-10-08 17:15:32.345 8749 INFO RECONNECT <<platform-core: SyncProtocolWith4 1 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=753,ancientThreshold=726,expiredThreshold=652] remote ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262]
node3 6m 1.273s 2025-10-08 17:15:32.345 8737 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=753,ancientThreshold=726,expiredThreshold=652] remote ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262]
node4 6m 1.344s 2025-10-08 17:15:32.416 883 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262] remote ev=EventWindow[latestConsensusRound=753,ancientThreshold=726,expiredThreshold=652]
node4 6m 1.344s 2025-10-08 17:15:32.416 884 INFO RECONNECT <<platform-core: SyncProtocolWith3 4 to 3>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262] remote ev=EventWindow[latestConsensusRound=753,ancientThreshold=726,expiredThreshold=652]
node4 6m 1.355s 2025-10-08 17:15:32.427 885 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectPlatformHelperImpl: Preparing for reconnect, start clearing queues
node4 6m 1.357s 2025-10-08 17:15:32.429 886 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectPlatformHelperImpl: Queues have been cleared
node4 6m 1.359s 2025-10-08 17:15:32.431 887 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectController: waiting for reconnect connection
node4 6m 1.359s 2025-10-08 17:15:32.431 888 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectController: acquired reconnect connection
node2 6m 1.444s 2025-10-08 17:15:32.516 8873 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectTeacher: Starting reconnect in the role of the sender {"receiving":false,"nodeId":2,"otherNodeId":4,"round":753} [com.swirlds.logging.legacy.payload.ReconnectStartPayload]
node2 6m 1.445s 2025-10-08 17:15:32.517 8874 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectTeacher: The following state will be sent to the learner:
Round: 753 Timestamp: 2025-10-08T17:15:31.181753Z Next consensus number: 23121 Legacy running event hash: 60511226ab8a020860b6aa06bb9f84aa78e25d077f135df4959f54f2e4a4a6c94ebfea579167aa68a2f0024120c67b49 Legacy running event mnemonic: any-dynamic-mom-donate Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1946087348 Root hash: 4f8d349ec664d2f37c1f8fbbd2af4f8705a4caecc7876b8bb8b4d14e30841412959f284b7fb3e3dc7010d1d044abb1dc (root) ConsistencyTestingToolState / vast-need-crime-payment 0 SingletonNode PlatformStateService.PLATFORM_STATE /0 matter-execute-receive-hour 1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket 2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft 3 StringLeaf 3993674496999233021 /3 soldier-spread-senior-chalk 4 StringLeaf 753 /4 where-version-civil-grape
node2 6m 1.446s 2025-10-08 17:15:32.518 8875 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectTeacher: Sending signatures from nodes 0, 2, 3 (signing weight = 37500000000/50000000000) for state hash 4f8d349ec664d2f37c1f8fbbd2af4f8705a4caecc7876b8bb8b4d14e30841412959f284b7fb3e3dc7010d1d044abb1dc
node2 6m 1.446s 2025-10-08 17:15:32.518 8876 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectTeacher: Starting synchronization in the role of the sender.
node2 6m 1.451s 2025-10-08 17:15:32.523 8877 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> TeachingSynchronizer: sending tree rooted at com.swirlds.demo.consistency.ConsistencyTestingToolState with route []
node2 6m 1.459s 2025-10-08 17:15:32.531 8878 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@3da677b2 start run()
node4 6m 1.514s 2025-10-08 17:15:32.586 889 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectSyncHelper: Starting reconnect in role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":2,"round":362} [com.swirlds.logging.legacy.payload.ReconnectStartPayload]
node4 6m 1.516s 2025-10-08 17:15:32.588 890 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectLearner: Receiving signed state signatures
node4 6m 1.517s 2025-10-08 17:15:32.589 891 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectLearner: Received signatures from nodes 0, 2, 3
node4 6m 1.520s 2025-10-08 17:15:32.592 892 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: learner calls receiveTree()
node4 6m 1.520s 2025-10-08 17:15:32.592 893 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: synchronizing tree
node4 6m 1.520s 2025-10-08 17:15:32.592 894 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: receiving tree rooted at com.swirlds.demo.consistency.ConsistencyTestingToolState with route []
node4 6m 1.527s 2025-10-08 17:15:32.599 895 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@12c83efc start run()
node4 6m 1.530s 2025-10-08 17:15:32.602 896 INFO STARTUP <<work group learning-synchronizer: async-input-stream #0>> ConsistencyTestingToolState: New State Constructed.
node2 6m 1.612s 2025-10-08 17:15:32.684 8900 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@3da677b2 finish run()
node2 6m 1.613s 2025-10-08 17:15:32.685 8901 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> TeachingSynchronizer: finished sending tree
node2 6m 1.614s 2025-10-08 17:15:32.686 8902 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> TeachingSynchronizer: sending tree rooted at com.swirlds.virtualmap.VirtualMap with route [2]
node2 6m 1.615s 2025-10-08 17:15:32.687 8903 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@73bc26f4 start run()
node4 6m 1.733s 2025-10-08 17:15:32.805 920 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread finished the learning loop for the current subtree
node4 6m 1.733s 2025-10-08 17:15:32.805 921 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread closed input, output, and view for the current subtree
node4 6m 1.734s 2025-10-08 17:15:32.806 922 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@12c83efc finish run()
node4 6m 1.735s 2025-10-08 17:15:32.807 923 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: received tree rooted at com.swirlds.demo.consistency.ConsistencyTestingToolState with route []
node4 6m 1.735s 2025-10-08 17:15:32.807 924 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: receiving tree rooted at com.swirlds.virtualmap.VirtualMap with route [2]
node4 6m 1.738s 2025-10-08 17:15:32.810 925 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@33a95e0c start run()
node4 6m 1.793s 2025-10-08 17:15:32.865 926 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> ReconnectNodeRemover: setPathInformation(): firstLeafPath: 1 -> 1, lastLeafPath: 1 -> 1
node4 6m 1.794s 2025-10-08 17:15:32.866 927 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> ReconnectNodeRemover: setPathInformation(): done
node4 6m 1.796s 2025-10-08 17:15:32.868 928 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread finished the learning loop for the current subtree
node4 6m 1.797s 2025-10-08 17:15:32.869 929 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: call nodeRemover.allNodesReceived()
node4 6m 1.797s 2025-10-08 17:15:32.869 930 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> ReconnectNodeRemover: allNodesReceived()
node4 6m 1.797s 2025-10-08 17:15:32.869 931 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> ReconnectNodeRemover: allNodesReceived(): done
node4 6m 1.798s 2025-10-08 17:15:32.870 932 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: call root.endLearnerReconnect()
node4 6m 1.798s 2025-10-08 17:15:32.870 933 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call reconnectIterator.close()
node4 6m 1.798s 2025-10-08 17:15:32.870 934 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call setHashPrivate()
node2 6m 1.866s 2025-10-08 17:15:32.938 8904 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@73bc26f4 finish run()
node2 6m 1.867s 2025-10-08 17:15:32.939 8905 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> TeachingSynchronizer: finished sending tree
node2 6m 1.870s 2025-10-08 17:15:32.942 8908 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectTeacher: Finished synchronization in the role of the sender.
node4 6m 1.956s 2025-10-08 17:15:33.028 944 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call postInit()
node4 6m 1.957s 2025-10-08 17:15:33.029 946 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: endLearnerReconnect() complete
node4 6m 1.957s 2025-10-08 17:15:33.029 947 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: close() complete
node4 6m 1.957s 2025-10-08 17:15:33.029 948 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread closed input, output, and view for the current subtree
node4 6m 1.957s 2025-10-08 17:15:33.029 949 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@33a95e0c finish run()
node4 6m 1.958s 2025-10-08 17:15:33.030 950 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: received tree rooted at com.swirlds.virtualmap.VirtualMap with route [2]
node4 6m 1.959s 2025-10-08 17:15:33.031 951 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: synchronization complete
node4 6m 1.959s 2025-10-08 17:15:33.031 952 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: learner calls initialize()
node4 6m 1.960s 2025-10-08 17:15:33.032 953 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: initializing tree
node4 6m 1.960s 2025-10-08 17:15:33.032 954 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: initialization complete
node4 6m 1.960s 2025-10-08 17:15:33.032 955 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: learner calls hash()
node4 6m 1.960s 2025-10-08 17:15:33.032 956 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: hashing tree
node4 6m 1.961s 2025-10-08 17:15:33.033 957 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: hashing complete
node4 6m 1.961s 2025-10-08 17:15:33.033 958 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: learner calls logStatistics()
node4 6m 1.964s 2025-10-08 17:15:33.036 959 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: Finished synchronization {"timeInSeconds":0.439,"hashTimeInSeconds":0.0,"initializationTimeInSeconds":0.0,"totalNodes":12,"leafNodes":7,"redundantLeafNodes":4,"internalNodes":5,"redundantInternalNodes":2} [com.swirlds.logging.legacy.payload.SynchronizationCompletePayload]
node4 6m 1.965s 2025-10-08 17:15:33.037 960 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: ReconnectMapMetrics: transfersFromTeacher=12; transfersFromLearner=10; internalHashes=0; internalCleanHashes=0; internalData=0; internalCleanData=0; leafHashes=3; leafCleanHashes=3; leafData=7; leafCleanData=4
node4 6m 1.965s 2025-10-08 17:15:33.037 961 INFO RECONNECT <<reconnect: reconnect-controller>> LearningSynchronizer: learner is done synchronizing
node4 6m 1.968s 2025-10-08 17:15:33.040 962 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectLearner: Reconnect data usage report {"dataMegabytes":0.006054878234863281} [com.swirlds.logging.legacy.payload.ReconnectDataUsagePayload]
node4 6m 1.972s 2025-10-08 17:15:33.044 963 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectSyncHelper: Finished reconnect in the role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":2,"round":753,"success":false} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload]
node4 6m 1.973s 2025-10-08 17:15:33.045 964 INFO RECONNECT <<reconnect: reconnect-controller>> ReconnectSyncHelper: Information for state received during reconnect:
Round: 753
Timestamp: 2025-10-08T17:15:31.181753Z
Next consensus number: 23121
Legacy running event hash: 60511226ab8a020860b6aa06bb9f84aa78e25d077f135df4959f54f2e4a4a6c94ebfea579167aa68a2f0024120c67b49
Legacy running event mnemonic: any-dynamic-mom-donate
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1946087348
Root hash: 4f8d349ec664d2f37c1f8fbbd2af4f8705a4caecc7876b8bb8b4d14e30841412959f284b7fb3e3dc7010d1d044abb1dc
(root) ConsistencyTestingToolState / vast-need-crime-payment
0 SingletonNode PlatformStateService.PLATFORM_STATE /0 matter-execute-receive-hour
1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
3 StringLeaf 3993674496999233021 /3 soldier-spread-senior-chalk
4 StringLeaf 753 /4 where-version-civil-grape
node4 6m 1.974s 2025-10-08 17:15:33.046 966 DEBUG RECONNECT <<reconnect: reconnect-controller>> ReconnectStateLoader: `loadReconnectState` : reloading state
node4 6m 1.974s 2025-10-08 17:15:33.046 967 INFO STARTUP <<reconnect: reconnect-controller>> ConsistencyTestingToolState: State initialized with state long 3993674496999233021.
node4 6m 1.974s 2025-10-08 17:15:33.046 968 INFO STARTUP <<reconnect: reconnect-controller>> ConsistencyTestingToolState: State initialized with 753 rounds handled.
node4 6m 1.975s 2025-10-08 17:15:33.047 969 INFO STARTUP <<reconnect: reconnect-controller>> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 6m 1.975s 2025-10-08 17:15:33.047 970 INFO STARTUP <<reconnect: reconnect-controller>> TransactionHandlingHistory: Log file found. Parsing previous history
node4 6m 1.998s 2025-10-08 17:15:33.070 977 INFO STATE_TO_DISK <<reconnect: reconnect-controller>> DefaultSavedStateController: Signed state from round 753 created, will eventually be written to disk, for reason: RECONNECT
node4 6m 1.998s 2025-10-08 17:15:33.070 978 INFO PLATFORM_STATUS <platformForkJoinThread-8> StatusStateMachine: Platform spent 795.0 ms in BEHIND. Now in RECONNECT_COMPLETE
node4 6m 2.000s 2025-10-08 17:15:33.072 981 INFO STARTUP <platformForkJoinThread-3> Shadowgraph: Shadowgraph starting from expiration threshold 726
node4 6m 2.002s 2025-10-08 17:15:33.074 982 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 753 state to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/753
node4 6m 2.003s 2025-10-08 17:15:33.075 983 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for round 753
node4 6m 2.017s 2025-10-08 17:15:33.089 995 INFO EVENT_STREAM <<reconnect: reconnect-controller>> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 60511226ab8a020860b6aa06bb9f84aa78e25d077f135df4959f54f2e4a4a6c94ebfea579167aa68a2f0024120c67b49
node4 6m 2.018s 2025-10-08 17:15:33.090 996 INFO STARTUP <platformForkJoinThread-1> PcesFileManager: Due to recent operations on this node, the local preconsensus event stream will have a discontinuity. The last file with the old origin round is 2025-10-08T17+09+47.403893333Z_seq0_minr1_maxr364_orgn0.pces. All future files will have an origin round of 753.
node2 6m 2.042s 2025-10-08 17:15:33.114 8912 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectTeacher: Finished reconnect in the role of the sender. {"receiving":false,"nodeId":2,"otherNodeId":4,"round":753,"success":false} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload]
node4 6m 2.152s 2025-10-08 17:15:33.224 1017 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for round 753
node4 6m 2.155s 2025-10-08 17:15:33.227 1018 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 753
Timestamp: 2025-10-08T17:15:31.181753Z
Next consensus number: 23121
Legacy running event hash: 60511226ab8a020860b6aa06bb9f84aa78e25d077f135df4959f54f2e4a4a6c94ebfea579167aa68a2f0024120c67b49
Legacy running event mnemonic: any-dynamic-mom-donate
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1946087348
Root hash: 4f8d349ec664d2f37c1f8fbbd2af4f8705a4caecc7876b8bb8b4d14e30841412959f284b7fb3e3dc7010d1d044abb1dc
(root) ConsistencyTestingToolState / vast-need-crime-payment
0 SingletonNode PlatformStateService.PLATFORM_STATE /0 matter-execute-receive-hour
1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
3 StringLeaf 3993674496999233021 /3 soldier-spread-senior-chalk
4 StringLeaf 753 /4 where-version-civil-grape
node4 6m 2.181s 2025-10-08 17:15:33.253 1029 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ]
node4 6m 2.190s 2025-10-08 17:15:33.262 1031 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node4 6m 2.201s 2025-10-08 17:15:33.273 1032 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+09+47.403893333Z_seq0_minr1_maxr364_orgn0.pces
node4 6m 2.202s 2025-10-08 17:15:33.274 1033 WARN STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: No preconsensus event files meeting specified criteria found to copy. Lower bound: 726
node4 6m 2.209s 2025-10-08 17:15:33.281 1034 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 753 to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/753 {"round":753,"freezeState":false,"reason":"RECONNECT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/753/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 6m 2.213s 2025-10-08 17:15:33.285 1035 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 213.0 ms in RECONNECT_COMPLETE. Now in CHECKING
node0 6m 2.220s 2025-10-08 17:15:33.292 8767 INFO RECONNECT <<platform-core: SyncProtocolWith4 0 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=755,ancientThreshold=728,expiredThreshold=654] remote ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262]
node4 6m 2.291s 2025-10-08 17:15:33.363 1036 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262] remote ev=EventWindow[latestConsensusRound=755,ancientThreshold=728,expiredThreshold=654]
node4 6m 2.292s 2025-10-08 17:15:33.364 1037 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> RpcPeerHandler: Latest event window is not really falling behind, will retry sync local ev=EventWindow[latestConsensusRound=753,ancientThreshold=726,expiredThreshold=726] remote ev=EventWindow[latestConsensusRound=755,ancientThreshold=728,expiredThreshold=654]
node4 6m 3.153s 2025-10-08 17:15:34.225 1038 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:3 H:6812bb5a0da8 BR:751), num remaining: 3
node4 6m 3.154s 2025-10-08 17:15:34.226 1039 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:1 H:5c22205ab9ca BR:751), num remaining: 2
node4 6m 3.154s 2025-10-08 17:15:34.226 1040 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:0 H:8217726b149d BR:751), num remaining: 1
node4 6m 3.155s 2025-10-08 17:15:34.227 1041 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:2 H:6aedc5d74498 BR:751), num remaining: 0
node2 6m 4.103s 2025-10-08 17:15:35.175 8973 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=760,ancientThreshold=733,expiredThreshold=659] remote ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262]
node4 6m 4.173s 2025-10-08 17:15:35.245 1102 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=363,ancientThreshold=336,expiredThreshold=262] remote ev=EventWindow[latestConsensusRound=760,ancientThreshold=733,expiredThreshold=659]
node4 6m 4.174s 2025-10-08 17:15:35.246 1103 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> RpcPeerHandler: Latest event window is not really falling behind, will retry sync local ev=EventWindow[latestConsensusRound=760,ancientThreshold=733,expiredThreshold=726] remote ev=EventWindow[latestConsensusRound=760,ancientThreshold=733,expiredThreshold=659]
node4 6m 6.734s 2025-10-08 17:15:37.806 1171 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 4.5 s in CHECKING. Now in ACTIVE
node0 6m 30.324s 2025-10-08 17:16:01.396 9470 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 817 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 6m 30.333s 2025-10-08 17:16:01.405 1742 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 817 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 6m 30.347s 2025-10-08 17:16:01.419 9621 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 817 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 6m 30.347s 2025-10-08 17:16:01.419 9452 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 817 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 6m 30.427s 2025-10-08 17:16:01.499 9450 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 817 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 6m 30.511s 2025-10-08 17:16:01.583 9453 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 817 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/817
node1 6m 30.512s 2025-10-08 17:16:01.584 9454 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 817
node2 6m 30.581s 2025-10-08 17:16:01.653 9624 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 817 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/817
node2 6m 30.581s 2025-10-08 17:16:01.653 9625 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for round 817
node1 6m 30.596s 2025-10-08 17:16:01.668 9485 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 817
node1 6m 30.599s 2025-10-08 17:16:01.671 9486 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 817
Timestamp: 2025-10-08T17:16:00.451377174Z
Next consensus number: 25326
Legacy running event hash: 59d57536bf8e18f3861eff0ea2750f21f108806d806630a7a50a6255eb9c45555b00cc9713d7966132a479ec68811f3d
Legacy running event mnemonic: deny-plug-atom-bunker
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1347831166
Root hash: 0f8fb265a11268fe9276c2283f1ff9b4b8a0bf40061ec06a9812b5dc8a44b98dea341b2c050d38af4a635e0660ad2f21
(root) ConsistencyTestingToolState / vehicle-recall-tape-cabin
0 SingletonNode PlatformStateService.PLATFORM_STATE /0 rhythm-this-gasp-shoulder
1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
3 StringLeaf -1133748825484110249 /3 cloud-flag-castle-prepare
4 StringLeaf 817 /4 engine-mountain-face-festival
node1 6m 30.608s 2025-10-08 17:16:01.680 9487 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+09+47.352109665Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+13+42.628497894Z_seq1_minr474_maxr5474_orgn0.pces
node1 6m 30.609s 2025-10-08 17:16:01.681 9488 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 790
File: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+13+42.628497894Z_seq1_minr474_maxr5474_orgn0.pces
node1 6m 30.609s 2025-10-08 17:16:01.681 9489 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 6m 30.615s 2025-10-08 17:16:01.687 9490 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 6m 30.615s 2025-10-08 17:16:01.687 9491 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 817 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/817 {"round":817,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/817/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 6m 30.617s 2025-10-08 17:16:01.689 9492 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/150
node2 6m 30.656s 2025-10-08 17:16:01.728 9656 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for round 817
node2 6m 30.658s 2025-10-08 17:16:01.730 9657 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 817
Timestamp: 2025-10-08T17:16:00.451377174Z
Next consensus number: 25326
Legacy running event hash: 59d57536bf8e18f3861eff0ea2750f21f108806d806630a7a50a6255eb9c45555b00cc9713d7966132a479ec68811f3d
Legacy running event mnemonic: deny-plug-atom-bunker
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1347831166
Root hash: 0f8fb265a11268fe9276c2283f1ff9b4b8a0bf40061ec06a9812b5dc8a44b98dea341b2c050d38af4a635e0660ad2f21
(root) ConsistencyTestingToolState / vehicle-recall-tape-cabin
0 SingletonNode PlatformStateService.PLATFORM_STATE /0 rhythm-this-gasp-shoulder
1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
3 StringLeaf -1133748825484110249 /3 cloud-flag-castle-prepare
4 StringLeaf 817 /4 engine-mountain-face-festival
node2 6m 30.664s 2025-10-08 17:16:01.736 9658 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+13+42.535796006Z_seq1_minr474_maxr5474_orgn0.pces
Last file: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+09+47.471354396Z_seq0_minr1_maxr501_orgn0.pces
node2 6m 30.664s 2025-10-08 17:16:01.736 9659 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 790
File: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+13+42.535796006Z_seq1_minr474_maxr5474_orgn0.pces
node2 6m 30.664s 2025-10-08 17:16:01.736 9660 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 6m 30.669s 2025-10-08 17:16:01.741 1745 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 817 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/817
node2 6m 30.670s 2025-10-08 17:16:01.742 9669 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 6m 30.670s 2025-10-08 17:16:01.742 9670 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 817 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/817 {"round":817,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/817/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 6m 30.670s 2025-10-08 17:16:01.742 1746 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for round 817
node2 6m 30.671s 2025-10-08 17:16:01.743 9671 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/150
node3 6m 30.672s 2025-10-08 17:16:01.744 9455 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 817 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/817
node3 6m 30.673s 2025-10-08 17:16:01.745 9456 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 817
node0 6m 30.687s 2025-10-08 17:16:01.759 9473 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 817 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/817
node0 6m 30.687s 2025-10-08 17:16:01.759 9474 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 817
node3 6m 30.749s 2025-10-08 17:16:01.821 9487 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 817
node3 6m 30.751s 2025-10-08 17:16:01.823 9488 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 817
Timestamp: 2025-10-08T17:16:00.451377174Z
Next consensus number: 25326
Legacy running event hash: 59d57536bf8e18f3861eff0ea2750f21f108806d806630a7a50a6255eb9c45555b00cc9713d7966132a479ec68811f3d
Legacy running event mnemonic: deny-plug-atom-bunker
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1347831166
Root hash: 0f8fb265a11268fe9276c2283f1ff9b4b8a0bf40061ec06a9812b5dc8a44b98dea341b2c050d38af4a635e0660ad2f21
(root) ConsistencyTestingToolState / vehicle-recall-tape-cabin
0 SingletonNode PlatformStateService.PLATFORM_STATE /0 rhythm-this-gasp-shoulder
1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
3 StringLeaf -1133748825484110249 /3 cloud-flag-castle-prepare
4 StringLeaf 817 /4 engine-mountain-face-festival
node3 6m 30.759s 2025-10-08 17:16:01.831 9489 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+09+47.613526995Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+13+42.625019077Z_seq1_minr473_maxr5473_orgn0.pces
node3 6m 30.759s 2025-10-08 17:16:01.831 9490 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 790
File: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+13+42.625019077Z_seq1_minr473_maxr5473_orgn0.pces
node3 6m 30.759s 2025-10-08 17:16:01.831 9491 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 6m 30.764s 2025-10-08 17:16:01.836 9505 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for round 817
node3 6m 30.765s 2025-10-08 17:16:01.837 9492 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 6m 30.766s 2025-10-08 17:16:01.838 9509 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 817
Timestamp: 2025-10-08T17:16:00.451377174Z
Next consensus number: 25326
Legacy running event hash: 59d57536bf8e18f3861eff0ea2750f21f108806d806630a7a50a6255eb9c45555b00cc9713d7966132a479ec68811f3d
Legacy running event mnemonic: deny-plug-atom-bunker
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1347831166
Root hash: 0f8fb265a11268fe9276c2283f1ff9b4b8a0bf40061ec06a9812b5dc8a44b98dea341b2c050d38af4a635e0660ad2f21
(root) ConsistencyTestingToolState / vehicle-recall-tape-cabin
0 SingletonNode PlatformStateService.PLATFORM_STATE /0 rhythm-this-gasp-shoulder
1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
3 StringLeaf -1133748825484110249 /3 cloud-flag-castle-prepare
4 StringLeaf 817 /4 engine-mountain-face-festival
node3 6m 30.766s 2025-10-08 17:16:01.838 9493 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 817 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/817 {"round":817,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/817/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 6m 30.767s 2025-10-08 17:16:01.839 9494 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/150
node0 6m 30.775s 2025-10-08 17:16:01.847 9510 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+09+47.236371345Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+13+42.655220797Z_seq1_minr474_maxr5474_orgn0.pces
node0 6m 30.776s 2025-10-08 17:16:01.848 9511 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 790
File: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+13+42.655220797Z_seq1_minr474_maxr5474_orgn0.pces
node0 6m 30.776s 2025-10-08 17:16:01.848 9512 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 6m 30.781s 2025-10-08 17:16:01.853 9513 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 6m 30.782s 2025-10-08 17:16:01.854 9514 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 817 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/817 {"round":817,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/817/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 6m 30.783s 2025-10-08 17:16:01.855 9515 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/150
node4 6m 30.791s 2025-10-08 17:16:01.863 1788 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for round 817
node4 6m 30.793s 2025-10-08 17:16:01.865 1789 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 817
Timestamp: 2025-10-08T17:16:00.451377174Z
Next consensus number: 25326
Legacy running event hash: 59d57536bf8e18f3861eff0ea2750f21f108806d806630a7a50a6255eb9c45555b00cc9713d7966132a479ec68811f3d
Legacy running event mnemonic: deny-plug-atom-bunker
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1347831166
Root hash: 0f8fb265a11268fe9276c2283f1ff9b4b8a0bf40061ec06a9812b5dc8a44b98dea341b2c050d38af4a635e0660ad2f21
(root) ConsistencyTestingToolState / vehicle-recall-tape-cabin
0 SingletonNode PlatformStateService.PLATFORM_STATE /0 rhythm-this-gasp-shoulder
1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
3 StringLeaf -1133748825484110249 /3 cloud-flag-castle-prepare
4 StringLeaf 817 /4 engine-mountain-face-festival
node4 6m 30.803s 2025-10-08 17:16:01.875 1790 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+15+33.658450851Z_seq1_minr726_maxr1226_orgn753.pces
Last file: data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+09+47.403893333Z_seq0_minr1_maxr364_orgn0.pces
node4 6m 30.804s 2025-10-08 17:16:01.876 1791 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 790
File: data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+15+33.658450851Z_seq1_minr726_maxr1226_orgn753.pces
node4 6m 30.804s 2025-10-08 17:16:01.876 1792 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 6m 30.808s 2025-10-08 17:16:01.880 1793 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 6m 30.809s 2025-10-08 17:16:01.881 1794 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 817 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/817 {"round":817,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/817/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 6m 30.811s 2025-10-08 17:16:01.883 1795 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1
node2 7m 30.115s 2025-10-08 17:17:01.187 11102 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 945 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 7m 30.116s 2025-10-08 17:17:01.188 10909 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 945 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 7m 30.157s 2025-10-08 17:17:01.229 10913 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 945 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 7m 30.166s 2025-10-08 17:17:01.238 10947 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 945 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 7m 30.183s 2025-10-08 17:17:01.255 3210 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 945 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 7m 30.297s 2025-10-08 17:17:01.369 10950 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 945 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/945
node0 7m 30.298s 2025-10-08 17:17:01.370 10951 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 945
node1 7m 30.306s 2025-10-08 17:17:01.378 10912 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 945 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/945
node1 7m 30.306s 2025-10-08 17:17:01.378 10913 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 945
node0 7m 30.375s 2025-10-08 17:17:01.447 10982 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 945
node0 7m 30.377s 2025-10-08 17:17:01.449 10983 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 945
Timestamp: 2025-10-08T17:17:00.188021047Z
Next consensus number: 30105
Legacy running event hash: 5e0f8486ce182cc5b7d2db4b75dbabb607cdb10508250ec16b644c30f31265337867505408e660dd3c034cca705a6eaa
Legacy running event mnemonic: sea-hungry-dial-obvious
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -30159306
Root hash: abe14c3171c1aed02c7a87f36f2113f4716035024acfdd8faa07c9451260d8471f32c3c9afdeb2292c71e4dd953071e1
(root) ConsistencyTestingToolState / loop-mail-deal-champion
0 SingletonNode PlatformStateService.PLATFORM_STATE /0 claw-round-plate-during
1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
3 StringLeaf 7960901440328422004 /3 harsh-swamp-another-shift
4 StringLeaf 945 /4 buzz-hybrid-sea-market
node1 7m 30.383s 2025-10-08 17:17:01.455 10944 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 945
node0 7m 30.384s 2025-10-08 17:17:01.456 10984 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+09+47.236371345Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+13+42.655220797Z_seq1_minr474_maxr5474_orgn0.pces
node0 7m 30.384s 2025-10-08 17:17:01.456 10985 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 918
File: data/saved/preconsensus-events/0/2025/10/08/2025-10-08T17+13+42.655220797Z_seq1_minr474_maxr5474_orgn0.pces
node0 7m 30.384s 2025-10-08 17:17:01.456 10986 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 7m 30.384s 2025-10-08 17:17:01.456 11105 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 945 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/945
node1 7m 30.385s 2025-10-08 17:17:01.457 10945 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 945
Timestamp: 2025-10-08T17:17:00.188021047Z
Next consensus number: 30105
Legacy running event hash: 5e0f8486ce182cc5b7d2db4b75dbabb607cdb10508250ec16b644c30f31265337867505408e660dd3c034cca705a6eaa
Legacy running event mnemonic: sea-hungry-dial-obvious
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -30159306
Root hash: abe14c3171c1aed02c7a87f36f2113f4716035024acfdd8faa07c9451260d8471f32c3c9afdeb2292c71e4dd953071e1
(root) ConsistencyTestingToolState / loop-mail-deal-champion
0 SingletonNode PlatformStateService.PLATFORM_STATE /0 claw-round-plate-during
1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
3 StringLeaf 7960901440328422004 /3 harsh-swamp-another-shift
4 StringLeaf 945 /4 buzz-hybrid-sea-market
node2 7m 30.385s 2025-10-08 17:17:01.457 11106 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/59 for round 945
node1 7m 30.392s 2025-10-08 17:17:01.464 10946 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+09+47.352109665Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+13+42.628497894Z_seq1_minr474_maxr5474_orgn0.pces
node1 7m 30.392s 2025-10-08 17:17:01.464 10947 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 918
File: data/saved/preconsensus-events/1/2025/10/08/2025-10-08T17+13+42.628497894Z_seq1_minr474_maxr5474_orgn0.pces
node1 7m 30.392s 2025-10-08 17:17:01.464 10948 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 7m 30.393s 2025-10-08 17:17:01.465 10987 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 7m 30.394s 2025-10-08 17:17:01.466 10988 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 945 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/945 {"round":945,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/945/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 7m 30.395s 2025-10-08 17:17:01.467 10989 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/275
node4 7m 30.396s 2025-10-08 17:17:01.468 3213 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 945 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/945
node4 7m 30.396s 2025-10-08 17:17:01.468 3214 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/17 for round 945
node1 7m 30.401s 2025-10-08 17:17:01.473 10949 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 7m 30.402s 2025-10-08 17:17:01.474 10950 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 945 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/945 {"round":945,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/945/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 7m 30.403s 2025-10-08 17:17:01.475 10951 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/275
node2 7m 30.460s 2025-10-08 17:17:01.532 11137 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/59 for round 945
node2 7m 30.462s 2025-10-08 17:17:01.534 11138 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 945
Timestamp: 2025-10-08T17:17:00.188021047Z
Next consensus number: 30105
Legacy running event hash: 5e0f8486ce182cc5b7d2db4b75dbabb607cdb10508250ec16b644c30f31265337867505408e660dd3c034cca705a6eaa
Legacy running event mnemonic: sea-hungry-dial-obvious
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -30159306
Root hash: abe14c3171c1aed02c7a87f36f2113f4716035024acfdd8faa07c9451260d8471f32c3c9afdeb2292c71e4dd953071e1
(root) ConsistencyTestingToolState / loop-mail-deal-champion
0 SingletonNode PlatformStateService.PLATFORM_STATE /0 claw-round-plate-during
1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
3 StringLeaf 7960901440328422004 /3 harsh-swamp-another-shift
4 StringLeaf 945 /4 buzz-hybrid-sea-market
node2 7m 30.470s 2025-10-08 17:17:01.542 11147 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+13+42.535796006Z_seq1_minr474_maxr5474_orgn0.pces
Last file: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+09+47.471354396Z_seq0_minr1_maxr501_orgn0.pces
node2 7m 30.470s 2025-10-08 17:17:01.542 11148 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 918
File: data/saved/preconsensus-events/2/2025/10/08/2025-10-08T17+13+42.535796006Z_seq1_minr474_maxr5474_orgn0.pces
node2 7m 30.470s 2025-10-08 17:17:01.542 11149 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 7m 30.480s 2025-10-08 17:17:01.552 11150 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 7m 30.481s 2025-10-08 17:17:01.553 11151 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 945 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/945 {"round":945,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/945/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 7m 30.483s 2025-10-08 17:17:01.555 11152 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/275
node3 7m 30.515s 2025-10-08 17:17:01.587 10926 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 945 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/945
node3 7m 30.515s 2025-10-08 17:17:01.587 10927 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 945
node4 7m 30.530s 2025-10-08 17:17:01.602 3248 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/17 for round 945
node4 7m 30.533s 2025-10-08 17:17:01.605 3249 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 945
Timestamp: 2025-10-08T17:17:00.188021047Z
Next consensus number: 30105
Legacy running event hash: 5e0f8486ce182cc5b7d2db4b75dbabb607cdb10508250ec16b644c30f31265337867505408e660dd3c034cca705a6eaa
Legacy running event mnemonic: sea-hungry-dial-obvious
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -30159306
Root hash: abe14c3171c1aed02c7a87f36f2113f4716035024acfdd8faa07c9451260d8471f32c3c9afdeb2292c71e4dd953071e1
(root) ConsistencyTestingToolState / loop-mail-deal-champion
0 SingletonNode PlatformStateService.PLATFORM_STATE /0 claw-round-plate-during
1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
3 StringLeaf 7960901440328422004 /3 harsh-swamp-another-shift
4 StringLeaf 945 /4 buzz-hybrid-sea-market
node4 7m 30.541s 2025-10-08 17:17:01.613 3250 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+15+33.658450851Z_seq1_minr726_maxr1226_orgn753.pces
Last file: data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+09+47.403893333Z_seq0_minr1_maxr364_orgn0.pces
node4 7m 30.541s 2025-10-08 17:17:01.613 3251 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 918
File: data/saved/preconsensus-events/4/2025/10/08/2025-10-08T17+15+33.658450851Z_seq1_minr726_maxr1226_orgn753.pces
node4 7m 30.541s 2025-10-08 17:17:01.613 3252 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 7m 30.546s 2025-10-08 17:17:01.618 3253 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 7m 30.547s 2025-10-08 17:17:01.619 3254 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 945 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/945 {"round":945,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/945/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 7m 30.548s 2025-10-08 17:17:01.620 3255 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/24
node3 7m 30.596s 2025-10-08 17:17:01.668 10961 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> MerkleTreeSnapshotWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for round 945
node3 7m 30.598s 2025-10-08 17:17:01.670 10962 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 945
Timestamp: 2025-10-08T17:17:00.188021047Z
Next consensus number: 30105
Legacy running event hash: 5e0f8486ce182cc5b7d2db4b75dbabb607cdb10508250ec16b644c30f31265337867505408e660dd3c034cca705a6eaa
Legacy running event mnemonic: sea-hungry-dial-obvious
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -30159306
Root hash: abe14c3171c1aed02c7a87f36f2113f4716035024acfdd8faa07c9451260d8471f32c3c9afdeb2292c71e4dd953071e1
(root) ConsistencyTestingToolState / loop-mail-deal-champion
0 SingletonNode PlatformStateService.PLATFORM_STATE /0 claw-round-plate-during
1 SingletonNode RosterService.ROSTER_STATE /1 tumble-install-behave-ticket
2 VirtualMap RosterService.ROSTERS /2 soap-hamster-shed-draft
3 StringLeaf 7960901440328422004 /3 harsh-swamp-another-shift
4 StringLeaf 945 /4 buzz-hybrid-sea-market
node3 7m 30.605s 2025-10-08 17:17:01.677 10963 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+09+47.613526995Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+13+42.625019077Z_seq1_minr473_maxr5473_orgn0.pces
node3 7m 30.605s 2025-10-08 17:17:01.677 10964 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 918
File: data/saved/preconsensus-events/3/2025/10/08/2025-10-08T17+13+42.625019077Z_seq1_minr473_maxr5473_orgn0.pces
node3 7m 30.606s 2025-10-08 17:17:01.678 10965 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 7m 30.615s 2025-10-08 17:17:01.687 10966 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 7m 30.615s 2025-10-08 17:17:01.687 10967 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 945 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/945 {"round":945,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/945/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 7m 30.617s 2025-10-08 17:17:01.689 10968 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/275