node4 0.000ns 2025-12-03 15:25:18.450 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node4 92.000ms 2025-12-03 15:25:18.542 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node4 108.000ms 2025-12-03 15:25:18.558 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node0 185.000ms 2025-12-03 15:25:18.635 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node4 221.000ms 2025-12-03 15:25:18.671 4 INFO STARTUP <main> Browser: The following nodes [4] are set to run locally
node4 249.000ms 2025-12-03 15:25:18.699 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node3 255.000ms 2025-12-03 15:25:18.705 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node0 281.000ms 2025-12-03 15:25:18.731 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node0 298.000ms 2025-12-03 15:25:18.748 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node3 354.000ms 2025-12-03 15:25:18.804 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node3 374.000ms 2025-12-03 15:25:18.824 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node0 413.000ms 2025-12-03 15:25:18.863 4 INFO STARTUP <main> Browser: The following nodes [0] are set to run locally
node0 440.000ms 2025-12-03 15:25:18.890 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node1 476.000ms 2025-12-03 15:25:18.926 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node3 489.000ms 2025-12-03 15:25:18.939 4 INFO STARTUP <main> Browser: The following nodes [3] are set to run locally
node3 517.000ms 2025-12-03 15:25:18.967 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node2 552.000ms 2025-12-03 15:25:19.002 1 INFO STARTUP <main> StaticPlatformBuilder:
//////////////////////
// Node is Starting //
//////////////////////
node1 568.000ms 2025-12-03 15:25:19.018 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node1 585.000ms 2025-12-03 15:25:19.035 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 649.000ms 2025-12-03 15:25:19.099 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node2 668.000ms 2025-12-03 15:25:19.118 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node1 699.000ms 2025-12-03 15:25:19.149 4 INFO STARTUP <main> Browser: The following nodes [1] are set to run locally
node1 726.000ms 2025-12-03 15:25:19.176 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node2 792.000ms 2025-12-03 15:25:19.242 4 INFO STARTUP <main> Browser: The following nodes [2] are set to run locally
node2 822.000ms 2025-12-03 15:25:19.272 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node4 1.510s 2025-12-03 15:25:19.960 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1261ms
node4 1.520s 2025-12-03 15:25:19.970 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node4 1.523s 2025-12-03 15:25:19.973 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 1.563s 2025-12-03 15:25:20.013 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node0 1.611s 2025-12-03 15:25:20.061 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1170ms
node0 1.623s 2025-12-03 15:25:20.073 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node4 1.626s 2025-12-03 15:25:20.076 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node4 1.626s 2025-12-03 15:25:20.076 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node0 1.627s 2025-12-03 15:25:20.077 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node0 1.677s 2025-12-03 15:25:20.127 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node0 1.743s 2025-12-03 15:25:20.193 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node0 1.744s 2025-12-03 15:25:20.194 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node3 1.774s 2025-12-03 15:25:20.224 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1255ms
node3 1.788s 2025-12-03 15:25:20.238 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node3 1.792s 2025-12-03 15:25:20.242 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node3 1.843s 2025-12-03 15:25:20.293 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node3 1.907s 2025-12-03 15:25:20.357 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node3 1.907s 2025-12-03 15:25:20.357 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node1 2.219s 2025-12-03 15:25:20.669 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1492ms
node1 2.229s 2025-12-03 15:25:20.679 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node1 2.232s 2025-12-03 15:25:20.682 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 2.271s 2025-12-03 15:25:20.721 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1448ms
node1 2.273s 2025-12-03 15:25:20.723 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node2 2.281s 2025-12-03 15:25:20.731 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node2 2.284s 2025-12-03 15:25:20.734 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 2.320s 2025-12-03 15:25:20.770 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node1 2.338s 2025-12-03 15:25:20.788 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node1 2.339s 2025-12-03 15:25:20.789 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node2 2.394s 2025-12-03 15:25:20.844 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node2 2.396s 2025-12-03 15:25:20.846 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node4 2.445s 2025-12-03 15:25:20.895 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node4 2.532s 2025-12-03 15:25:20.982 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 2.534s 2025-12-03 15:25:20.984 16 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node4 2.571s 2025-12-03 15:25:21.021 21 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node0 2.582s 2025-12-03 15:25:21.032 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node0 2.684s 2025-12-03 15:25:21.134 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 2.687s 2025-12-03 15:25:21.137 16 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node0 2.726s 2025-12-03 15:25:21.176 21 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node3 2.752s 2025-12-03 15:25:21.202 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node3 2.851s 2025-12-03 15:25:21.301 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 2.854s 2025-12-03 15:25:21.304 16 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node3 2.889s 2025-12-03 15:25:21.339 21 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node2 3.200s 2025-12-03 15:25:21.650 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node1 3.204s 2025-12-03 15:25:21.654 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node1 3.297s 2025-12-03 15:25:21.747 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 3.299s 2025-12-03 15:25:21.749 16 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node2 3.301s 2025-12-03 15:25:21.751 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 3.304s 2025-12-03 15:25:21.754 16 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node1 3.334s 2025-12-03 15:25:21.784 21 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node2 3.346s 2025-12-03 15:25:21.796 21 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 3.349s 2025-12-03 15:25:21.799 24 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 3.352s 2025-12-03 15:25:21.802 27 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node4 3.358s 2025-12-03 15:25:21.808 28 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node4 3.368s 2025-12-03 15:25:21.818 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 3.371s 2025-12-03 15:25:21.821 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 3.527s 2025-12-03 15:25:21.977 24 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 3.529s 2025-12-03 15:25:21.979 27 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node0 3.536s 2025-12-03 15:25:21.986 28 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node0 3.546s 2025-12-03 15:25:21.996 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 3.549s 2025-12-03 15:25:21.999 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 3.686s 2025-12-03 15:25:22.136 24 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 3.688s 2025-12-03 15:25:22.138 27 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node3 3.693s 2025-12-03 15:25:22.143 28 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node3 3.702s 2025-12-03 15:25:22.152 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 3.704s 2025-12-03 15:25:22.154 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 4.147s 2025-12-03 15:25:22.597 24 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 4.149s 2025-12-03 15:25:22.599 26 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node1 4.156s 2025-12-03 15:25:22.606 28 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node1 4.166s 2025-12-03 15:25:22.616 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 4.168s 2025-12-03 15:25:22.618 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 4.176s 2025-12-03 15:25:22.626 24 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 4.178s 2025-12-03 15:25:22.628 27 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node2 4.184s 2025-12-03 15:25:22.634 28 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node2 4.193s 2025-12-03 15:25:22.643 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 4.196s 2025-12-03 15:25:22.646 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 4.491s 2025-12-03 15:25:22.941 31 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26147800]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=237420, randomLong=-7069144833788641633, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=12890, randomLong=5712848365011244860, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1342900, data=35, exception=null]
OS Health Check Report - Complete (took 1026 ms)
node4 4.524s 2025-12-03 15:25:22.974 32 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node4 4.532s 2025-12-03 15:25:22.982 33 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node4 4.535s 2025-12-03 15:25:22.985 34 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node4 4.627s 2025-12-03 15:25:23.077 35 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "Iq1XLQ==", "port": 30124 }, { "ipAddressV4": "CoAAJw==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "Ih1K7w==", "port": 30125 }, { "ipAddressV4": "CoAAJQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "I96g/w==", "port": 30126 }, { "ipAddressV4": "CoAAIw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "IkUG8w==", "port": 30127 }, { "ipAddressV4": "CoAAJA==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "IodiNg==", "port": 30128 }, { "ipAddressV4": "CoAAKA==", "port": 30128 }] }] }
node4 4.651s 2025-12-03 15:25:23.101 36 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 4.651s 2025-12-03 15:25:23.101 37 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node0 4.666s 2025-12-03 15:25:23.116 31 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26317710]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=181780, randomLong=1621048891313354769, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=13740, randomLong=3688232686123109440, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1160339, data=35, exception=null]
OS Health Check Report - Complete (took 1024 ms)
node4 4.667s 2025-12-03 15:25:23.117 38 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 7f1157959f5eab45004a5688338a3baacfece4fccabeb1e5c23cc0f738db5630d9ffa7021eace62c0b9dfc78e7f05ce2 (root) VirtualMap state / timber-verify-begin-number
{"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":2,"lastLeafPath":4},"Singletons":{"RosterService.ROSTER_STATE":{"path":2,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":3,"mnemonic":"normal-stage-book-frozen"}}}
node4 4.670s 2025-12-03 15:25:23.120 40 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node0 4.701s 2025-12-03 15:25:23.151 32 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node0 4.709s 2025-12-03 15:25:23.159 33 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node0 4.711s 2025-12-03 15:25:23.161 34 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node0 4.801s 2025-12-03 15:25:23.251 35 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "Iq1XLQ==", "port": 30124 }, { "ipAddressV4": "CoAAJw==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "Ih1K7w==", "port": 30125 }, { "ipAddressV4": "CoAAJQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "I96g/w==", "port": 30126 }, { "ipAddressV4": "CoAAIw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "IkUG8w==", "port": 30127 }, { "ipAddressV4": "CoAAJA==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "IodiNg==", "port": 30128 }, { "ipAddressV4": "CoAAKA==", "port": 30128 }] }] }
node3 4.813s 2025-12-03 15:25:23.263 31 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26321044]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=252050, randomLong=8125120232767466009, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=9740, randomLong=8863033288669374366, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1240550, data=35, exception=null]
OS Health Check Report - Complete (took 1022 ms)
node0 4.826s 2025-12-03 15:25:23.276 36 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/0/ConsistencyTestLog.csv
node0 4.827s 2025-12-03 15:25:23.277 37 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node0 4.842s 2025-12-03 15:25:23.292 38 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 7f1157959f5eab45004a5688338a3baacfece4fccabeb1e5c23cc0f738db5630d9ffa7021eace62c0b9dfc78e7f05ce2 (root) VirtualMap state / timber-verify-begin-number
{"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":2,"lastLeafPath":4},"Singletons":{"RosterService.ROSTER_STATE":{"path":2,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":3,"mnemonic":"normal-stage-book-frozen"}}}
node3 4.842s 2025-12-03 15:25:23.292 32 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node0 4.846s 2025-12-03 15:25:23.296 40 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node3 4.850s 2025-12-03 15:25:23.300 33 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node3 4.851s 2025-12-03 15:25:23.301 34 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node4 4.867s 2025-12-03 15:25:23.317 41 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node4 4.872s 2025-12-03 15:25:23.322 42 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node4 4.876s 2025-12-03 15:25:23.326 43 INFO STARTUP <<start-node-4>> ConsistencyTestingToolMain: init called in Main for node 4.
node4 4.877s 2025-12-03 15:25:23.327 44 INFO STARTUP <<start-node-4>> SwirldsPlatform: Starting platform 4
node4 4.879s 2025-12-03 15:25:23.329 45 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node4 4.882s 2025-12-03 15:25:23.332 46 INFO STARTUP <<start-node-4>> CycleFinder: No cyclical back pressure detected in wiring model.
node4 4.883s 2025-12-03 15:25:23.333 47 INFO STARTUP <<start-node-4>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node4 4.883s 2025-12-03 15:25:23.333 48 INFO STARTUP <<start-node-4>> InputWireChecks: All input wires have been bound.
node4 4.885s 2025-12-03 15:25:23.335 49 WARN STARTUP <<start-node-4>> PcesFileTracker: No preconsensus event files available
node4 4.885s 2025-12-03 15:25:23.335 50 INFO STARTUP <<start-node-4>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node4 4.887s 2025-12-03 15:25:23.337 51 INFO STARTUP <<start-node-4>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node4 4.888s 2025-12-03 15:25:23.338 52 INFO STARTUP <<app: appMain 4>> ConsistencyTestingToolMain: run called in Main.
node4 4.890s 2025-12-03 15:25:23.340 53 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 164.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node4 4.894s 2025-12-03 15:25:23.344 54 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 3.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node3 4.939s 2025-12-03 15:25:23.389 35 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "Iq1XLQ==", "port": 30124 }, { "ipAddressV4": "CoAAJw==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "Ih1K7w==", "port": 30125 }, { "ipAddressV4": "CoAAJQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "I96g/w==", "port": 30126 }, { "ipAddressV4": "CoAAIw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "IkUG8w==", "port": 30127 }, { "ipAddressV4": "CoAAJA==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "IodiNg==", "port": 30128 }, { "ipAddressV4": "CoAAKA==", "port": 30128 }] }] }
node3 4.962s 2025-12-03 15:25:23.412 36 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/3/ConsistencyTestLog.csv
node3 4.963s 2025-12-03 15:25:23.413 37 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node3 4.978s 2025-12-03 15:25:23.428 38 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 7f1157959f5eab45004a5688338a3baacfece4fccabeb1e5c23cc0f738db5630d9ffa7021eace62c0b9dfc78e7f05ce2 (root) VirtualMap state / timber-verify-begin-number
{"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":2,"lastLeafPath":4},"Singletons":{"RosterService.ROSTER_STATE":{"path":2,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":3,"mnemonic":"normal-stage-book-frozen"}}}
node3 4.981s 2025-12-03 15:25:23.431 40 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node0 5.088s 2025-12-03 15:25:23.538 41 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node0 5.094s 2025-12-03 15:25:23.544 42 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node0 5.099s 2025-12-03 15:25:23.549 43 INFO STARTUP <<start-node-0>> ConsistencyTestingToolMain: init called in Main for node 0.
node0 5.100s 2025-12-03 15:25:23.550 44 INFO STARTUP <<start-node-0>> SwirldsPlatform: Starting platform 0
node0 5.101s 2025-12-03 15:25:23.551 45 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node0 5.104s 2025-12-03 15:25:23.554 46 INFO STARTUP <<start-node-0>> CycleFinder: No cyclical back pressure detected in wiring model.
node0 5.105s 2025-12-03 15:25:23.555 47 INFO STARTUP <<start-node-0>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node0 5.106s 2025-12-03 15:25:23.556 48 INFO STARTUP <<start-node-0>> InputWireChecks: All input wires have been bound.
node0 5.108s 2025-12-03 15:25:23.558 49 WARN STARTUP <<start-node-0>> PcesFileTracker: No preconsensus event files available
node0 5.109s 2025-12-03 15:25:23.559 50 INFO STARTUP <<start-node-0>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node0 5.110s 2025-12-03 15:25:23.560 51 INFO STARTUP <<start-node-0>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node0 5.111s 2025-12-03 15:25:23.561 52 INFO STARTUP <<app: appMain 0>> ConsistencyTestingToolMain: run called in Main.
node0 5.113s 2025-12-03 15:25:23.563 53 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 205.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node0 5.118s 2025-12-03 15:25:23.568 54 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 5.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node3 5.176s 2025-12-03 15:25:23.626 41 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node3 5.181s 2025-12-03 15:25:23.631 42 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node3 5.186s 2025-12-03 15:25:23.636 43 INFO STARTUP <<start-node-3>> ConsistencyTestingToolMain: init called in Main for node 3.
node3 5.186s 2025-12-03 15:25:23.636 44 INFO STARTUP <<start-node-3>> SwirldsPlatform: Starting platform 3
node3 5.187s 2025-12-03 15:25:23.637 45 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node3 5.191s 2025-12-03 15:25:23.641 46 INFO STARTUP <<start-node-3>> CycleFinder: No cyclical back pressure detected in wiring model.
node3 5.192s 2025-12-03 15:25:23.642 47 INFO STARTUP <<start-node-3>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node3 5.192s 2025-12-03 15:25:23.642 48 INFO STARTUP <<start-node-3>> InputWireChecks: All input wires have been bound.
node3 5.194s 2025-12-03 15:25:23.644 49 WARN STARTUP <<start-node-3>> PcesFileTracker: No preconsensus event files available
node3 5.194s 2025-12-03 15:25:23.644 50 INFO STARTUP <<start-node-3>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node3 5.196s 2025-12-03 15:25:23.646 51 INFO STARTUP <<start-node-3>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node3 5.197s 2025-12-03 15:25:23.647 52 INFO STARTUP <<app: appMain 3>> ConsistencyTestingToolMain: run called in Main.
node3 5.199s 2025-12-03 15:25:23.649 53 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 164.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node3 5.204s 2025-12-03 15:25:23.654 54 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node1 5.308s 2025-12-03 15:25:23.758 31 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26142181]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=258060, randomLong=-4262644805972953390, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=8090, randomLong=5970796570086222788, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1314390, data=35, exception=null]
OS Health Check Report - Complete (took 1025 ms)
node1 5.339s 2025-12-03 15:25:23.789 32 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node2 5.344s 2025-12-03 15:25:23.794 31 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26142971]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=264009, randomLong=3635323156934469678, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=10450, randomLong=-3867129321559844761, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1486085, data=35, exception=null]
OS Health Check Report - Complete (took 1028 ms)
node1 5.347s 2025-12-03 15:25:23.797 33 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node1 5.349s 2025-12-03 15:25:23.799 34 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node2 5.379s 2025-12-03 15:25:23.829 32 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node2 5.388s 2025-12-03 15:25:23.838 33 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node2 5.390s 2025-12-03 15:25:23.840 34 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node1 5.438s 2025-12-03 15:25:23.888 35 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "Iq1XLQ==", "port": 30124 }, { "ipAddressV4": "CoAAJw==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "Ih1K7w==", "port": 30125 }, { "ipAddressV4": "CoAAJQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "I96g/w==", "port": 30126 }, { "ipAddressV4": "CoAAIw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "IkUG8w==", "port": 30127 }, { "ipAddressV4": "CoAAJA==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "IodiNg==", "port": 30128 }, { "ipAddressV4": "CoAAKA==", "port": 30128 }] }] }
node1 5.461s 2025-12-03 15:25:23.911 36 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/1/ConsistencyTestLog.csv
node1 5.462s 2025-12-03 15:25:23.912 37 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node1 5.478s 2025-12-03 15:25:23.928 38 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0
Timestamp: 1970-01-01T00:00:00Z
Next consensus number: 0
Legacy running event hash: null
Legacy running event mnemonic: null
Rounds non-ancient: 0
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1
Root hash: 7f1157959f5eab45004a5688338a3baacfece4fccabeb1e5c23cc0f738db5630d9ffa7021eace62c0b9dfc78e7f05ce2 (root) VirtualMap state / timber-verify-begin-number
{"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":2,"lastLeafPath":4},"Singletons":{"RosterService.ROSTER_STATE":{"path":2,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":3,"mnemonic":"normal-stage-book-frozen"}}}
node1 5.481s 2025-12-03 15:25:23.931 40 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node2 5.490s 2025-12-03 15:25:23.940 35 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "Iq1XLQ==", "port": 30124 }, { "ipAddressV4": "CoAAJw==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "Ih1K7w==", "port": 30125 }, { "ipAddressV4": "CoAAJQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "I96g/w==", "port": 30126 }, { "ipAddressV4": "CoAAIw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "IkUG8w==", "port": 30127 }, { "ipAddressV4": "CoAAJA==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "IodiNg==", "port": 30128 }, { "ipAddressV4": "CoAAKA==", "port": 30128 }] }] }
node2 5.517s 2025-12-03 15:25:23.967 36 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/2/ConsistencyTestLog.csv
node2 5.518s 2025-12-03 15:25:23.968 37 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node2 5.535s 2025-12-03 15:25:23.985 38 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 7f1157959f5eab45004a5688338a3baacfece4fccabeb1e5c23cc0f738db5630d9ffa7021eace62c0b9dfc78e7f05ce2 (root) VirtualMap state / timber-verify-begin-number {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":2,"lastLeafPath":4},"Singletons":{"RosterService.ROSTER_STATE":{"path":2,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":3,"mnemonic":"normal-stage-book-frozen"}}}
node2 5.538s 2025-12-03 15:25:23.988 40 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node1 5.717s 2025-12-03 15:25:24.167 41 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node1 5.723s 2025-12-03 15:25:24.173 42 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node1 5.728s 2025-12-03 15:25:24.178 43 INFO STARTUP <<start-node-1>> ConsistencyTestingToolMain: init called in Main for node 1.
node1 5.729s 2025-12-03 15:25:24.179 44 INFO STARTUP <<start-node-1>> SwirldsPlatform: Starting platform 1
node1 5.730s 2025-12-03 15:25:24.180 45 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node1 5.734s 2025-12-03 15:25:24.184 46 INFO STARTUP <<start-node-1>> CycleFinder: No cyclical back pressure detected in wiring model.
node1 5.735s 2025-12-03 15:25:24.185 47 INFO STARTUP <<start-node-1>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node1 5.735s 2025-12-03 15:25:24.185 48 INFO STARTUP <<start-node-1>> InputWireChecks: All input wires have been bound.
node1 5.737s 2025-12-03 15:25:24.187 49 WARN STARTUP <<start-node-1>> PcesFileTracker: No preconsensus event files available
node1 5.737s 2025-12-03 15:25:24.187 50 INFO STARTUP <<start-node-1>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node1 5.739s 2025-12-03 15:25:24.189 51 INFO STARTUP <<start-node-1>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node1 5.740s 2025-12-03 15:25:24.190 52 INFO STARTUP <<app: appMain 1>> ConsistencyTestingToolMain: run called in Main.
node1 5.744s 2025-12-03 15:25:24.194 53 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 208.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node1 5.751s 2025-12-03 15:25:24.201 54 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 5.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node2 5.758s 2025-12-03 15:25:24.208 41 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node2 5.764s 2025-12-03 15:25:24.214 42 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node2 5.769s 2025-12-03 15:25:24.219 43 INFO STARTUP <<start-node-2>> ConsistencyTestingToolMain: init called in Main for node 2.
node2 5.770s 2025-12-03 15:25:24.220 44 INFO STARTUP <<start-node-2>> SwirldsPlatform: Starting platform 2
node2 5.771s 2025-12-03 15:25:24.221 45 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node2 5.775s 2025-12-03 15:25:24.225 46 INFO STARTUP <<start-node-2>> CycleFinder: No cyclical back pressure detected in wiring model.
node2 5.777s 2025-12-03 15:25:24.227 47 INFO STARTUP <<start-node-2>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node2 5.777s 2025-12-03 15:25:24.227 48 INFO STARTUP <<start-node-2>> InputWireChecks: All input wires have been bound.
node2 5.779s 2025-12-03 15:25:24.229 49 WARN STARTUP <<start-node-2>> PcesFileTracker: No preconsensus event files available
node2 5.780s 2025-12-03 15:25:24.230 50 INFO STARTUP <<start-node-2>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node2 5.782s 2025-12-03 15:25:24.232 51 INFO STARTUP <<start-node-2>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node2 5.783s 2025-12-03 15:25:24.233 52 INFO STARTUP <<app: appMain 2>> ConsistencyTestingToolMain: run called in Main.
node2 5.785s 2025-12-03 15:25:24.235 53 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 187.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node2 5.790s 2025-12-03 15:25:24.240 54 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node4 7.889s 2025-12-03 15:25:26.339 55 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ]
node4 7.893s 2025-12-03 15:25:26.343 56 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node0 8.112s 2025-12-03 15:25:26.562 55 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting0.csv' ]
node0 8.115s 2025-12-03 15:25:26.565 56 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node3 8.201s 2025-12-03 15:25:26.651 55 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting3.csv' ]
node3 8.205s 2025-12-03 15:25:26.655 56 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node1 8.741s 2025-12-03 15:25:27.191 55 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting1.csv' ]
node1 8.744s 2025-12-03 15:25:27.194 56 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node2 8.781s 2025-12-03 15:25:27.231 55 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting2.csv' ]
node2 8.783s 2025-12-03 15:25:27.233 56 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node4 14.985s 2025-12-03 15:25:33.435 57 INFO PLATFORM_STATUS <platformForkJoinThread-7> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node0 15.207s 2025-12-03 15:25:33.657 57 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node3 15.294s 2025-12-03 15:25:33.744 57 INFO PLATFORM_STATUS <platformForkJoinThread-6> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node1 15.837s 2025-12-03 15:25:34.287 57 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node2 15.880s 2025-12-03 15:25:34.330 57 INFO PLATFORM_STATUS <platformForkJoinThread-5> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node3 16.588s 2025-12-03 15:25:35.038 58 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node0 16.653s 2025-12-03 15:25:35.103 58 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node1 16.660s 2025-12-03 15:25:35.110 58 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node2 16.667s 2025-12-03 15:25:35.117 58 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node4 16.690s 2025-12-03 15:25:35.140 58 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node3 17.027s 2025-12-03 15:25:35.477 59 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 1.7 s in CHECKING. Now in ACTIVE
node3 17.028s 2025-12-03 15:25:35.478 61 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node1 17.079s 2025-12-03 15:25:35.529 59 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 1.2 s in CHECKING. Now in ACTIVE
node1 17.081s 2025-12-03 15:25:35.531 61 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node2 17.102s 2025-12-03 15:25:35.552 59 INFO PLATFORM_STATUS <platformForkJoinThread-7> StatusStateMachine: Platform spent 1.2 s in CHECKING. Now in ACTIVE
node0 17.104s 2025-12-03 15:25:35.554 59 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 1.9 s in CHECKING. Now in ACTIVE
node2 17.106s 2025-12-03 15:25:35.556 61 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node0 17.107s 2025-12-03 15:25:35.557 61 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node4 17.190s 2025-12-03 15:25:35.640 59 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 2.2 s in CHECKING. Now in ACTIVE
node4 17.193s 2025-12-03 15:25:35.643 61 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node2 17.229s 2025-12-03 15:25:35.679 76 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2
node2 17.231s 2025-12-03 15:25:35.681 77 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node3 17.290s 2025-12-03 15:25:35.740 76 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2
node3 17.292s 2025-12-03 15:25:35.742 77 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node4 17.330s 2025-12-03 15:25:35.780 76 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2
node4 17.332s 2025-12-03 15:25:35.782 77 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node1 17.369s 2025-12-03 15:25:35.819 76 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2
node1 17.370s 2025-12-03 15:25:35.820 77 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node0 17.424s 2025-12-03 15:25:35.874 76 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2
node0 17.425s 2025-12-03 15:25:35.875 77 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node2 17.472s 2025-12-03 15:25:35.922 109 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node2 17.476s 2025-12-03 15:25:35.926 110 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2 Timestamp: 2025-12-03T15:25:34.616987Z Next consensus number: 14 Legacy running event hash: 86df8ff934f3b4e2e187107ef8d73fd75cc8272e6599a35c69f68a7f3fb4290c24e805e1b7ac0794f5b30187bceb53c6 Legacy running event mnemonic: category-abstract-oyster-glow Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -799502542 Root hash: f4ef64e0c99203c8573dbfb96026b4a42bbdf09f57800a1671eb3220a115b4acee1e926b05b67472cb6388d9b0389ef8 (root) VirtualMap state / giraffe-orchard-soccer-jungle {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"sign-sword-tunnel-urge"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"paddle-robust-token-stove"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"decade-throw-blossom-pen"}}}
node2 17.518s 2025-12-03 15:25:35.968 111 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+25+33.811195325Z_seq0_minr1_maxr501_orgn0.pces
node2 17.519s 2025-12-03 15:25:35.969 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+25+33.811195325Z_seq0_minr1_maxr501_orgn0.pces
node2 17.519s 2025-12-03 15:25:35.969 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 17.520s 2025-12-03 15:25:35.970 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 17.527s 2025-12-03 15:25:35.977 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 17.543s 2025-12-03 15:25:35.993 110 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node3 17.546s 2025-12-03 15:25:35.996 111 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2 Timestamp: 2025-12-03T15:25:34.616987Z Next consensus number: 14 Legacy running event hash: 86df8ff934f3b4e2e187107ef8d73fd75cc8272e6599a35c69f68a7f3fb4290c24e805e1b7ac0794f5b30187bceb53c6 Legacy running event mnemonic: category-abstract-oyster-glow Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -799502542 Root hash: f4ef64e0c99203c8573dbfb96026b4a42bbdf09f57800a1671eb3220a115b4acee1e926b05b67472cb6388d9b0389ef8 (root) VirtualMap state / giraffe-orchard-soccer-jungle {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"sign-sword-tunnel-urge"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"paddle-robust-token-stove"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"decade-throw-blossom-pen"}}}
node4 17.579s 2025-12-03 15:25:36.029 107 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node4 17.582s 2025-12-03 15:25:36.032 108 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2 Timestamp: 2025-12-03T15:25:34.616987Z Next consensus number: 14 Legacy running event hash: 86df8ff934f3b4e2e187107ef8d73fd75cc8272e6599a35c69f68a7f3fb4290c24e805e1b7ac0794f5b30187bceb53c6 Legacy running event mnemonic: category-abstract-oyster-glow Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -799502542 Root hash: f4ef64e0c99203c8573dbfb96026b4a42bbdf09f57800a1671eb3220a115b4acee1e926b05b67472cb6388d9b0389ef8 (root) VirtualMap state / giraffe-orchard-soccer-jungle {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"sign-sword-tunnel-urge"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"paddle-robust-token-stove"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"decade-throw-blossom-pen"}}}
node3 17.586s 2025-12-03 15:25:36.036 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+25+33.667720Z_seq0_minr1_maxr501_orgn0.pces
node3 17.587s 2025-12-03 15:25:36.037 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+25+33.667720Z_seq0_minr1_maxr501_orgn0.pces
node3 17.588s 2025-12-03 15:25:36.038 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 17.589s 2025-12-03 15:25:36.039 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 17.595s 2025-12-03 15:25:36.045 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 17.623s 2025-12-03 15:25:36.073 110 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node1 17.626s 2025-12-03 15:25:36.076 111 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2 Timestamp: 2025-12-03T15:25:34.616987Z Next consensus number: 14 Legacy running event hash: 86df8ff934f3b4e2e187107ef8d73fd75cc8272e6599a35c69f68a7f3fb4290c24e805e1b7ac0794f5b30187bceb53c6 Legacy running event mnemonic: category-abstract-oyster-glow Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -799502542 Root hash: f4ef64e0c99203c8573dbfb96026b4a42bbdf09f57800a1671eb3220a115b4acee1e926b05b67472cb6388d9b0389ef8 (root) VirtualMap state / giraffe-orchard-soccer-jungle {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"sign-sword-tunnel-urge"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"paddle-robust-token-stove"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"decade-throw-blossom-pen"}}}
node4 17.626s 2025-12-03 15:25:36.076 109 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+25+33.471511276Z_seq0_minr1_maxr501_orgn0.pces
node4 17.626s 2025-12-03 15:25:36.076 110 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+25+33.471511276Z_seq0_minr1_maxr501_orgn0.pces
node4 17.627s 2025-12-03 15:25:36.077 111 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 17.628s 2025-12-03 15:25:36.078 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 17.634s 2025-12-03 15:25:36.084 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 17.667s 2025-12-03 15:25:36.117 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+25+33.775812400Z_seq0_minr1_maxr501_orgn0.pces
node1 17.668s 2025-12-03 15:25:36.118 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+25+33.775812400Z_seq0_minr1_maxr501_orgn0.pces
node1 17.668s 2025-12-03 15:25:36.118 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 17.669s 2025-12-03 15:25:36.119 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 17.675s 2025-12-03 15:25:36.125 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 17.683s 2025-12-03 15:25:36.133 108 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node0 17.686s 2025-12-03 15:25:36.136 109 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2 Timestamp: 2025-12-03T15:25:34.616987Z Next consensus number: 14 Legacy running event hash: 86df8ff934f3b4e2e187107ef8d73fd75cc8272e6599a35c69f68a7f3fb4290c24e805e1b7ac0794f5b30187bceb53c6 Legacy running event mnemonic: category-abstract-oyster-glow Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -799502542 Root hash: f4ef64e0c99203c8573dbfb96026b4a42bbdf09f57800a1671eb3220a115b4acee1e926b05b67472cb6388d9b0389ef8 (root) VirtualMap state / giraffe-orchard-soccer-jungle {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"sign-sword-tunnel-urge"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"paddle-robust-token-stove"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"decade-throw-blossom-pen"}}}
node0 17.725s 2025-12-03 15:25:36.175 110 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+25+33.669128490Z_seq0_minr1_maxr501_orgn0.pces
node0 17.726s 2025-12-03 15:25:36.176 111 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+25+33.669128490Z_seq0_minr1_maxr501_orgn0.pces
node0 17.726s 2025-12-03 15:25:36.176 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 17.727s 2025-12-03 15:25:36.177 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 17.733s 2025-12-03 15:25:36.183 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 42.633s 2025-12-03 15:26:01.083 713 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 59 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 42.689s 2025-12-03 15:26:01.139 703 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 59 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 42.716s 2025-12-03 15:26:01.166 706 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 59 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 42.731s 2025-12-03 15:26:01.181 700 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 59 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 42.732s 2025-12-03 15:26:01.182 729 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 59 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 42.895s 2025-12-03 15:26:01.345 716 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 59 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/59
node2 42.895s 2025-12-03 15:26:01.345 709 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 59 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/59
node1 42.896s 2025-12-03 15:26:01.346 717 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 59
node2 42.896s 2025-12-03 15:26:01.346 710 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 59
node3 42.962s 2025-12-03 15:26:01.412 706 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 59 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/59
node3 42.963s 2025-12-03 15:26:01.413 707 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 59
node4 42.967s 2025-12-03 15:26:01.417 703 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 59 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/59
node4 42.968s 2025-12-03 15:26:01.418 704 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 59
node1 42.975s 2025-12-03 15:26:01.425 756 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 59
node1 42.978s 2025-12-03 15:26:01.428 757 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 59 Timestamp: 2025-12-03T15:26:00.037535840Z Next consensus number: 2055 Legacy running event hash: b78f33d6866200d4f5090cf1edadbd75a8ab282b9cad20881208f307c97ffb1fa4d5d1268f4879d58a9a6ca8981216c2 Legacy running event mnemonic: athlete-kiss-critic-box Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1110253557 Root hash: bab4ecc94ed86a27613f4881badbb0184795d6dc6b41689c258dc7a90888c61dfb82d6e5cc3af7e5cfe0837a8323a1dd (root) VirtualMap state / praise-group-swallow-eager {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"bacon-ginger-chuckle-action"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"club-slam-slogan-attack"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"frozen-lion-mushroom-vendor"}}}
node2 42.982s 2025-12-03 15:26:01.432 749 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 59
node2 42.985s 2025-12-03 15:26:01.435 750 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 59 Timestamp: 2025-12-03T15:26:00.037535840Z Next consensus number: 2055 Legacy running event hash: b78f33d6866200d4f5090cf1edadbd75a8ab282b9cad20881208f307c97ffb1fa4d5d1268f4879d58a9a6ca8981216c2 Legacy running event mnemonic: athlete-kiss-critic-box Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1110253557 Root hash: bab4ecc94ed86a27613f4881badbb0184795d6dc6b41689c258dc7a90888c61dfb82d6e5cc3af7e5cfe0837a8323a1dd (root) VirtualMap state / praise-group-swallow-eager {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"bacon-ginger-chuckle-action"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"club-slam-slogan-attack"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"frozen-lion-mushroom-vendor"}}}
node1 42.987s 2025-12-03 15:26:01.437 758 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+25+33.775812400Z_seq0_minr1_maxr501_orgn0.pces
node1 42.988s 2025-12-03 15:26:01.438 759 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 31 File: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+25+33.775812400Z_seq0_minr1_maxr501_orgn0.pces
node1 42.988s 2025-12-03 15:26:01.438 760 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 42.990s 2025-12-03 15:26:01.440 761 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 42.991s 2025-12-03 15:26:01.441 762 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 59 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/59 {"round":59,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/59/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 42.993s 2025-12-03 15:26:01.443 751 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+25+33.811195325Z_seq0_minr1_maxr501_orgn0.pces
node2 42.994s 2025-12-03 15:26:01.444 752 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 31 File: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+25+33.811195325Z_seq0_minr1_maxr501_orgn0.pces
node2 42.995s 2025-12-03 15:26:01.445 753 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 42.996s 2025-12-03 15:26:01.446 754 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 42.997s 2025-12-03 15:26:01.447 755 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 59 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/59 {"round":59,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/59/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 43.038s 2025-12-03 15:26:01.488 732 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 59 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/59
node0 43.039s 2025-12-03 15:26:01.489 733 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 59
node3 43.044s 2025-12-03 15:26:01.494 746 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 59
node3 43.046s 2025-12-03 15:26:01.496 747 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 59 Timestamp: 2025-12-03T15:26:00.037535840Z Next consensus number: 2055 Legacy running event hash: b78f33d6866200d4f5090cf1edadbd75a8ab282b9cad20881208f307c97ffb1fa4d5d1268f4879d58a9a6ca8981216c2 Legacy running event mnemonic: athlete-kiss-critic-box Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1110253557 Root hash: bab4ecc94ed86a27613f4881badbb0184795d6dc6b41689c258dc7a90888c61dfb82d6e5cc3af7e5cfe0837a8323a1dd (root) VirtualMap state / praise-group-swallow-eager {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"bacon-ginger-chuckle-action"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"club-slam-slogan-attack"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"frozen-lion-mushroom-vendor"}}}
node4 43.054s 2025-12-03 15:26:01.504 735 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 59
node3 43.055s 2025-12-03 15:26:01.505 748 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+25+33.667720Z_seq0_minr1_maxr501_orgn0.pces
node3 43.056s 2025-12-03 15:26:01.506 749 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 31 File: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+25+33.667720Z_seq0_minr1_maxr501_orgn0.pces
node3 43.056s 2025-12-03 15:26:01.506 750 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 43.057s 2025-12-03 15:26:01.507 736 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 59 Timestamp: 2025-12-03T15:26:00.037535840Z Next consensus number: 2055 Legacy running event hash: b78f33d6866200d4f5090cf1edadbd75a8ab282b9cad20881208f307c97ffb1fa4d5d1268f4879d58a9a6ca8981216c2 Legacy running event mnemonic: athlete-kiss-critic-box Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1110253557 Root hash: bab4ecc94ed86a27613f4881badbb0184795d6dc6b41689c258dc7a90888c61dfb82d6e5cc3af7e5cfe0837a8323a1dd (root) VirtualMap state / praise-group-swallow-eager {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"bacon-ginger-chuckle-action"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"club-slam-slogan-attack"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"frozen-lion-mushroom-vendor"}}}
node3 43.058s 2025-12-03 15:26:01.508 751 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 43.059s 2025-12-03 15:26:01.509 752 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 59 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/59 {"round":59,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/59/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 43.065s 2025-12-03 15:26:01.515 737 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+25+33.471511276Z_seq0_minr1_maxr501_orgn0.pces
node4 43.066s 2025-12-03 15:26:01.516 738 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 31 File: data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+25+33.471511276Z_seq0_minr1_maxr501_orgn0.pces
node4 43.067s 2025-12-03 15:26:01.517 739 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 43.068s 2025-12-03 15:26:01.518 740 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 43.069s 2025-12-03 15:26:01.519 741 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 59 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/59 {"round":59,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/59/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 43.122s 2025-12-03 15:26:01.572 772 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 59
node0 43.124s 2025-12-03 15:26:01.574 773 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 59 Timestamp: 2025-12-03T15:26:00.037535840Z Next consensus number: 2055 Legacy running event hash: b78f33d6866200d4f5090cf1edadbd75a8ab282b9cad20881208f307c97ffb1fa4d5d1268f4879d58a9a6ca8981216c2 Legacy running event mnemonic: athlete-kiss-critic-box Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1110253557 Root hash: bab4ecc94ed86a27613f4881badbb0184795d6dc6b41689c258dc7a90888c61dfb82d6e5cc3af7e5cfe0837a8323a1dd (root) VirtualMap state / praise-group-swallow-eager {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"bacon-ginger-chuckle-action"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"club-slam-slogan-attack"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"frozen-lion-mushroom-vendor"}}}
node0 43.132s 2025-12-03 15:26:01.582 774 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+25+33.669128490Z_seq0_minr1_maxr501_orgn0.pces
node0 43.133s 2025-12-03 15:26:01.583 775 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 31 File: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+25+33.669128490Z_seq0_minr1_maxr501_orgn0.pces
node0 43.133s 2025-12-03 15:26:01.583 776 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 43.135s 2025-12-03 15:26:01.585 777 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 43.135s 2025-12-03 15:26:01.585 778 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 59 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/59 {"round":59,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/59/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 1m 42.682s 2025-12-03 15:27:01.132 2186 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 187 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 1m 42.767s 2025-12-03 15:27:01.217 2124 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 187 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 1m 42.795s 2025-12-03 15:27:01.245 2138 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 187 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 1m 42.829s 2025-12-03 15:27:01.279 2165 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 187 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 1m 42.840s 2025-12-03 15:27:01.290 2159 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 187 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 1m 42.965s 2025-12-03 15:27:01.415 2168 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 187 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/187
node3 1m 42.965s 2025-12-03 15:27:01.415 2169 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 187
node1 1m 42.986s 2025-12-03 15:27:01.436 2162 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 187 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/187
node1 1m 42.987s 2025-12-03 15:27:01.437 2163 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 187
node3 1m 43.048s 2025-12-03 15:27:01.498 2216 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 187
node3 1m 43.050s 2025-12-03 15:27:01.500 2217 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 187 Timestamp: 2025-12-03T15:27:00.215555Z Next consensus number: 6837 Legacy running event hash: 59455fe970820d362420eaa7c2df600467a978a2e246bc0a3fbd61775c00ee30755dfe8df03bf8c9f1050a793d26191f Legacy running event mnemonic: chunk-wood-aware-plug Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1633438068 Root hash: e41203714ea93d645496efdc4ec2b238ffcf892cafe00b9a8982ac20061be713d78dfd8a7fd5f8dca2eca955bfc530a4 (root) VirtualMap state / lava-hotel-steel-prosper {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"hood-trash-pretty-clown"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"reason-verify-betray-version"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"quick-border-twice-iron"}}}
node2 1m 43.056s 2025-12-03 15:27:01.506 2151 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 187 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/187
node0 1m 43.057s 2025-12-03 15:27:01.507 2190 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 187 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/187
node2 1m 43.057s 2025-12-03 15:27:01.507 2152 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 187
node0 1m 43.058s 2025-12-03 15:27:01.508 2191 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 187
node3 1m 43.058s 2025-12-03 15:27:01.508 2218 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+25+33.667720Z_seq0_minr1_maxr501_orgn0.pces
node3 1m 43.058s 2025-12-03 15:27:01.508 2219 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 160 File: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+25+33.667720Z_seq0_minr1_maxr501_orgn0.pces
node3 1m 43.058s 2025-12-03 15:27:01.508 2220 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 1m 43.063s 2025-12-03 15:27:01.513 2221 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 1m 43.064s 2025-12-03 15:27:01.514 2222 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 187 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/187 {"round":187,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/187/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 1m 43.073s 2025-12-03 15:27:01.523 2196 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 187
node1 1m 43.075s 2025-12-03 15:27:01.525 2197 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 187 Timestamp: 2025-12-03T15:27:00.215555Z Next consensus number: 6837 Legacy running event hash: 59455fe970820d362420eaa7c2df600467a978a2e246bc0a3fbd61775c00ee30755dfe8df03bf8c9f1050a793d26191f Legacy running event mnemonic: chunk-wood-aware-plug Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1633438068 Root hash: e41203714ea93d645496efdc4ec2b238ffcf892cafe00b9a8982ac20061be713d78dfd8a7fd5f8dca2eca955bfc530a4 (root) VirtualMap state / lava-hotel-steel-prosper {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"hood-trash-pretty-clown"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"reason-verify-betray-version"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"quick-border-twice-iron"}}}
node4 1m 43.075s 2025-12-03 15:27:01.525 2127 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 187 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/187
node4 1m 43.076s 2025-12-03 15:27:01.526 2128 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 187
node1 1m 43.085s 2025-12-03 15:27:01.535 2198 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+25+33.775812400Z_seq0_minr1_maxr501_orgn0.pces
node1 1m 43.085s 2025-12-03 15:27:01.535 2199 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 160 File: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+25+33.775812400Z_seq0_minr1_maxr501_orgn0.pces
node1 1m 43.085s 2025-12-03 15:27:01.535 2200 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 1m 43.090s 2025-12-03 15:27:01.540 2201 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 1m 43.091s 2025-12-03 15:27:01.541 2202 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 187 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/187 {"round":187,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/187/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 1m 43.148s 2025-12-03 15:27:01.598 2243 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 187
node0 1m 43.150s 2025-12-03 15:27:01.600 2244 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 187 Timestamp: 2025-12-03T15:27:00.215555Z Next consensus number: 6837 Legacy running event hash: 59455fe970820d362420eaa7c2df600467a978a2e246bc0a3fbd61775c00ee30755dfe8df03bf8c9f1050a793d26191f Legacy running event mnemonic: chunk-wood-aware-plug Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1633438068 Root hash: e41203714ea93d645496efdc4ec2b238ffcf892cafe00b9a8982ac20061be713d78dfd8a7fd5f8dca2eca955bfc530a4 (root) VirtualMap state / lava-hotel-steel-prosper {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"hood-trash-pretty-clown"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"reason-verify-betray-version"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"quick-border-twice-iron"}}}
node2 1m 43.150s 2025-12-03 15:27:01.600 2183 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 187
node2 1m 43.152s 2025-12-03 15:27:01.602 2184 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 187 Timestamp: 2025-12-03T15:27:00.215555Z Next consensus number: 6837 Legacy running event hash: 59455fe970820d362420eaa7c2df600467a978a2e246bc0a3fbd61775c00ee30755dfe8df03bf8c9f1050a793d26191f Legacy running event mnemonic: chunk-wood-aware-plug Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1633438068 Root hash: e41203714ea93d645496efdc4ec2b238ffcf892cafe00b9a8982ac20061be713d78dfd8a7fd5f8dca2eca955bfc530a4 (root) VirtualMap state / lava-hotel-steel-prosper {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"hood-trash-pretty-clown"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"reason-verify-betray-version"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"quick-border-twice-iron"}}}
node0 1m 43.158s 2025-12-03 15:27:01.608 2245 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+25+33.669128490Z_seq0_minr1_maxr501_orgn0.pces
node0 1m 43.158s 2025-12-03 15:27:01.608 2246 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 160
File: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+25+33.669128490Z_seq0_minr1_maxr501_orgn0.pces
node0 1m 43.159s 2025-12-03 15:27:01.609 2247 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 1m 43.160s 2025-12-03 15:27:01.610 2185 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+25+33.811195325Z_seq0_minr1_maxr501_orgn0.pces
node2 1m 43.160s 2025-12-03 15:27:01.610 2186 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 160
File: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+25+33.811195325Z_seq0_minr1_maxr501_orgn0.pces
node2 1m 43.160s 2025-12-03 15:27:01.610 2187 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 1m 43.163s 2025-12-03 15:27:01.613 2177 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 187
node0 1m 43.164s 2025-12-03 15:27:01.614 2248 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 1m 43.164s 2025-12-03 15:27:01.614 2249 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 187 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/187 {"round":187,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/187/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 1m 43.166s 2025-12-03 15:27:01.616 2188 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 1m 43.166s 2025-12-03 15:27:01.616 2189 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 187 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/187 {"round":187,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/187/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 1m 43.166s 2025-12-03 15:27:01.616 2178 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 187
Timestamp: 2025-12-03T15:27:00.215555Z
Next consensus number: 6837
Legacy running event hash: 59455fe970820d362420eaa7c2df600467a978a2e246bc0a3fbd61775c00ee30755dfe8df03bf8c9f1050a793d26191f
Legacy running event mnemonic: chunk-wood-aware-plug
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1633438068
Root hash: e41203714ea93d645496efdc4ec2b238ffcf892cafe00b9a8982ac20061be713d78dfd8a7fd5f8dca2eca955bfc530a4
(root) VirtualMap state / lava-hotel-steel-prosper {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"hood-trash-pretty-clown"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"reason-verify-betray-version"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"quick-border-twice-iron"}}}
node4 1m 43.173s 2025-12-03 15:27:01.623 2179 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+25+33.471511276Z_seq0_minr1_maxr501_orgn0.pces
node4 1m 43.174s 2025-12-03 15:27:01.624 2180 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 160
File: data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+25+33.471511276Z_seq0_minr1_maxr501_orgn0.pces
node4 1m 43.174s 2025-12-03 15:27:01.624 2181 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 1m 43.179s 2025-12-03 15:27:01.629 2182 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 1m 43.179s 2025-12-03 15:27:01.629 2183 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 187 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/187 {"round":187,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/187/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 2m 42.632s 2025-12-03 15:28:01.082 3723 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 323 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 2m 42.692s 2025-12-03 15:28:01.142 3711 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 323 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 2m 42.708s 2025-12-03 15:28:01.158 3745 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 323 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 2m 42.744s 2025-12-03 15:28:01.194 3718 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 323 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 2m 42.770s 2025-12-03 15:28:01.220 3686 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 323 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 2m 42.856s 2025-12-03 15:28:01.306 3721 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 323 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/323
node2 2m 42.857s 2025-12-03 15:28:01.307 3722 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 323
node1 2m 42.887s 2025-12-03 15:28:01.337 3714 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 323 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/323
node1 2m 42.888s 2025-12-03 15:28:01.338 3715 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 323
node3 2m 42.896s 2025-12-03 15:28:01.346 3748 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 323 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/323
node3 2m 42.896s 2025-12-03 15:28:01.346 3749 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 323
node4 2m 42.927s 2025-12-03 15:28:01.377 3689 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 323 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/323
node4 2m 42.928s 2025-12-03 15:28:01.378 3690 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 323
node2 2m 42.944s 2025-12-03 15:28:01.394 3753 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 323
node2 2m 42.946s 2025-12-03 15:28:01.396 3754 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 323
Timestamp: 2025-12-03T15:28:00.258360Z
Next consensus number: 11644
Legacy running event hash: 55cfb55247c6acfbd1bc8a6729eb9fb325f5d154057948073611378a055f0acc9a3628d969af4c484df8c6ca13279f1a
Legacy running event mnemonic: rigid-fiction-web-gaze
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1252127449
Root hash: 4d2c94715453d4afa986e8cf535438cd19cd0b464c182b99f818e044cea0db8c1fcfb486857a7e4296c84b6ea3d86171
(root) VirtualMap state / pitch-impose-media-health {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"walnut-change-close-soap"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"tired-funny-behave-obvious"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"issue-arrow-make-code"}}}
node2 2m 42.955s 2025-12-03 15:28:01.405 3755 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+25+33.811195325Z_seq0_minr1_maxr501_orgn0.pces
node2 2m 42.955s 2025-12-03 15:28:01.405 3756 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 296
File: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+25+33.811195325Z_seq0_minr1_maxr501_orgn0.pces
node2 2m 42.955s 2025-12-03 15:28:01.405 3757 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 2m 42.964s 2025-12-03 15:28:01.414 3758 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 2m 42.964s 2025-12-03 15:28:01.414 3759 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 323 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/323 {"round":323,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/323/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 2m 42.979s 2025-12-03 15:28:01.429 3754 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 323
node1 2m 42.982s 2025-12-03 15:28:01.432 3755 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 323
Timestamp: 2025-12-03T15:28:00.258360Z
Next consensus number: 11644
Legacy running event hash: 55cfb55247c6acfbd1bc8a6729eb9fb325f5d154057948073611378a055f0acc9a3628d969af4c484df8c6ca13279f1a
Legacy running event mnemonic: rigid-fiction-web-gaze
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1252127449
Root hash: 4d2c94715453d4afa986e8cf535438cd19cd0b464c182b99f818e044cea0db8c1fcfb486857a7e4296c84b6ea3d86171
(root) VirtualMap state / pitch-impose-media-health {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"walnut-change-close-soap"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"tired-funny-behave-obvious"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"issue-arrow-make-code"}}}
node3 2m 42.985s 2025-12-03 15:28:01.435 3780 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 323
node3 2m 42.987s 2025-12-03 15:28:01.437 3781 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 323
Timestamp: 2025-12-03T15:28:00.258360Z
Next consensus number: 11644
Legacy running event hash: 55cfb55247c6acfbd1bc8a6729eb9fb325f5d154057948073611378a055f0acc9a3628d969af4c484df8c6ca13279f1a
Legacy running event mnemonic: rigid-fiction-web-gaze
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1252127449
Root hash: 4d2c94715453d4afa986e8cf535438cd19cd0b464c182b99f818e044cea0db8c1fcfb486857a7e4296c84b6ea3d86171
(root) VirtualMap state / pitch-impose-media-health {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"walnut-change-close-soap"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"tired-funny-behave-obvious"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"issue-arrow-make-code"}}}
node1 2m 42.990s 2025-12-03 15:28:01.440 3756 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+25+33.775812400Z_seq0_minr1_maxr501_orgn0.pces
node1 2m 42.990s 2025-12-03 15:28:01.440 3757 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 296
File: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+25+33.775812400Z_seq0_minr1_maxr501_orgn0.pces
node1 2m 42.990s 2025-12-03 15:28:01.440 3758 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 2m 42.994s 2025-12-03 15:28:01.444 3790 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+25+33.667720Z_seq0_minr1_maxr501_orgn0.pces
node3 2m 42.995s 2025-12-03 15:28:01.445 3791 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 296
File: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+25+33.667720Z_seq0_minr1_maxr501_orgn0.pces
node3 2m 42.995s 2025-12-03 15:28:01.445 3792 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 2m 42.997s 2025-12-03 15:28:01.447 3742 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 323 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/323
node0 2m 42.998s 2025-12-03 15:28:01.448 3743 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 323
node1 2m 42.999s 2025-12-03 15:28:01.449 3759 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 2m 42.999s 2025-12-03 15:28:01.449 3760 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 323 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/323 {"round":323,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/323/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 2m 43.003s 2025-12-03 15:28:01.453 3793 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 2m 43.004s 2025-12-03 15:28:01.454 3794 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 323 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/323 {"round":323,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/323/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 2m 43.014s 2025-12-03 15:28:01.464 3721 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 323
node4 2m 43.017s 2025-12-03 15:28:01.467 3722 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 323
Timestamp: 2025-12-03T15:28:00.258360Z
Next consensus number: 11644
Legacy running event hash: 55cfb55247c6acfbd1bc8a6729eb9fb325f5d154057948073611378a055f0acc9a3628d969af4c484df8c6ca13279f1a
Legacy running event mnemonic: rigid-fiction-web-gaze
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1252127449
Root hash: 4d2c94715453d4afa986e8cf535438cd19cd0b464c182b99f818e044cea0db8c1fcfb486857a7e4296c84b6ea3d86171
(root) VirtualMap state / pitch-impose-media-health {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"walnut-change-close-soap"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"tired-funny-behave-obvious"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"issue-arrow-make-code"}}}
node4 2m 43.025s 2025-12-03 15:28:01.475 3723 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+25+33.471511276Z_seq0_minr1_maxr501_orgn0.pces
node4 2m 43.025s 2025-12-03 15:28:01.475 3724 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 296
File: data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+25+33.471511276Z_seq0_minr1_maxr501_orgn0.pces
node4 2m 43.025s 2025-12-03 15:28:01.475 3725 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 2m 43.033s 2025-12-03 15:28:01.483 3726 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 2m 43.034s 2025-12-03 15:28:01.484 3727 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 323 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/323 {"round":323,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/323/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 2m 43.082s 2025-12-03 15:28:01.532 3777 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 323
node0 2m 43.084s 2025-12-03 15:28:01.534 3778 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 323
Timestamp: 2025-12-03T15:28:00.258360Z
Next consensus number: 11644
Legacy running event hash: 55cfb55247c6acfbd1bc8a6729eb9fb325f5d154057948073611378a055f0acc9a3628d969af4c484df8c6ca13279f1a
Legacy running event mnemonic: rigid-fiction-web-gaze
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1252127449
Root hash: 4d2c94715453d4afa986e8cf535438cd19cd0b464c182b99f818e044cea0db8c1fcfb486857a7e4296c84b6ea3d86171
(root) VirtualMap state / pitch-impose-media-health {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"walnut-change-close-soap"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"tired-funny-behave-obvious"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"issue-arrow-make-code"}}}
node0 2m 43.090s 2025-12-03 15:28:01.540 3779 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+25+33.669128490Z_seq0_minr1_maxr501_orgn0.pces
node0 2m 43.091s 2025-12-03 15:28:01.541 3780 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 296
File: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+25+33.669128490Z_seq0_minr1_maxr501_orgn0.pces
node0 2m 43.091s 2025-12-03 15:28:01.541 3781 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 2m 43.099s 2025-12-03 15:28:01.549 3782 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 2m 43.100s 2025-12-03 15:28:01.550 3783 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 323 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/323 {"round":323,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/323/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 3m 13.416s 2025-12-03 15:28:31.866 4508 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 0 to 4>> NetworkUtils: Connection broken: 0 -> 4
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-03T15:28:31.863731321Z
	at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
	at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
	at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
	at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
	at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
	at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
	at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
	at java.base/java.lang.Thread.run(Thread.java:1583)
	Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
		at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
		at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
		at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
		... 8 more
	Caused by: java.net.SocketException: Connection or outbound has closed
		at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
		at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
		at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
		at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
		at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
		at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
		at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
		at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
		at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
		at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
		at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
		at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
		at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
		at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
		... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
	at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
	... 8 more
Caused by: java.net.SocketException: Connection reset
	at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
	at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
	at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
	at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
	at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
	at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
	at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
	at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
	at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
	at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
	at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
	at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
	at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
	at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
	at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
	at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
	at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
	at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
	at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
	at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	... 2 more
node3 3m 13.416s 2025-12-03 15:28:31.866 4498 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 3 to 4>> NetworkUtils: Connection broken: 3 -> 4
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-03T15:28:31.864277783Z
	at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
	at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
	at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
	at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
	at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
	at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
	at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
	at java.base/java.lang.Thread.run(Thread.java:1583)
	Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
		at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
		at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
		at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
		... 8 more
	Caused by: java.net.SocketException: Connection or outbound has closed
		at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
		at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
		at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
		at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
		at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
		at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
		at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
		at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
		at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
		at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
		at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
		at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
		at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
		at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
		... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
	at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
	... 8 more
Caused by: java.net.SocketException: Connection reset
	at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
	at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
	at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
	at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
	at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
	at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
	at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
	at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
	at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
	at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
	at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
	at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
	at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
	at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
	at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
	at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
	at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
	at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
	at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
	at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	... 2 more
node1 3m 13.418s 2025-12-03 15:28:31.868 4458 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 1 to 4>> NetworkUtils: Connection broken: 1 -> 4
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-03T15:28:31.865795197Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
node2 3m 13.419s 2025-12-03 15:28:31.869 4467 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 2 to 4>> NetworkUtils: Connection broken: 2 -> 4
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-03T15:28:31.866881369Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
node1 3m 42.621s 2025-12-03 15:29:01.071 5225 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 456 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 3m 42.648s 2025-12-03 15:29:01.098 5259 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 456 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 3m 42.692s 2025-12-03 15:29:01.142 5277 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 456 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 3m 42.723s 2025-12-03 15:29:01.173 5250 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 456 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 3m 42.832s 2025-12-03 15:29:01.282 5253 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 456 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/456
node2 3m 42.833s 2025-12-03 15:29:01.283 5254 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 456
node3 3m 42.863s 2025-12-03 15:29:01.313 5262 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 456 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/456
node3 3m 42.864s 2025-12-03 15:29:01.314 5263 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 456
node1 3m 42.902s 2025-12-03 15:29:01.352 5228 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 456 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/456
node1 3m 42.903s 2025-12-03 15:29:01.353 5229 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 456
node2 3m 42.925s 2025-12-03 15:29:01.375 5285 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 456
node2 3m 42.927s 2025-12-03 15:29:01.377 5286 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 456
Timestamp: 2025-12-03T15:29:00.236145Z
Next consensus number: 15727
Legacy running event hash: 85a2ae3c2384a522b45e8b50bd37cccd38fe796adf2243477c892eef08ce793e85c89b4f55b3061187b9e3ae9b463ed7
Legacy running event mnemonic: envelope-decade-identify-forward
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1393774669
Root hash: ed241ad48fc477e04e5eaed5b7b263431832101f6195eaf0b1dccce993485cdad5f2ce64e6bb2a2ebee1408bd4373326
(root) VirtualMap state / fortune-fold-always-winter
    {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"pledge-boat-crash-tide"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"basic-fence-report-family"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"brand-exclude-torch-effort"}}}
node0 3m 42.934s 2025-12-03 15:29:01.384 5280 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 456 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/456
node0 3m 42.934s 2025-12-03 15:29:01.384 5281 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 456
node2 3m 42.936s 2025-12-03 15:29:01.386 5287 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+25+33.811195325Z_seq0_minr1_maxr501_orgn0.pces
node2 3m 42.937s 2025-12-03 15:29:01.387 5288 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 429
File: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+25+33.811195325Z_seq0_minr1_maxr501_orgn0.pces
node2 3m 42.937s 2025-12-03 15:29:01.387 5289 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 3m 42.943s 2025-12-03 15:29:01.393 5294 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 456
node3 3m 42.945s 2025-12-03 15:29:01.395 5295 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 456
Timestamp: 2025-12-03T15:29:00.236145Z
Next consensus number: 15727
Legacy running event hash: 85a2ae3c2384a522b45e8b50bd37cccd38fe796adf2243477c892eef08ce793e85c89b4f55b3061187b9e3ae9b463ed7
Legacy running event mnemonic: envelope-decade-identify-forward
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1393774669
Root hash: ed241ad48fc477e04e5eaed5b7b263431832101f6195eaf0b1dccce993485cdad5f2ce64e6bb2a2ebee1408bd4373326
(root) VirtualMap state / fortune-fold-always-winter
    {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"pledge-boat-crash-tide"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"basic-fence-report-family"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"brand-exclude-torch-effort"}}}
node2 3m 42.948s 2025-12-03 15:29:01.398 5290 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 3m 42.948s 2025-12-03 15:29:01.398 5291 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 456 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/456 {"round":456,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/456/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 3m 42.953s 2025-12-03 15:29:01.403 5296 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+25+33.667720Z_seq0_minr1_maxr501_orgn0.pces
node3 3m 42.953s 2025-12-03 15:29:01.403 5297 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 429
File: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+25+33.667720Z_seq0_minr1_maxr501_orgn0.pces
node3 3m 42.953s 2025-12-03 15:29:01.403 5298 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 3m 42.964s 2025-12-03 15:29:01.414 5299 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 3m 42.965s 2025-12-03 15:29:01.415 5300 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 456 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/456 {"round":456,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/456/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 3m 42.991s 2025-12-03 15:29:01.441 5268 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 456
node1 3m 42.994s 2025-12-03 15:29:01.444 5269 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 456
Timestamp: 2025-12-03T15:29:00.236145Z
Next consensus number: 15727
Legacy running event hash: 85a2ae3c2384a522b45e8b50bd37cccd38fe796adf2243477c892eef08ce793e85c89b4f55b3061187b9e3ae9b463ed7
Legacy running event mnemonic: envelope-decade-identify-forward
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1393774669
Root hash: ed241ad48fc477e04e5eaed5b7b263431832101f6195eaf0b1dccce993485cdad5f2ce64e6bb2a2ebee1408bd4373326
(root) VirtualMap state / fortune-fold-always-winter
    {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"pledge-boat-crash-tide"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"basic-fence-report-family"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"brand-exclude-torch-effort"}}}
node1 3m 43.001s 2025-12-03 15:29:01.451 5270 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+25+33.775812400Z_seq0_minr1_maxr501_orgn0.pces
node1 3m 43.001s 2025-12-03 15:29:01.451 5271 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 429
File: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+25+33.775812400Z_seq0_minr1_maxr501_orgn0.pces
node1 3m 43.001s 2025-12-03 15:29:01.451 5272 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 3m 43.014s 2025-12-03 15:29:01.464 5273 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 3m 43.014s 2025-12-03 15:29:01.464 5274 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 456 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/456 {"round":456,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/456/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 3m 43.019s 2025-12-03 15:29:01.469 5312 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 456
node0 3m 43.021s 2025-12-03 15:29:01.471 5313 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 456
Timestamp: 2025-12-03T15:29:00.236145Z
Next consensus number: 15727
Legacy running event hash: 85a2ae3c2384a522b45e8b50bd37cccd38fe796adf2243477c892eef08ce793e85c89b4f55b3061187b9e3ae9b463ed7
Legacy running event mnemonic: envelope-decade-identify-forward
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1393774669
Root hash: ed241ad48fc477e04e5eaed5b7b263431832101f6195eaf0b1dccce993485cdad5f2ce64e6bb2a2ebee1408bd4373326
(root) VirtualMap state / fortune-fold-always-winter
    {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"pledge-boat-crash-tide"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"basic-fence-report-family"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"brand-exclude-torch-effort"}}}
node0 3m 43.027s 2025-12-03 15:29:01.477 5314 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+25+33.669128490Z_seq0_minr1_maxr501_orgn0.pces
node0 3m 43.027s 2025-12-03 15:29:01.477 5315 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 429
File: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+25+33.669128490Z_seq0_minr1_maxr501_orgn0.pces
node0 3m 43.028s 2025-12-03 15:29:01.478 5316 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 3m 43.038s 2025-12-03 15:29:01.488 5317 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 3m 43.039s 2025-12-03 15:29:01.489 5318 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 456 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/456 {"round":456,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/456/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 4m 42.511s 2025-12-03 15:30:00.961 6957 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 594 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 4m 42.521s 2025-12-03 15:30:00.971 6855 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 594 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 4m 42.586s 2025-12-03 15:30:01.036 6819 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 594 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 4m 42.624s 2025-12-03 15:30:01.074 6810 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 594 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 4m 42.716s 2025-12-03 15:30:01.166 6813 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 594 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/594
node2 4m 42.717s 2025-12-03 15:30:01.167 6814 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 594
node3 4m 42.732s 2025-12-03 15:30:01.182 6822 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 594 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/594
node3 4m 42.733s 2025-12-03 15:30:01.183 6823 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 594
node2 4m 42.800s 2025-12-03 15:30:01.250 6853 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 594
node2 4m 42.802s 2025-12-03 15:30:01.252 6854 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 594
Timestamp: 2025-12-03T15:30:00.034986291Z
Next consensus number: 18907
Legacy running event hash: d76a78f92e144b6fc2633e4cb57eae2a361c86947827fb087800964908b694c8b193463d4e8ac7eb3d8065ec2fddbd00
Legacy running event mnemonic: barrel-busy-secret-hospital
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1331948109
Root hash: 98cbb3912abec428886a3afbe30dbed38c129871d85393d77edd014d2dfbaacbedfb3344d70f8d05f529a5e930b3341a
(root) VirtualMap state / trade-mistake-fit-kitchen
    {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"ridge-mechanic-motion-mutual"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"screen-afraid-glad-avocado"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"fossil-age-orange-measure"}}}
node2 4m 42.808s 2025-12-03 15:30:01.258 6855 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+25+33.811195325Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+29+20.713170863Z_seq1_minr474_maxr5474_orgn0.pces
node2 4m 42.808s 2025-12-03 15:30:01.258 6856 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 567
File: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+29+20.713170863Z_seq1_minr474_maxr5474_orgn0.pces
node2 4m 42.809s 2025-12-03 15:30:01.259 6857 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 4m 42.810s 2025-12-03 15:30:01.260 6858 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 4m 42.811s 2025-12-03 15:30:01.261 6859 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 594 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/594 {"round":594,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/594/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 4m 42.811s 2025-12-03 15:30:01.261 6862 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 594
node2 4m 42.812s 2025-12-03 15:30:01.262 6860 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2
node3 4m 42.813s 2025-12-03 15:30:01.263 6863 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 594 Timestamp: 2025-12-03T15:30:00.034986291Z Next consensus number: 18907 Legacy running event hash: d76a78f92e144b6fc2633e4cb57eae2a361c86947827fb087800964908b694c8b193463d4e8ac7eb3d8065ec2fddbd00 Legacy running event mnemonic: barrel-busy-secret-hospital Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1331948109 Root hash: 98cbb3912abec428886a3afbe30dbed38c129871d85393d77edd014d2dfbaacbedfb3344d70f8d05f529a5e930b3341a (root) VirtualMap state / trade-mistake-fit-kitchen {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"ridge-mechanic-motion-mutual"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"screen-afraid-glad-avocado"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"fossil-age-orange-measure"}}}
node3 4m 42.819s 2025-12-03 15:30:01.269 6864 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+25+33.667720Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+29+20.810306748Z_seq1_minr474_maxr5474_orgn0.pces
node3 4m 42.819s 2025-12-03 15:30:01.269 6865 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 567 File: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+29+20.810306748Z_seq1_minr474_maxr5474_orgn0.pces
node3 4m 42.820s 2025-12-03 15:30:01.270 6866 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 4m 42.821s 2025-12-03 15:30:01.271 6867 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 4m 42.822s 2025-12-03 15:30:01.272 6868 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 594 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/594 {"round":594,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/594/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 4m 42.823s 2025-12-03 15:30:01.273 6869 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2
node1 4m 42.842s 2025-12-03 15:30:01.292 6868 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 594 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/594
node1 4m 42.842s 2025-12-03 15:30:01.292 6869 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 594
node0 4m 42.871s 2025-12-03 15:30:01.321 6960 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 594 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/594
node0 4m 42.872s 2025-12-03 15:30:01.322 6961 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 594
node1 4m 42.935s 2025-12-03 15:30:01.385 6900 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 594
node1 4m 42.938s 2025-12-03 15:30:01.388 6909 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 594 Timestamp: 2025-12-03T15:30:00.034986291Z Next consensus number: 18907 Legacy running event hash: d76a78f92e144b6fc2633e4cb57eae2a361c86947827fb087800964908b694c8b193463d4e8ac7eb3d8065ec2fddbd00 Legacy running event mnemonic: barrel-busy-secret-hospital Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1331948109 Root hash: 98cbb3912abec428886a3afbe30dbed38c129871d85393d77edd014d2dfbaacbedfb3344d70f8d05f529a5e930b3341a (root) VirtualMap state / trade-mistake-fit-kitchen {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"ridge-mechanic-motion-mutual"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"screen-afraid-glad-avocado"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"fossil-age-orange-measure"}}}
node1 4m 42.948s 2025-12-03 15:30:01.398 6910 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+25+33.775812400Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+29+20.660623734Z_seq1_minr474_maxr5474_orgn0.pces
node1 4m 42.948s 2025-12-03 15:30:01.398 6911 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 567 File: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+29+20.660623734Z_seq1_minr474_maxr5474_orgn0.pces
node1 4m 42.949s 2025-12-03 15:30:01.399 6912 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 4m 42.951s 2025-12-03 15:30:01.401 6913 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 4m 42.951s 2025-12-03 15:30:01.401 6914 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 594 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/594 {"round":594,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/594/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 4m 42.953s 2025-12-03 15:30:01.403 6915 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2
node0 4m 42.954s 2025-12-03 15:30:01.404 7003 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 594
node0 4m 42.956s 2025-12-03 15:30:01.406 7004 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 594 Timestamp: 2025-12-03T15:30:00.034986291Z Next consensus number: 18907 Legacy running event hash: d76a78f92e144b6fc2633e4cb57eae2a361c86947827fb087800964908b694c8b193463d4e8ac7eb3d8065ec2fddbd00 Legacy running event mnemonic: barrel-busy-secret-hospital Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1331948109 Root hash: 98cbb3912abec428886a3afbe30dbed38c129871d85393d77edd014d2dfbaacbedfb3344d70f8d05f529a5e930b3341a (root) VirtualMap state / trade-mistake-fit-kitchen {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"ridge-mechanic-motion-mutual"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"screen-afraid-glad-avocado"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"fossil-age-orange-measure"}}}
node0 4m 42.964s 2025-12-03 15:30:01.414 7005 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+25+33.669128490Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+29+20.660994822Z_seq1_minr474_maxr5474_orgn0.pces
node0 4m 42.964s 2025-12-03 15:30:01.414 7006 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 567 File: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+29+20.660994822Z_seq1_minr474_maxr5474_orgn0.pces
node0 4m 42.964s 2025-12-03 15:30:01.414 7007 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 4m 42.966s 2025-12-03 15:30:01.416 7008 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 4m 42.967s 2025-12-03 15:30:01.417 7009 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 594 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/594 {"round":594,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/594/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 4m 42.969s 2025-12-03 15:30:01.419 7010 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2
node3 5m 42.668s 2025-12-03 15:31:01.118 8441 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 733 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 5m 42.703s 2025-12-03 15:31:01.153 8428 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 733 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 5m 42.793s 2025-12-03 15:31:01.243 8549 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 733 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 5m 42.816s 2025-12-03 15:31:01.266 8677 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 733 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 5m 42.939s 2025-12-03 15:31:01.389 8552 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 733 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/733
node1 5m 42.940s 2025-12-03 15:31:01.390 8553 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 733
node0 5m 42.979s 2025-12-03 15:31:01.429 8680 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 733 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/733
node0 5m 42.980s 2025-12-03 15:31:01.430 8681 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 733
node2 5m 43.010s 2025-12-03 15:31:01.460 8431 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 733 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/733
node2 5m 43.011s 2025-12-03 15:31:01.461 8432 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 733
node1 5m 43.025s 2025-12-03 15:31:01.475 8584 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 733
node1 5m 43.027s 2025-12-03 15:31:01.477 8585 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 733 Timestamp: 2025-12-03T15:31:00.250945971Z Next consensus number: 22154 Legacy running event hash: 8fe28ea7a6c693e1116075b44b8bc32b1bbfc4d7e08b68725fe4941d5f570a62c06deec1bf9edcb34c5c469673315578 Legacy running event mnemonic: adult-clinic-chef-loop Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1613662989 Root hash: d2a5ecb145b1d078023ce9a84c1d57e7a6f9870143d38909ff3fe72fc966c90e0a53d07525d5a679e2ca61825ef2e03c (root) VirtualMap state / problem-hazard-renew-valve {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"uncle-sword-million-bargain"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"hill-define-cart-engage"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"just-delay-hub-soft"}}}
node3 5m 43.033s 2025-12-03 15:31:01.483 8454 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 733 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/733
node1 5m 43.034s 2025-12-03 15:31:01.484 8586 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+25+33.775812400Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+29+20.660623734Z_seq1_minr474_maxr5474_orgn0.pces
node1 5m 43.034s 2025-12-03 15:31:01.484 8587 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 706 File: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+29+20.660623734Z_seq1_minr474_maxr5474_orgn0.pces
node1 5m 43.034s 2025-12-03 15:31:01.484 8588 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 5m 43.034s 2025-12-03 15:31:01.484 8455 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 733
node1 5m 43.038s 2025-12-03 15:31:01.488 8589 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 5m 43.039s 2025-12-03 15:31:01.489 8590 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 733 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/733 {"round":733,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/733/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 5m 43.040s 2025-12-03 15:31:01.490 8591 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/59
node0 5m 43.062s 2025-12-03 15:31:01.512 8712 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 733
node0 5m 43.064s 2025-12-03 15:31:01.514 8713 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 733 Timestamp: 2025-12-03T15:31:00.250945971Z Next consensus number: 22154 Legacy running event hash: 8fe28ea7a6c693e1116075b44b8bc32b1bbfc4d7e08b68725fe4941d5f570a62c06deec1bf9edcb34c5c469673315578 Legacy running event mnemonic: adult-clinic-chef-loop Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1613662989 Root hash: d2a5ecb145b1d078023ce9a84c1d57e7a6f9870143d38909ff3fe72fc966c90e0a53d07525d5a679e2ca61825ef2e03c (root) VirtualMap state / problem-hazard-renew-valve {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"uncle-sword-million-bargain"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"hill-define-cart-engage"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"just-delay-hub-soft"}}}
node0 5m 43.070s 2025-12-03 15:31:01.520 8714 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+25+33.669128490Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+29+20.660994822Z_seq1_minr474_maxr5474_orgn0.pces
node0 5m 43.070s 2025-12-03 15:31:01.520 8715 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 706 File: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+29+20.660994822Z_seq1_minr474_maxr5474_orgn0.pces
node0 5m 43.070s 2025-12-03 15:31:01.520 8716 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 5m 43.074s 2025-12-03 15:31:01.524 8717 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 5m 43.075s 2025-12-03 15:31:01.525 8718 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 733 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/733 {"round":733,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/733/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 5m 43.076s 2025-12-03 15:31:01.526 8719 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/59
node2 5m 43.095s 2025-12-03 15:31:01.545 8463 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 733
node2 5m 43.097s 2025-12-03 15:31:01.547 8464 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 733 Timestamp: 2025-12-03T15:31:00.250945971Z Next consensus number: 22154 Legacy running event hash: 8fe28ea7a6c693e1116075b44b8bc32b1bbfc4d7e08b68725fe4941d5f570a62c06deec1bf9edcb34c5c469673315578 Legacy running event mnemonic: adult-clinic-chef-loop Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1613662989 Root hash: d2a5ecb145b1d078023ce9a84c1d57e7a6f9870143d38909ff3fe72fc966c90e0a53d07525d5a679e2ca61825ef2e03c (root) VirtualMap state / problem-hazard-renew-valve {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"uncle-sword-million-bargain"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"hill-define-cart-engage"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"just-delay-hub-soft"}}}
node2 5m 43.105s 2025-12-03 15:31:01.555 8465 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+25+33.811195325Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+29+20.713170863Z_seq1_minr474_maxr5474_orgn0.pces
node2 5m 43.105s 2025-12-03 15:31:01.555 8466 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 706 File: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+29+20.713170863Z_seq1_minr474_maxr5474_orgn0.pces
node2 5m 43.105s 2025-12-03 15:31:01.555 8467 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 5m 43.109s 2025-12-03 15:31:01.559 8468 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 5m 43.110s 2025-12-03 15:31:01.560 8469 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 733 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/733 {"round":733,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/733/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 5m 43.111s 2025-12-03 15:31:01.561 8470 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/59
node3 5m 43.114s 2025-12-03 15:31:01.564 8489 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 733
node3 5m 43.116s 2025-12-03 15:31:01.566 8490 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 733 Timestamp: 2025-12-03T15:31:00.250945971Z Next consensus number: 22154 Legacy running event hash: 8fe28ea7a6c693e1116075b44b8bc32b1bbfc4d7e08b68725fe4941d5f570a62c06deec1bf9edcb34c5c469673315578 Legacy running event mnemonic: adult-clinic-chef-loop Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1613662989 Root hash: d2a5ecb145b1d078023ce9a84c1d57e7a6f9870143d38909ff3fe72fc966c90e0a53d07525d5a679e2ca61825ef2e03c (root) VirtualMap state / problem-hazard-renew-valve {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"uncle-sword-million-bargain"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"hill-define-cart-engage"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"just-delay-hub-soft"}}}
node3 5m 43.122s 2025-12-03 15:31:01.572 8491 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+25+33.667720Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+29+20.810306748Z_seq1_minr474_maxr5474_orgn0.pces
node3 5m 43.122s 2025-12-03 15:31:01.572 8492 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 706 File: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+29+20.810306748Z_seq1_minr474_maxr5474_orgn0.pces
node3 5m 43.123s 2025-12-03 15:31:01.573 8493 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 5m 43.127s 2025-12-03 15:31:01.577 8494 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 5m 43.127s 2025-12-03 15:31:01.577 8495 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 733 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/733 {"round":733,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/733/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 5m 43.129s 2025-12-03 15:31:01.579 8496 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/59
node4 5m 52.460s 2025-12-03 15:31:10.910 1 INFO STARTUP <main> StaticPlatformBuilder:
////////////////////// // Node is Starting // //////////////////////
node4 5m 52.546s 2025-12-03 15:31:10.996 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node4 5m 52.562s 2025-12-03 15:31:11.012 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 5m 52.675s 2025-12-03 15:31:11.125 4 INFO STARTUP <main> Browser: The following nodes [4] are set to run locally
node4 5m 52.700s 2025-12-03 15:31:11.150 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node4 5m 53.955s 2025-12-03 15:31:12.405 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1253ms
node4 5m 53.965s 2025-12-03 15:31:12.415 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node4 5m 53.969s 2025-12-03 15:31:12.419 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 5m 54.025s 2025-12-03 15:31:12.475 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node4 5m 54.091s 2025-12-03 15:31:12.541 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node4 5m 54.092s 2025-12-03 15:31:12.542 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node4 5m 54.920s 2025-12-03 15:31:13.370 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node4 5m 55.005s 2025-12-03 15:31:13.455 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 55.012s 2025-12-03 15:31:13.462 16 INFO STARTUP <main> StartupStateUtils: The following saved states were found on disk:
- /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/323 - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/187 - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/59 - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2
node4 5m 55.012s 2025-12-03 15:31:13.462 17 INFO STARTUP <main> StartupStateUtils: Loading latest state from disk.
node4 5m 55.012s 2025-12-03 15:31:13.462 18 INFO STARTUP <main> StartupStateUtils: Loading signed state from disk: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/323
node4 5m 55.020s 2025-12-03 15:31:13.470 19 INFO STATE_TO_DISK <main> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp
node4 5m 55.134s 2025-12-03 15:31:13.584 29 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 5m 55.930s 2025-12-03 15:31:14.380 31 INFO STARTUP <main> StartupStateUtils: Loaded state's hash is the same as when it was saved.
node4 5m 55.937s 2025-12-03 15:31:14.387 32 INFO STARTUP <main> StartupStateUtils: Platform has loaded a saved state {"round":323,"consensusTimestamp":"2025-12-03T15:28:00.258360Z"} [com.swirlds.logging.legacy.payload.SavedStateLoadedPayload]
node4 5m 55.943s 2025-12-03 15:31:14.393 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 55.944s 2025-12-03 15:31:14.394 38 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node4 5m 55.950s 2025-12-03 15:31:14.400 39 INFO STARTUP <main> AddressBookInitializer: Using the loaded state's address book and weight values.
node4 5m 55.959s 2025-12-03 15:31:14.409 40 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 55.963s 2025-12-03 15:31:14.413 41 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 57.062s 2025-12-03 15:31:15.512 42 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26228217] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=85930, randomLong=1615476986626035670, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=11420, randomLong=6531534561101729973, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1240631, data=35, exception=null] OS Health Check Report - Complete (took 1024 ms)
node4 5m 57.095s 2025-12-03 15:31:15.545 43 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node4 5m 57.226s 2025-12-03 15:31:15.676 44 INFO STARTUP <main> PcesUtilities: Span compaction completed for data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+25+33.471511276Z_seq0_minr1_maxr501_orgn0.pces, new upper bound is 388
node4 5m 57.229s 2025-12-03 15:31:15.679 45 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node4 5m 57.231s 2025-12-03 15:31:15.681 46 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node4 5m 57.330s 2025-12-03 15:31:15.780 47 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "Iq1XLQ==", "port": 30124 }, { "ipAddressV4": "CoAAJw==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "Ih1K7w==", "port": 30125 }, { "ipAddressV4": "CoAAJQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "I96g/w==", "port": 30126 }, { "ipAddressV4": "CoAAIw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "IkUG8w==", "port": 30127 }, { "ipAddressV4": "CoAAJA==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "IodiNg==", "port": 30128 }, { "ipAddressV4": "CoAAKA==", "port": 30128 }] }] }
node4 5m 57.357s 2025-12-03 15:31:15.807 48 INFO STARTUP <main> ConsistencyTestingToolState: State initialized with state long 5047432056479425145.
node4 5m 57.358s 2025-12-03 15:31:15.808 49 INFO STARTUP <main> ConsistencyTestingToolState: State initialized with 322 rounds handled.
node4 5m 57.359s 2025-12-03 15:31:15.809 50 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 5m 57.359s 2025-12-03 15:31:15.809 51 INFO STARTUP <main> TransactionHandlingHistory: Log file found. Parsing previous history
node4 5m 57.406s 2025-12-03 15:31:15.856 52 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 323 Timestamp: 2025-12-03T15:28:00.258360Z Next consensus number: 11644 Legacy running event hash: 55cfb55247c6acfbd1bc8a6729eb9fb325f5d154057948073611378a055f0acc9a3628d969af4c484df8c6ca13279f1a Legacy running event mnemonic: rigid-fiction-web-gaze Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1252127449 Root hash: 4d2c94715453d4afa986e8cf535438cd19cd0b464c182b99f818e044cea0db8c1fcfb486857a7e4296c84b6ea3d86171 (root) VirtualMap state / pitch-impose-media-health {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"walnut-change-close-soap"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"tired-funny-behave-obvious"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"issue-arrow-make-code"}}}
node4 5m 57.412s 2025-12-03 15:31:15.862 54 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node4 5m 57.629s 2025-12-03 15:31:16.079 55 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 55cfb55247c6acfbd1bc8a6729eb9fb325f5d154057948073611378a055f0acc9a3628d969af4c484df8c6ca13279f1a
node4 5m 57.639s 2025-12-03 15:31:16.089 56 INFO STARTUP <platformForkJoinThread-3> Shadowgraph: Shadowgraph starting from expiration threshold 296
node4 5m 57.646s 2025-12-03 15:31:16.096 58 INFO STARTUP <<start-node-4>> ConsistencyTestingToolMain: init called in Main for node 4.
node4 5m 57.646s 2025-12-03 15:31:16.096 59 INFO STARTUP <<start-node-4>> SwirldsPlatform: Starting platform 4
node4 5m 57.648s 2025-12-03 15:31:16.098 60 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node4 5m 57.651s 2025-12-03 15:31:16.101 61 INFO STARTUP <<start-node-4>> CycleFinder: No cyclical back pressure detected in wiring model.
node4 5m 57.652s 2025-12-03 15:31:16.102 62 INFO STARTUP <<start-node-4>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node4 5m 57.653s 2025-12-03 15:31:16.103 63 INFO STARTUP <<start-node-4>> InputWireChecks: All input wires have been bound.
node4 5m 57.655s 2025-12-03 15:31:16.105 64 INFO STARTUP <<start-node-4>> SwirldsPlatform: replaying preconsensus event stream starting at 296
node4 5m 57.664s 2025-12-03 15:31:16.114 65 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 184.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node4 5m 57.944s 2025-12-03 15:31:16.394 66 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:4 H:4f0ffce33e23 BR:321), num remaining: 4
node4 5m 57.945s 2025-12-03 15:31:16.395 67 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:0 H:a90ccf071450 BR:321), num remaining: 3
node4 5m 57.946s 2025-12-03 15:31:16.396 68 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:3 H:fed9bd5b7211 BR:321), num remaining: 2
node4 5m 57.946s 2025-12-03 15:31:16.396 69 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:2 H:fd5c5c02b4b1 BR:321), num remaining: 1
node4 5m 57.946s 2025-12-03 15:31:16.396 70 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:1 H:ae2307beecd7 BR:321), num remaining: 0
node4 5m 58.448s 2025-12-03 15:31:16.898 493 INFO STARTUP <<start-node-4>> PcesReplayer: Replayed 3,416 preconsensus events with max birth round 388. These events contained 4,734 transactions. 64 rounds reached consensus spanning 30.0 seconds of consensus time. The latest round to reach consensus is round 387. Replay took 791.0 milliseconds.
node4 5m 58.450s 2025-12-03 15:31:16.900 494 INFO STARTUP <<app: appMain 4>> ConsistencyTestingToolMain: run called in Main.
node4 5m 58.452s 2025-12-03 15:31:16.902 495 INFO PLATFORM_STATUS <platformForkJoinThread-5> StatusStateMachine: Platform spent 786.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node4 5m 59.317s 2025-12-03 15:31:17.767 650 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Preparing for reconnect, stopping gossip
node4 5m 59.317s 2025-12-03 15:31:17.767 651 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=387,newEventBirthRound=388,ancientThreshold=360,expiredThreshold=296] remote ev=EventWindow[latestConsensusRound=770,newEventBirthRound=771,ancientThreshold=743,expiredThreshold=669]
node4 5m 59.317s 2025-12-03 15:31:17.767 652 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=387,newEventBirthRound=388,ancientThreshold=360,expiredThreshold=296] remote ev=EventWindow[latestConsensusRound=771,newEventBirthRound=772,ancientThreshold=744,expiredThreshold=670]
node4 5m 59.317s 2025-12-03 15:31:17.767 654 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=387,newEventBirthRound=388,ancientThreshold=360,expiredThreshold=296] remote ev=EventWindow[latestConsensusRound=770,newEventBirthRound=771,ancientThreshold=743,expiredThreshold=669]
node4 5m 59.317s 2025-12-03 15:31:17.767 653 INFO RECONNECT <<platform-core: SyncProtocolWith3 4 to 3>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=387,newEventBirthRound=388,ancientThreshold=360,expiredThreshold=296] remote ev=EventWindow[latestConsensusRound=771,newEventBirthRound=772,ancientThreshold=744,expiredThreshold=670]
node4 5m 59.318s 2025-12-03 15:31:17.768 655 INFO PLATFORM_STATUS <platformForkJoinThread-8> StatusStateMachine: Platform spent 864.0 ms in OBSERVING. Now in BEHIND
node4 5m 59.318s 2025-12-03 15:31:17.768 656 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Preparing for reconnect, start clearing queues
node0 5m 59.388s 2025-12-03 15:31:17.838 9138 INFO RECONNECT <<platform-core: SyncProtocolWith4 0 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=770,newEventBirthRound=771,ancientThreshold=743,expiredThreshold=669] remote ev=EventWindow[latestConsensusRound=387,newEventBirthRound=388,ancientThreshold=360,expiredThreshold=296]
node1 5m 59.388s 2025-12-03 15:31:17.838 9010 INFO RECONNECT <<platform-core: SyncProtocolWith4 1 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=770,newEventBirthRound=771,ancientThreshold=743,expiredThreshold=669] remote ev=EventWindow[latestConsensusRound=387,newEventBirthRound=388,ancientThreshold=360,expiredThreshold=296]
node2 5m 59.388s 2025-12-03 15:31:17.838 8923 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=771,newEventBirthRound=772,ancientThreshold=744,expiredThreshold=670] remote ev=EventWindow[latestConsensusRound=387,newEventBirthRound=388,ancientThreshold=360,expiredThreshold=296]
node3 5m 59.388s 2025-12-03 15:31:17.838 8964 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=771,newEventBirthRound=772,ancientThreshold=744,expiredThreshold=670] remote ev=EventWindow[latestConsensusRound=387,newEventBirthRound=388,ancientThreshold=360,expiredThreshold=296]
node4 5m 59.470s 2025-12-03 15:31:17.920 657 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Queues have been cleared
node4 5m 59.471s 2025-12-03 15:31:17.921 658 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Waiting for a state to be obtained from a peer
node0 5m 59.565s 2025-12-03 15:31:18.015 9147 INFO RECONNECT <<platform-core: SyncProtocolWith4 0 to 4>> ReconnectStateTeacher: Starting reconnect in the role of the sender {"receiving":false,"nodeId":0,"otherNodeId":4,"round":771} [com.swirlds.logging.legacy.payload.ReconnectStartPayload]
node0 5m 59.565s 2025-12-03 15:31:18.015 9148 INFO RECONNECT <<platform-core: SyncProtocolWith4 0 to 4>> ReconnectStateTeacher: The following state will be sent to the learner:
Round: 771 Timestamp: 2025-12-03T15:31:16.753361824Z Next consensus number: 23065 Legacy running event hash: c7ba98f80d3c0f1bbeb5b57f63f2291dd1cce55684d0654e7ddaed41c5b6c4505bfe11728e3882d25594281f143cd5ca Legacy running event mnemonic: angle-three-pole-toddler Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 908943064 Root hash: 506cba6d677b1c5d7704949a7656552e430bf18732f38e32a8d4c243b7d5587bdd87da9fdcc958ccf0e8c189a28fd54e (root) VirtualMap state / assault-raven-urge-merry {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"toe-immune-food-bean"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"gentle-keen-crumble-acid"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"monster-trip-sadness-power"}}}
node0 5m 59.566s 2025-12-03 15:31:18.016 9149 INFO RECONNECT <<platform-core: SyncProtocolWith4 0 to 4>> ReconnectStateTeacher: Sending signatures from nodes 0, 1, 3 (signing weight = 37500000000/50000000000) for state hash 506cba6d677b1c5d7704949a7656552e430bf18732f38e32a8d4c243b7d5587bdd87da9fdcc958ccf0e8c189a28fd54e
node0 5m 59.566s 2025-12-03 15:31:18.016 9150 INFO RECONNECT <<platform-core: SyncProtocolWith4 0 to 4>> ReconnectStateTeacher: Starting synchronization in the role of the sender.
node4 5m 59.634s 2025-12-03 15:31:18.084 659 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> ReconnectStatePeerProtocol: Starting reconnect in role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":0,"round":387} [com.swirlds.logging.legacy.payload.ReconnectStartPayload]
node4 5m 59.635s 2025-12-03 15:31:18.085 660 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> ReconnectStateLearner: Receiving signed state signatures
node4 5m 59.637s 2025-12-03 15:31:18.087 661 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> ReconnectStateLearner: Received signatures from nodes 0, 1, 3
node0 5m 59.687s 2025-12-03 15:31:18.137 9164 INFO RECONNECT <<platform-core: SyncProtocolWith4 0 to 4>> TeachingSynchronizer: sending tree rooted at com.swirlds.virtualmap.VirtualMap with route []
node0 5m 59.696s 2025-12-03 15:31:18.146 9165 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@63c6ee4 start run()
node4 5m 59.834s 2025-12-03 15:31:18.284 688 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> LearningSynchronizer: learner calls receiveTree()
node4 5m 59.835s 2025-12-03 15:31:18.285 689 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> LearningSynchronizer: synchronizing tree
node4 5m 59.835s 2025-12-03 15:31:18.285 690 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> LearningSynchronizer: receiving tree rooted at com.swirlds.virtualmap.VirtualMap with route []
node4 5m 59.842s 2025-12-03 15:31:18.292 691 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@3dfec2c2 start run()
node4 5m 59.899s 2025-12-03 15:31:18.349 692 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> ReconnectNodeRemover: setPathInformation(): firstLeafPath: 4 -> 4, lastLeafPath: 8 -> 8
node4 5m 59.899s 2025-12-03 15:31:18.349 693 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> ReconnectNodeRemover: setPathInformation(): done
node4 6.001m 2025-12-03 15:31:18.518 694 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread finished the learning loop for the current subtree
node4 6.001m 2025-12-03 15:31:18.519 695 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: call nodeRemover.allNodesReceived()
node4 6.001m 2025-12-03 15:31:18.519 696 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> ReconnectNodeRemover: allNodesReceived()
node4 6.001m 2025-12-03 15:31:18.519 697 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> ReconnectNodeRemover: allNodesReceived(): done
node4 6.001m 2025-12-03 15:31:18.519 698 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: call root.endLearnerReconnect()
node4 6.001m 2025-12-03 15:31:18.519 699 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call reconnectIterator.close()
node4 6.001m 2025-12-03 15:31:18.520 700 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call setHashPrivate()
node4 6.002m 2025-12-03 15:31:18.542 710 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call postInit()
node4 6.002m 2025-12-03 15:31:18.543 712 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: endLearnerReconnect() complete
node4 6.002m 2025-12-03 15:31:18.543 713 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: close() complete
node4 6.002m 2025-12-03 15:31:18.543 714 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread closed input, output, and view for the current subtree
node4 6.002m 2025-12-03 15:31:18.544 715 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@3dfec2c2 finish run()
node4 6.002m 2025-12-03 15:31:18.545 716 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> LearningSynchronizer: received tree rooted at com.swirlds.virtualmap.VirtualMap with route []
node4 6.002m 2025-12-03 15:31:18.545 717 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> LearningSynchronizer: synchronization complete
node4 6.002m 2025-12-03 15:31:18.545 718 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> LearningSynchronizer: learner calls initialize()
node4 6.002m 2025-12-03 15:31:18.546 719 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> LearningSynchronizer: initializing tree
node4 6.002m 2025-12-03 15:31:18.546 720 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> LearningSynchronizer: initialization complete
node4 6.002m 2025-12-03 15:31:18.546 721 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> LearningSynchronizer: learner calls hash()
node4 6.002m 2025-12-03 15:31:18.546 722 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> LearningSynchronizer: hashing tree
node4 6.002m 2025-12-03 15:31:18.546 723 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> LearningSynchronizer: hashing complete
node4 6.002m 2025-12-03 15:31:18.546 724 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> LearningSynchronizer: learner calls logStatistics()
node4 6.002m 2025-12-03 15:31:18.549 725 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> LearningSynchronizer: Finished synchronization {"timeInSeconds":0.26,"hashTimeInSeconds":0.0,"initializationTimeInSeconds":0.0,"totalNodes":9,"leafNodes":5,"redundantLeafNodes":2,"internalNodes":4,"redundantInternalNodes":0} [com.swirlds.logging.legacy.payload.SynchronizationCompletePayload]
node4 6.002m 2025-12-03 15:31:18.550 726 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> LearningSynchronizer: ReconnectMapMetrics: transfersFromTeacher=9; transfersFromLearner=8; internalHashes=3; internalCleanHashes=0; internalData=0; internalCleanData=0; leafHashes=5; leafCleanHashes=2; leafData=5; leafCleanData=2
node4 6.002m 2025-12-03 15:31:18.550 727 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> LearningSynchronizer: learner is done synchronizing
node4 6.002m 2025-12-03 15:31:18.551 728 INFO STARTUP <<platform-core: SyncProtocolWith0 4 to 0>> ConsistencyTestingToolState: New State Constructed.
node4 6.002m 2025-12-03 15:31:18.556 729 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> ReconnectStateLearner: Reconnect data usage report {"dataMegabytes":0.0058650970458984375} [com.swirlds.logging.legacy.payload.ReconnectDataUsagePayload]
node0 6.002m 2025-12-03 15:31:18.564 9169 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@63c6ee4 finish run()
node0 6.002m 2025-12-03 15:31:18.564 9170 INFO RECONNECT <<platform-core: SyncProtocolWith4 0 to 4>> TeachingSynchronizer: finished sending tree
node0 6.002m 2025-12-03 15:31:18.567 9173 INFO RECONNECT <<platform-core: SyncProtocolWith4 0 to 4>> ReconnectStateTeacher: Finished synchronization in the role of the sender.
node0 6.003m 2025-12-03 15:31:18.628 9177 INFO RECONNECT <<platform-core: SyncProtocolWith4 0 to 4>> ReconnectStateTeacher: Finished reconnect in the role of the sender. {"receiving":false,"nodeId":0,"otherNodeId":4,"round":771} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload]
node4 6.003m 2025-12-03 15:31:18.640 730 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> ReconnectStatePeerProtocol: Finished reconnect in the role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":0,"round":771} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload]
node4 6.003m 2025-12-03 15:31:18.641 731 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> ReconnectStatePeerProtocol: Information for state received during reconnect:
Round: 771 Timestamp: 2025-12-03T15:31:16.753361824Z Next consensus number: 23065 Legacy running event hash: c7ba98f80d3c0f1bbeb5b57f63f2291dd1cce55684d0654e7ddaed41c5b6c4505bfe11728e3882d25594281f143cd5ca Legacy running event mnemonic: angle-three-pole-toddler Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 908943064 Root hash: 506cba6d677b1c5d7704949a7656552e430bf18732f38e32a8d4c243b7d5587bdd87da9fdcc958ccf0e8c189a28fd54e (root) VirtualMap state / assault-raven-urge-merry {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"monster-trip-sadness-power"}}}
node4 6.003m 2025-12-03 15:31:18.642 732 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: A state was obtained from a peer
node4 6.003m 2025-12-03 15:31:18.644 733 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: The state obtained from a peer was validated
node4 6.003m 2025-12-03 15:31:18.644 735 DEBUG RECONNECT <<platform-core: reconnectController>> ReconnectController: `loadState` : reloading state
node4 6.003m 2025-12-03 15:31:18.645 736 INFO STARTUP <<platform-core: reconnectController>> ConsistencyTestingToolState: State initialized with state long -2244856604908784462.
node4 6.003m 2025-12-03 15:31:18.645 737 INFO STARTUP <<platform-core: reconnectController>> ConsistencyTestingToolState: State initialized with 770 rounds handled.
node4 6.003m 2025-12-03 15:31:18.645 738 INFO STARTUP <<platform-core: reconnectController>> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 6.003m 2025-12-03 15:31:18.645 739 INFO STARTUP <<platform-core: reconnectController>> TransactionHandlingHistory: Log file found. Parsing previous history
node4 6.004m 2025-12-03 15:31:18.671 744 INFO STATE_TO_DISK <<platform-core: reconnectController>> DefaultSavedStateController: Signed state from round 771 created, will eventually be written to disk, for reason: RECONNECT
node4 6.004m 2025-12-03 15:31:18.672 745 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 903.0 ms in BEHIND. Now in RECONNECT_COMPLETE
node4 6.004m 2025-12-03 15:31:18.673 747 INFO STARTUP <platformForkJoinThread-7> Shadowgraph: Shadowgraph starting from expiration threshold 744
node4 6.004m 2025-12-03 15:31:18.675 749 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 771 state to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/771
node4 6.004m 2025-12-03 15:31:18.676 750 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for 771
node4 6.004m 2025-12-03 15:31:18.681 754 INFO EVENT_STREAM <<platform-core: reconnectController>> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: c7ba98f80d3c0f1bbeb5b57f63f2291dd1cce55684d0654e7ddaed41c5b6c4505bfe11728e3882d25594281f143cd5ca
node4 6.004m 2025-12-03 15:31:18.682 757 INFO STARTUP <platformForkJoinThread-5> PcesFileManager: Due to recent operations on this node, the local preconsensus event stream will have a discontinuity. The last file with the old origin round is 2025-12-03T15+25+33.471511276Z_seq0_minr1_maxr388_orgn0.pces. All future files will have an origin round of 771.
node4 6.004m 2025-12-03 15:31:18.682 759 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Reconnect almost done resuming gossip
node4 6.006m 2025-12-03 15:31:18.829 788 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for 771
node4 6.006m 2025-12-03 15:31:18.833 789 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 771
Timestamp: 2025-12-03T15:31:16.753361824Z
Next consensus number: 23065
Legacy running event hash: c7ba98f80d3c0f1bbeb5b57f63f2291dd1cce55684d0654e7ddaed41c5b6c4505bfe11728e3882d25594281f143cd5ca
Legacy running event mnemonic: angle-three-pole-toddler
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 908943064
Root hash: 506cba6d677b1c5d7704949a7656552e430bf18732f38e32a8d4c243b7d5587bdd87da9fdcc958ccf0e8c189a28fd54e
(root) VirtualMap state / assault-raven-urge-merry {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"toe-immune-food-bean"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"gentle-keen-crumble-acid"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"monster-trip-sadness-power"}}}
node4 6.007m 2025-12-03 15:31:18.864 790 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+25+33.471511276Z_seq0_minr1_maxr388_orgn0.pces
node4 6.007m 2025-12-03 15:31:18.864 791 WARN STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: No preconsensus event files meeting specified criteria found to copy. Lower bound: 744
node4 6.007m 2025-12-03 15:31:18.870 792 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 771 to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/771 {"round":771,"freezeState":false,"reason":"RECONNECT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/771/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 6.007m 2025-12-03 15:31:18.873 793 INFO PLATFORM_STATUS <platformForkJoinThread-8> StatusStateMachine: Platform spent 200.0 ms in RECONNECT_COMPLETE. Now in CHECKING
node4 6.011m 2025-12-03 15:31:19.109 794 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ]
node4 6.011m 2025-12-03 15:31:19.115 795 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node4 6m 1.378s 2025-12-03 15:31:19.828 796 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:2 H:8ab7266af0f2 BR:769), num remaining: 3
node4 6m 1.379s 2025-12-03 15:31:19.829 797 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:3 H:590a1449aa92 BR:769), num remaining: 2
node4 6m 1.380s 2025-12-03 15:31:19.830 798 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:1 H:59fc865fa43a BR:770), num remaining: 1
node4 6m 1.380s 2025-12-03 15:31:19.830 799 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:0 H:6e8540c24e03 BR:770), num remaining: 0
node4 6m 5.081s 2025-12-03 15:31:23.531 924 INFO PLATFORM_STATUS <platformForkJoinThread-8> StatusStateMachine: Platform spent 4.7 s in CHECKING. Now in ACTIVE
node2 6m 42.854s 2025-12-03 15:32:01.304 9989 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 866 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 6m 42.966s 2025-12-03 15:32:01.416 10072 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 866 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 6m 42.977s 2025-12-03 15:32:01.427 10231 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 866 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 6m 42.988s 2025-12-03 15:32:01.438 1833 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 866 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 6m 43.132s 2025-12-03 15:32:01.582 10030 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 866 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 6m 43.194s 2025-12-03 15:32:01.644 9992 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 866 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/866
node2 6m 43.195s 2025-12-03 15:32:01.645 9993 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 866
node4 6m 43.201s 2025-12-03 15:32:01.651 1836 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 866 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/866
node4 6m 43.202s 2025-12-03 15:32:01.652 1837 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for 866
node3 6m 43.206s 2025-12-03 15:32:01.656 10033 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 866 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/866
node3 6m 43.207s 2025-12-03 15:32:01.657 10034 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 866
node0 6m 43.264s 2025-12-03 15:32:01.714 10234 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 866 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/866
node0 6m 43.265s 2025-12-03 15:32:01.715 10235 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for 866
node2 6m 43.279s 2025-12-03 15:32:01.729 10024 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 866
node2 6m 43.281s 2025-12-03 15:32:01.731 10025 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 866
Timestamp: 2025-12-03T15:32:00.315579Z
Next consensus number: 26409
Legacy running event hash: d40caa07272aa21adbf4521a54258c02115882f56b5ac9d67d0ce80df9f7542f06ebb2f2311525c83525328621e75a4f
Legacy running event mnemonic: leader-sand-torch-rubber
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 611256282
Root hash: b103aad29c1d0bb583c38b62e4e81621ffb0601761f8838482a755b9b40ab7ef4755cd07ce6dc038c7a97a539a8b148d
(root) VirtualMap state / text-place-ladder-issue {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"inform-either-maze-smart"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"velvet-bind-ship-book"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"kidney-blade-east-bird"}}}
node2 6m 43.291s 2025-12-03 15:32:01.741 10026 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+25+33.811195325Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+29+20.713170863Z_seq1_minr474_maxr5474_orgn0.pces
node3 6m 43.291s 2025-12-03 15:32:01.741 10065 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 866
node3 6m 43.293s 2025-12-03 15:32:01.743 10066 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 866
Timestamp: 2025-12-03T15:32:00.315579Z
Next consensus number: 26409
Legacy running event hash: d40caa07272aa21adbf4521a54258c02115882f56b5ac9d67d0ce80df9f7542f06ebb2f2311525c83525328621e75a4f
Legacy running event mnemonic: leader-sand-torch-rubber
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 611256282
Root hash: b103aad29c1d0bb583c38b62e4e81621ffb0601761f8838482a755b9b40ab7ef4755cd07ce6dc038c7a97a539a8b148d
(root) VirtualMap state / text-place-ladder-issue {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"inform-either-maze-smart"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"velvet-bind-ship-book"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"kidney-blade-east-bird"}}}
node2 6m 43.294s 2025-12-03 15:32:01.744 10027 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 839
File: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+29+20.713170863Z_seq1_minr474_maxr5474_orgn0.pces
node2 6m 43.295s 2025-12-03 15:32:01.745 10028 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 6m 43.300s 2025-12-03 15:32:01.750 10067 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+25+33.667720Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+29+20.810306748Z_seq1_minr474_maxr5474_orgn0.pces
node2 6m 43.301s 2025-12-03 15:32:01.751 10029 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 6m 43.302s 2025-12-03 15:32:01.752 10030 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 866 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/866 {"round":866,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/866/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 6m 43.303s 2025-12-03 15:32:01.753 10068 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 839
File: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+29+20.810306748Z_seq1_minr474_maxr5474_orgn0.pces
node3 6m 43.303s 2025-12-03 15:32:01.753 10069 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 6m 43.304s 2025-12-03 15:32:01.754 10031 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/187
node3 6m 43.310s 2025-12-03 15:32:01.760 10070 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 6m 43.310s 2025-12-03 15:32:01.760 10071 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 866 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/866 {"round":866,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/866/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 6m 43.312s 2025-12-03 15:32:01.762 10072 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/187
node4 6m 43.314s 2025-12-03 15:32:01.764 1882 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for 866
node4 6m 43.316s 2025-12-03 15:32:01.766 1883 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 866
Timestamp: 2025-12-03T15:32:00.315579Z
Next consensus number: 26409
Legacy running event hash: d40caa07272aa21adbf4521a54258c02115882f56b5ac9d67d0ce80df9f7542f06ebb2f2311525c83525328621e75a4f
Legacy running event mnemonic: leader-sand-torch-rubber
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 611256282
Root hash: b103aad29c1d0bb583c38b62e4e81621ffb0601761f8838482a755b9b40ab7ef4755cd07ce6dc038c7a97a539a8b148d
(root) VirtualMap state / text-place-ladder-issue {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"inform-either-maze-smart"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"velvet-bind-ship-book"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"kidney-blade-east-bird"}}}
node4 6m 43.327s 2025-12-03 15:32:01.777 1884 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+25+33.471511276Z_seq0_minr1_maxr388_orgn0.pces
Last file: data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+31+19.260824322Z_seq1_minr744_maxr1244_orgn771.pces
node4 6m 43.327s 2025-12-03 15:32:01.777 1885 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 839
File: data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+31+19.260824322Z_seq1_minr744_maxr1244_orgn771.pces
node4 6m 43.327s 2025-12-03 15:32:01.777 1886 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 6m 43.332s 2025-12-03 15:32:01.782 1887 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 6m 43.333s 2025-12-03 15:32:01.783 1888 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 866 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/866 {"round":866,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/866/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 6m 43.334s 2025-12-03 15:32:01.784 1889 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2
node1 6m 43.335s 2025-12-03 15:32:01.785 10075 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 866 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/866
node1 6m 43.336s 2025-12-03 15:32:01.786 10076 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 866
node0 6m 43.343s 2025-12-03 15:32:01.793 10274 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for 866
node0 6m 43.345s 2025-12-03 15:32:01.795 10275 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 866
Timestamp: 2025-12-03T15:32:00.315579Z
Next consensus number: 26409
Legacy running event hash: d40caa07272aa21adbf4521a54258c02115882f56b5ac9d67d0ce80df9f7542f06ebb2f2311525c83525328621e75a4f
Legacy running event mnemonic: leader-sand-torch-rubber
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 611256282
Root hash: b103aad29c1d0bb583c38b62e4e81621ffb0601761f8838482a755b9b40ab7ef4755cd07ce6dc038c7a97a539a8b148d
(root) VirtualMap state / text-place-ladder-issue {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"inform-either-maze-smart"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"velvet-bind-ship-book"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"kidney-blade-east-bird"}}}
node0 6m 43.351s 2025-12-03 15:32:01.801 10276 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+25+33.669128490Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+29+20.660994822Z_seq1_minr474_maxr5474_orgn0.pces
node0 6m 43.351s 2025-12-03 15:32:01.801 10277 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 839
File: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+29+20.660994822Z_seq1_minr474_maxr5474_orgn0.pces
node0 6m 43.351s 2025-12-03 15:32:01.801 10278 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 6m 43.358s 2025-12-03 15:32:01.808 10279 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 6m 43.359s 2025-12-03 15:32:01.809 10280 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 866 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/866 {"round":866,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/866/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 6m 43.360s 2025-12-03 15:32:01.810 10281 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/187
node1 6m 43.420s 2025-12-03 15:32:01.870 10113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 866
node1 6m 43.422s 2025-12-03 15:32:01.872 10114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 866
Timestamp: 2025-12-03T15:32:00.315579Z
Next consensus number: 26409
Legacy running event hash: d40caa07272aa21adbf4521a54258c02115882f56b5ac9d67d0ce80df9f7542f06ebb2f2311525c83525328621e75a4f
Legacy running event mnemonic: leader-sand-torch-rubber
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 611256282
Root hash: b103aad29c1d0bb583c38b62e4e81621ffb0601761f8838482a755b9b40ab7ef4755cd07ce6dc038c7a97a539a8b148d
(root) VirtualMap state / text-place-ladder-issue {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"inform-either-maze-smart"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"velvet-bind-ship-book"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"kidney-blade-east-bird"}}}
node1 6m 43.432s 2025-12-03 15:32:01.882 10115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+29+20.660623734Z_seq1_minr474_maxr5474_orgn0.pces
Last file: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+25+33.775812400Z_seq0_minr1_maxr501_orgn0.pces
node1 6m 43.435s 2025-12-03 15:32:01.885 10116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 839
File: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+29+20.660623734Z_seq1_minr474_maxr5474_orgn0.pces
node1 6m 43.436s 2025-12-03 15:32:01.886 10117 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 6m 43.442s 2025-12-03 15:32:01.892 10118 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 6m 43.443s 2025-12-03 15:32:01.893 10119 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 866 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/866 {"round":866,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/866/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 6m 43.445s 2025-12-03 15:32:01.895 10120 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/187
node3 7m 43.689s 2025-12-03 15:33:02.139 11487 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 998 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 7m 43.692s 2025-12-03 15:33:02.142 11686 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 998 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 7m 43.722s 2025-12-03 15:33:02.172 11573 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 998 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 7m 43.841s 2025-12-03 15:33:02.291 3298 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 998 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 7m 43.844s 2025-12-03 15:33:02.294 11480 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 998 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 7m 43.895s 2025-12-03 15:33:02.345 11489 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 998 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/998
node2 7m 43.896s 2025-12-03 15:33:02.346 11490 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for 998
node1 7m 43.911s 2025-12-03 15:33:02.361 11582 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 998 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/998
node1 7m 43.912s 2025-12-03 15:33:02.362 11583 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for 998
node0 7m 43.917s 2025-12-03 15:33:02.367 11695 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 998 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/998
node0 7m 43.917s 2025-12-03 15:33:02.367 11696 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/59 for 998
node4 7m 43.986s 2025-12-03 15:33:02.436 3307 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 998 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/998
node4 7m 43.987s 2025-12-03 15:33:02.437 3308 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/17 for 998
node1 7m 43.991s 2025-12-03 15:33:02.441 11634 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for 998
node1 7m 43.993s 2025-12-03 15:33:02.443 11635 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 998
Timestamp: 2025-12-03T15:33:00.159038Z
Next consensus number: 31155
Legacy running event hash: 9baf3e77507287637024c0b764a8a38b3c628d27bdcaf2c02fa13832b48944865857b89c85547f159e5dba46e5ac360b
Legacy running event mnemonic: mechanic-solve-atom-beauty
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -186347110
Root hash: 735561838dbeee4e0f3341663fca5414b22e533dde4c01ea468659edc47e0491f4c269b6787d130ee8f72d07b7c14fb0
(root) VirtualMap state / height-corn-embark-curve {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"sunset-destroy-theory-boss"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"exotic-fury-can-chef"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"wife-sand-word-critic"}}}
node1 7m 43.998s 2025-12-03 15:33:02.448 11636 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+29+20.660623734Z_seq1_minr474_maxr5474_orgn0.pces
Last file: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+25+33.775812400Z_seq0_minr1_maxr501_orgn0.pces
node1 7m 43.998s 2025-12-03 15:33:02.448 11637 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 971
File: data/saved/preconsensus-events/1/2025/12/03/2025-12-03T15+29+20.660623734Z_seq1_minr474_maxr5474_orgn0.pces
node1 7m 43.998s 2025-12-03 15:33:02.448 11638 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 7m 44.001s 2025-12-03 15:33:02.451 11541 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for 998
node0 7m 44.002s 2025-12-03 15:33:02.452 11731 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/59 for 998
node2 7m 44.003s 2025-12-03 15:33:02.453 11542 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 998
Timestamp: 2025-12-03T15:33:00.159038Z
Next consensus number: 31155
Legacy running event hash: 9baf3e77507287637024c0b764a8a38b3c628d27bdcaf2c02fa13832b48944865857b89c85547f159e5dba46e5ac360b
Legacy running event mnemonic: mechanic-solve-atom-beauty
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -186347110
Root hash: 735561838dbeee4e0f3341663fca5414b22e533dde4c01ea468659edc47e0491f4c269b6787d130ee8f72d07b7c14fb0
(root) VirtualMap state / height-corn-embark-curve {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"sunset-destroy-theory-boss"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"exotic-fury-can-chef"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"wife-sand-word-critic"}}}
node0 7m 44.004s 2025-12-03 15:33:02.454 11732 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 998
Timestamp: 2025-12-03T15:33:00.159038Z
Next consensus number: 31155
Legacy running event hash: 9baf3e77507287637024c0b764a8a38b3c628d27bdcaf2c02fa13832b48944865857b89c85547f159e5dba46e5ac360b
Legacy running event mnemonic: mechanic-solve-atom-beauty
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -186347110
Root hash: 735561838dbeee4e0f3341663fca5414b22e533dde4c01ea468659edc47e0491f4c269b6787d130ee8f72d07b7c14fb0
(root) VirtualMap state / height-corn-embark-curve {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"sunset-destroy-theory-boss"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"exotic-fury-can-chef"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"wife-sand-word-critic"}}}
node1 7m 44.008s 2025-12-03 15:33:02.458 11639 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 7m 44.009s 2025-12-03 15:33:02.459 11640 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 998 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/998 {"round":998,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/998/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 7m 44.010s 2025-12-03 15:33:02.460 11641 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/323
node0 7m 44.011s 2025-12-03 15:33:02.461 11733 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+25+33.669128490Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+29+20.660994822Z_seq1_minr474_maxr5474_orgn0.pces
node0 7m 44.011s 2025-12-03 15:33:02.461 11734 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 971
File: data/saved/preconsensus-events/0/2025/12/03/2025-12-03T15+29+20.660994822Z_seq1_minr474_maxr5474_orgn0.pces
node0 7m 44.011s 2025-12-03 15:33:02.461 11735 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 7m 44.011s 2025-12-03 15:33:02.461 11543 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+25+33.811195325Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+29+20.713170863Z_seq1_minr474_maxr5474_orgn0.pces
node2 7m 44.011s 2025-12-03 15:33:02.461 11544 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 971
File: data/saved/preconsensus-events/2/2025/12/03/2025-12-03T15+29+20.713170863Z_seq1_minr474_maxr5474_orgn0.pces
node2 7m 44.011s 2025-12-03 15:33:02.461 11545 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 7m 44.021s 2025-12-03 15:33:02.471 11736 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 7m 44.021s 2025-12-03 15:33:02.471 11546 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 7m 44.022s 2025-12-03 15:33:02.472 11737 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 998 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/998 {"round":998,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/998/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 7m 44.022s 2025-12-03 15:33:02.472 11547 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 998 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/998 {"round":998,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/998/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 7m 44.023s 2025-12-03 15:33:02.473 11738 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/323
node2 7m 44.023s 2025-12-03 15:33:02.473 11548 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/323
node3 7m 44.059s 2025-12-03 15:33:02.509 11496 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 998 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/998
node3 7m 44.060s 2025-12-03 15:33:02.510 11497 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for 998
node4 7m 44.117s 2025-12-03 15:33:02.567 3365 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/17 for 998
node4 7m 44.119s 2025-12-03 15:33:02.569 3366 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 998
Timestamp: 2025-12-03T15:33:00.159038Z
Next consensus number: 31155
Legacy running event hash: 9baf3e77507287637024c0b764a8a38b3c628d27bdcaf2c02fa13832b48944865857b89c85547f159e5dba46e5ac360b
Legacy running event mnemonic: mechanic-solve-atom-beauty
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -186347110
Root hash: 735561838dbeee4e0f3341663fca5414b22e533dde4c01ea468659edc47e0491f4c269b6787d130ee8f72d07b7c14fb0
(root) VirtualMap state / height-corn-embark-curve {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"sunset-destroy-theory-boss"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"exotic-fury-can-chef"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"wife-sand-word-critic"}}}
node4 7m 44.129s 2025-12-03 15:33:02.579 3367 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+25+33.471511276Z_seq0_minr1_maxr388_orgn0.pces
Last file: data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+31+19.260824322Z_seq1_minr744_maxr1244_orgn771.pces
node4 7m 44.129s 2025-12-03 15:33:02.579 3368 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 971
File: data/saved/preconsensus-events/4/2025/12/03/2025-12-03T15+31+19.260824322Z_seq1_minr744_maxr1244_orgn771.pces
node4 7m 44.129s 2025-12-03 15:33:02.579 3369 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 7m 44.136s 2025-12-03 15:33:02.586 3370 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 7m 44.136s 2025-12-03 15:33:02.586 3371 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 998 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/998 {"round":998,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/998/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 7m 44.138s 2025-12-03 15:33:02.588 3372 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/59
node3 7m 44.139s 2025-12-03 15:33:02.589 11532 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for 998
node3 7m 44.141s 2025-12-03 15:33:02.591 11533 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 998
Timestamp: 2025-12-03T15:33:00.159038Z
Next consensus number: 31155
Legacy running event hash: 9baf3e77507287637024c0b764a8a38b3c628d27bdcaf2c02fa13832b48944865857b89c85547f159e5dba46e5ac360b
Legacy running event mnemonic: mechanic-solve-atom-beauty
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -186347110
Root hash: 735561838dbeee4e0f3341663fca5414b22e533dde4c01ea468659edc47e0491f4c269b6787d130ee8f72d07b7c14fb0
(root) VirtualMap state / height-corn-embark-curve {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"sunset-destroy-theory-boss"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"exotic-fury-can-chef"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"other-erosion-scorpion-lunch"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"wife-sand-word-critic"}}}
node3 7m 44.148s 2025-12-03 15:33:02.598 11534 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+25+33.667720Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+29+20.810306748Z_seq1_minr474_maxr5474_orgn0.pces
node3 7m 44.148s 2025-12-03 15:33:02.598 11535 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 971
File: data/saved/preconsensus-events/3/2025/12/03/2025-12-03T15+29+20.810306748Z_seq1_minr474_maxr5474_orgn0.pces
node3 7m 44.148s 2025-12-03 15:33:02.598 11536 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 7m 44.158s 2025-12-03 15:33:02.608 11537 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 7m 44.158s 2025-12-03 15:33:02.608 11538 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 998 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/998 {"round":998,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/998/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 7m 44.160s 2025-12-03 15:33:02.610 11539 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/323
node0 7m 57.122s 2025-12-03 15:33:15.572 12074 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith1 0 to 1>> NetworkUtils: Connection broken: 0 -> 1
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-03T15:33:15.571384059Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
node2 7m 57.123s 2025-12-03 15:33:15.573 11866 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith1 2 to 1>> NetworkUtils: Connection broken: 2 <- 1
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-03T15:33:15.571638563Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
node3 7m 57.124s 2025-12-03 15:33:15.574 11895 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith1 3 to 1>> NetworkUtils: Connection broken: 3 <- 1
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-03T15:33:15.570940968Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
node4 7m 57.127s 2025-12-03 15:33:15.577 3700 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith1 4 to 1>> NetworkUtils: Connection broken: 4 <- 1
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-03T15:33:15.575569348Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more