Columns: Node ID, elapsed time since launch, timestamp, per-node sequence number, Log Level, Log Marker, thread, Class, message

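The log lines below follow a fixed column layout: node ID, elapsed time, timestamp, per-node sequence number, level, marker, thread in angle brackets, class, then the free-form message. Continuation lines (banners, JSON roster payloads, scratchpad contents) carry no prefix. A minimal parsing sketch follows; the regex and field names are illustrative assumptions, not part of any official tooling for these logs.

```python
import re

# Hypothetical pattern for the column layout shown in these logs.
# Thread names appear as <main> or <<pool: worker>>, hence the <+ ... >+.
LINE_RE = re.compile(
    r"^(?P<node>node\d+)\s+"
    r"(?P<elapsed>\S+)\s+"
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})\s+"
    r"(?P<seq>\d+)\s+"
    r"(?P<level>TRACE|DEBUG|INFO|WARN|ERROR)\s+"
    r"(?P<marker>\S+)\s+"
    r"<+(?P<thread>[^>]*)>+\s+"
    r"(?P<cls>\S+):\s?"
    r"(?P<message>.*)$"
)

def parse_line(line: str):
    """Return a dict of fields for a prefixed log line, or None for
    continuation lines (banners, JSON payloads) that carry no prefix."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None
```

For example, `parse_line` on the `PlatformConfigUtils` warning lines yields `level="WARN"`, `marker="STARTUP"`, `thread="main"`, while the `// Node is Starting //` banner lines return `None` and can be appended to the previous entry's message.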
node1 0.000ns 2025-12-04 17:28:14.995 1 INFO STARTUP <main> StaticPlatformBuilder:
////////////////////// // Node is Starting // //////////////////////
node0 46.000ms 2025-12-04 17:28:15.041 1 INFO STARTUP <main> StaticPlatformBuilder:
////////////////////// // Node is Starting // //////////////////////
node1 87.000ms 2025-12-04 17:28:15.082 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node1 103.000ms 2025-12-04 17:28:15.098 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node0 133.000ms 2025-12-04 17:28:15.128 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node0 149.000ms 2025-12-04 17:28:15.144 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 206.000ms 2025-12-04 17:28:15.201 1 INFO STARTUP <main> StaticPlatformBuilder:
////////////////////// // Node is Starting // //////////////////////
node1 211.000ms 2025-12-04 17:28:15.206 4 INFO STARTUP <main> Browser: The following nodes [1] are set to run locally
node1 237.000ms 2025-12-04 17:28:15.232 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node0 255.000ms 2025-12-04 17:28:15.250 4 INFO STARTUP <main> Browser: The following nodes [0] are set to run locally
node0 281.000ms 2025-12-04 17:28:15.276 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node2 292.000ms 2025-12-04 17:28:15.287 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node2 308.000ms 2025-12-04 17:28:15.303 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 415.000ms 2025-12-04 17:28:15.410 4 INFO STARTUP <main> Browser: The following nodes [2] are set to run locally
node2 440.000ms 2025-12-04 17:28:15.435 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node0 1.419s 2025-12-04 17:28:16.414 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1137ms
node0 1.428s 2025-12-04 17:28:16.423 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node0 1.431s 2025-12-04 17:28:16.426 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node3 1.449s 2025-12-04 17:28:16.444 1 INFO STARTUP <main> StaticPlatformBuilder:
////////////////////// // Node is Starting // //////////////////////
node0 1.468s 2025-12-04 17:28:16.463 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node0 1.534s 2025-12-04 17:28:16.529 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node0 1.535s 2025-12-04 17:28:16.530 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node1 1.540s 2025-12-04 17:28:16.535 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1302ms
node3 1.545s 2025-12-04 17:28:16.540 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node1 1.549s 2025-12-04 17:28:16.544 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node1 1.551s 2025-12-04 17:28:16.546 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node3 1.563s 2025-12-04 17:28:16.558 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node1 1.604s 2025-12-04 17:28:16.599 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node1 1.666s 2025-12-04 17:28:16.661 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node1 1.667s 2025-12-04 17:28:16.662 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node3 1.682s 2025-12-04 17:28:16.677 4 INFO STARTUP <main> Browser: The following nodes [3] are set to run locally
node2 1.701s 2025-12-04 17:28:16.696 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1261ms
node2 1.710s 2025-12-04 17:28:16.705 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node3 1.712s 2025-12-04 17:28:16.707 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node2 1.713s 2025-12-04 17:28:16.708 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node2 1.751s 2025-12-04 17:28:16.746 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node2 1.819s 2025-12-04 17:28:16.814 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node2 1.820s 2025-12-04 17:28:16.815 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node4 1.940s 2025-12-04 17:28:16.935 1 INFO STARTUP <main> StaticPlatformBuilder:
////////////////////// // Node is Starting // //////////////////////
node4 2.032s 2025-12-04 17:28:17.027 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node4 2.049s 2025-12-04 17:28:17.044 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 2.163s 2025-12-04 17:28:17.158 4 INFO STARTUP <main> Browser: The following nodes [4] are set to run locally
node4 2.191s 2025-12-04 17:28:17.186 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node0 2.337s 2025-12-04 17:28:17.332 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node0 2.423s 2025-12-04 17:28:17.418 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 2.425s 2025-12-04 17:28:17.420 16 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node0 2.460s 2025-12-04 17:28:17.455 21 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node1 2.469s 2025-12-04 17:28:17.464 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node1 2.564s 2025-12-04 17:28:17.559 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 2.566s 2025-12-04 17:28:17.561 16 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node1 2.602s 2025-12-04 17:28:17.597 21 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node2 2.630s 2025-12-04 17:28:17.625 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node2 2.725s 2025-12-04 17:28:17.720 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 2.727s 2025-12-04 17:28:17.722 16 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node2 2.760s 2025-12-04 17:28:17.755 21 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node3 3.127s 2025-12-04 17:28:18.122 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1414ms
node3 3.136s 2025-12-04 17:28:18.131 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node3 3.139s 2025-12-04 17:28:18.134 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node3 3.177s 2025-12-04 17:28:18.172 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node0 3.237s 2025-12-04 17:28:18.232 24 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 3.240s 2025-12-04 17:28:18.235 27 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node3 3.243s 2025-12-04 17:28:18.238 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node3 3.243s 2025-12-04 17:28:18.238 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node0 3.246s 2025-12-04 17:28:18.241 28 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node0 3.255s 2025-12-04 17:28:18.250 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node0 3.257s 2025-12-04 17:28:18.252 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 3.349s 2025-12-04 17:28:18.344 24 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 3.351s 2025-12-04 17:28:18.346 27 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node1 3.357s 2025-12-04 17:28:18.352 28 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node1 3.365s 2025-12-04 17:28:18.360 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node1 3.368s 2025-12-04 17:28:18.363 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 3.392s 2025-12-04 17:28:18.387 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1200ms
node4 3.402s 2025-12-04 17:28:18.397 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node4 3.405s 2025-12-04 17:28:18.400 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 3.454s 2025-12-04 17:28:18.449 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node2 3.510s 2025-12-04 17:28:18.505 24 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node2 3.512s 2025-12-04 17:28:18.507 27 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node2 3.517s 2025-12-04 17:28:18.512 28 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node2 3.526s 2025-12-04 17:28:18.521 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 3.527s 2025-12-04 17:28:18.522 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node4 3.528s 2025-12-04 17:28:18.523 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node2 3.529s 2025-12-04 17:28:18.524 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 4.136s 2025-12-04 17:28:19.131 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node3 4.225s 2025-12-04 17:28:19.220 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 4.227s 2025-12-04 17:28:19.222 16 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node3 4.261s 2025-12-04 17:28:19.256 21 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 4.364s 2025-12-04 17:28:19.359 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node0 4.368s 2025-12-04 17:28:19.363 31 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26320555] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=190480, randomLong=-8031557644152412748, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=10690, randomLong=5108027756721793277, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1380350, data=35, exception=null] OS Health Check Report - Complete (took 1023 ms)
node0 4.398s 2025-12-04 17:28:19.393 32 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node0 4.405s 2025-12-04 17:28:19.400 33 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node0 4.407s 2025-12-04 17:28:19.402 34 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node4 4.454s 2025-12-04 17:28:19.449 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 4.456s 2025-12-04 17:28:19.451 16 INFO STARTUP <main> StartupStateUtils: No saved states were found on disk.
node1 4.472s 2025-12-04 17:28:19.467 31 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26351573] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=191639, randomLong=-4299815896692735388, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=12250, randomLong=731202710424888387, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1327955, data=35, exception=null] OS Health Check Report - Complete (took 1022 ms)
node0 4.491s 2025-12-04 17:28:19.486 35 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "aMZLKQ==", "port": 30124 }, { "ipAddressV4": "CoAAYQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "I8EOdg==", "port": 30125 }, { "ipAddressV4": "CoAAYg==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "IirSGg==", "port": 30126 }, { "ipAddressV4": "CoAAXw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "I+9VTg==", "port": 30127 }, { "ipAddressV4": "CoAAYw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "Ih814w==", "port": 30128 }, { "ipAddressV4": "CoAAYA==", "port": 30128 }] }] }
node4 4.492s 2025-12-04 17:28:19.487 21 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node1 4.501s 2025-12-04 17:28:19.496 32 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node1 4.508s 2025-12-04 17:28:19.503 33 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node1 4.510s 2025-12-04 17:28:19.505 34 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node0 4.513s 2025-12-04 17:28:19.508 36 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/0/ConsistencyTestLog.csv
node0 4.514s 2025-12-04 17:28:19.509 37 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node0 4.528s 2025-12-04 17:28:19.523 38 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 00741c10f454d86bfb98ef6aea6349a48960475b4d4dc2acf941bcb64d429c1e178b7e234f63b44c1b23ea6d7e67a391 (root) VirtualMap state / attend-link-intact-verify {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":2,"lastLeafPath":4},"Singletons":{"RosterService.ROSTER_STATE":{"path":2,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":3,"mnemonic":"normal-stage-book-frozen"}}}
node0 4.531s 2025-12-04 17:28:19.526 40 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node1 4.595s 2025-12-04 17:28:19.590 35 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "aMZLKQ==", "port": 30124 }, { "ipAddressV4": "CoAAYQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "I8EOdg==", "port": 30125 }, { "ipAddressV4": "CoAAYg==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "IirSGg==", "port": 30126 }, { "ipAddressV4": "CoAAXw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "I+9VTg==", "port": 30127 }, { "ipAddressV4": "CoAAYw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "Ih814w==", "port": 30128 }, { "ipAddressV4": "CoAAYA==", "port": 30128 }] }] }
node1 4.617s 2025-12-04 17:28:19.612 36 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/1/ConsistencyTestLog.csv
node1 4.618s 2025-12-04 17:28:19.613 37 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node1 4.632s 2025-12-04 17:28:19.627 38 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 00741c10f454d86bfb98ef6aea6349a48960475b4d4dc2acf941bcb64d429c1e178b7e234f63b44c1b23ea6d7e67a391 (root) VirtualMap state / attend-link-intact-verify {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":2,"lastLeafPath":4},"Singletons":{"RosterService.ROSTER_STATE":{"path":2,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":3,"mnemonic":"normal-stage-book-frozen"}}}
node1 4.635s 2025-12-04 17:28:19.630 40 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node2 4.635s 2025-12-04 17:28:19.630 31 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26185642] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=186570, randomLong=-3445678682462611230, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=10450, randomLong=257051317155412105, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1187200, data=35, exception=null] OS Health Check Report - Complete (took 1021 ms)
node2 4.664s 2025-12-04 17:28:19.659 32 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node2 4.671s 2025-12-04 17:28:19.666 33 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node2 4.673s 2025-12-04 17:28:19.668 34 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node0 4.745s 2025-12-04 17:28:19.740 41 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node0 4.750s 2025-12-04 17:28:19.745 42 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node0 4.754s 2025-12-04 17:28:19.749 43 INFO STARTUP <<start-node-0>> ConsistencyTestingToolMain: init called in Main for node 0.
node0 4.754s 2025-12-04 17:28:19.749 44 INFO STARTUP <<start-node-0>> SwirldsPlatform: Starting platform 0
node0 4.756s 2025-12-04 17:28:19.751 45 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node2 4.756s 2025-12-04 17:28:19.751 35 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "aMZLKQ==", "port": 30124 }, { "ipAddressV4": "CoAAYQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "I8EOdg==", "port": 30125 }, { "ipAddressV4": "CoAAYg==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "IirSGg==", "port": 30126 }, { "ipAddressV4": "CoAAXw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "I+9VTg==", "port": 30127 }, { "ipAddressV4": "CoAAYw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "Ih814w==", "port": 30128 }, { "ipAddressV4": "CoAAYA==", "port": 30128 }] }] }
node0 4.759s 2025-12-04 17:28:19.754 46 INFO STARTUP <<start-node-0>> CycleFinder: No cyclical back pressure detected in wiring model.
node0 4.760s 2025-12-04 17:28:19.755 47 INFO STARTUP <<start-node-0>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node0 4.760s 2025-12-04 17:28:19.755 48 INFO STARTUP <<start-node-0>> InputWireChecks: All input wires have been bound.
node0 4.762s 2025-12-04 17:28:19.757 49 WARN STARTUP <<start-node-0>> PcesFileTracker: No preconsensus event files available
node0 4.762s 2025-12-04 17:28:19.757 50 INFO STARTUP <<start-node-0>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node0 4.764s 2025-12-04 17:28:19.759 51 INFO STARTUP <<start-node-0>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node0 4.765s 2025-12-04 17:28:19.760 52 INFO STARTUP <<app: appMain 0>> ConsistencyTestingToolMain: run called in Main.
node0 4.766s 2025-12-04 17:28:19.761 53 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 183.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node0 4.772s 2025-12-04 17:28:19.767 54 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node2 4.777s 2025-12-04 17:28:19.772 36 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/2/ConsistencyTestLog.csv
node2 4.778s 2025-12-04 17:28:19.773 37 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node2 4.793s 2025-12-04 17:28:19.788 38 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 00741c10f454d86bfb98ef6aea6349a48960475b4d4dc2acf941bcb64d429c1e178b7e234f63b44c1b23ea6d7e67a391 (root) VirtualMap state / attend-link-intact-verify {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":2,"lastLeafPath":4},"Singletons":{"RosterService.ROSTER_STATE":{"path":2,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":3,"mnemonic":"normal-stage-book-frozen"}}}
node2 4.796s 2025-12-04 17:28:19.791 40 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node1 4.848s 2025-12-04 17:28:19.843 41 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node1 4.852s 2025-12-04 17:28:19.847 42 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node1 4.856s 2025-12-04 17:28:19.851 43 INFO STARTUP <<start-node-1>> ConsistencyTestingToolMain: init called in Main for node 1.
node1 4.857s 2025-12-04 17:28:19.852 44 INFO STARTUP <<start-node-1>> SwirldsPlatform: Starting platform 1
node1 4.858s 2025-12-04 17:28:19.853 45 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node1 4.861s 2025-12-04 17:28:19.856 46 INFO STARTUP <<start-node-1>> CycleFinder: No cyclical back pressure detected in wiring model.
node1 4.862s 2025-12-04 17:28:19.857 47 INFO STARTUP <<start-node-1>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node1 4.863s 2025-12-04 17:28:19.858 48 INFO STARTUP <<start-node-1>> InputWireChecks: All input wires have been bound.
node1 4.865s 2025-12-04 17:28:19.860 49 WARN STARTUP <<start-node-1>> PcesFileTracker: No preconsensus event files available
node1 4.865s 2025-12-04 17:28:19.860 50 INFO STARTUP <<start-node-1>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node1 4.867s 2025-12-04 17:28:19.862 51 INFO STARTUP <<start-node-1>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node1 4.869s 2025-12-04 17:28:19.864 52 INFO STARTUP <<app: appMain 1>> ConsistencyTestingToolMain: run called in Main.
node1 4.870s 2025-12-04 17:28:19.865 53 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 183.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node1 4.876s 2025-12-04 17:28:19.871 54 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 5.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node2 4.998s 2025-12-04 17:28:19.993 41 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node2 5.003s 2025-12-04 17:28:19.998 42 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node2 5.006s 2025-12-04 17:28:20.001 43 INFO STARTUP <<start-node-2>> ConsistencyTestingToolMain: init called in Main for node 2.
node2 5.007s 2025-12-04 17:28:20.002 44 INFO STARTUP <<start-node-2>> SwirldsPlatform: Starting platform 2
node2 5.008s 2025-12-04 17:28:20.003 45 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node2 5.012s 2025-12-04 17:28:20.007 46 INFO STARTUP <<start-node-2>> CycleFinder: No cyclical back pressure detected in wiring model.
node2 5.013s 2025-12-04 17:28:20.008 47 INFO STARTUP <<start-node-2>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node2 5.013s 2025-12-04 17:28:20.008 48 INFO STARTUP <<start-node-2>> InputWireChecks: All input wires have been bound.
node2 5.015s 2025-12-04 17:28:20.010 49 WARN STARTUP <<start-node-2>> PcesFileTracker: No preconsensus event files available
node2 5.015s 2025-12-04 17:28:20.010 50 INFO STARTUP <<start-node-2>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node2 5.017s 2025-12-04 17:28:20.012 51 INFO STARTUP <<start-node-2>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node2 5.019s 2025-12-04 17:28:20.014 52 INFO STARTUP <<app: appMain 2>> ConsistencyTestingToolMain: run called in Main.
node2 5.021s 2025-12-04 17:28:20.016 53 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 174.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node2 5.025s 2025-12-04 17:28:20.020 54 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 3.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node3 5.079s 2025-12-04 17:28:20.074 24 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 5.081s 2025-12-04 17:28:20.076 26 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node3 5.087s 2025-12-04 17:28:20.082 28 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node3 5.098s 2025-12-04 17:28:20.093 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 5.101s 2025-12-04 17:28:20.096 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5.303s 2025-12-04 17:28:20.298 24 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5.306s 2025-12-04 17:28:20.301 27 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node4 5.312s 2025-12-04 17:28:20.307 28 INFO STARTUP <main> AddressBookInitializer: Starting from genesis: using the config address book.
node4 5.321s 2025-12-04 17:28:20.316 29 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5.324s 2025-12-04 17:28:20.319 30 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node3 6.222s 2025-12-04 17:28:21.217 31 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26288430] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=198240, randomLong=1529196534824568214, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=15920, randomLong=-3013231810499254307, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1395780, data=35, exception=null] OS Health Check Report - Complete (took 1029 ms)
node3 6.256s 2025-12-04 17:28:21.251 32 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node3 6.265s 2025-12-04 17:28:21.260 33 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node3 6.267s 2025-12-04 17:28:21.262 34 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node3 6.364s 2025-12-04 17:28:21.359 35 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "aMZLKQ==", "port": 30124 }, { "ipAddressV4": "CoAAYQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "I8EOdg==", "port": 30125 }, { "ipAddressV4": "CoAAYg==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "IirSGg==", "port": 30126 }, { "ipAddressV4": "CoAAXw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "I+9VTg==", "port": 30127 }, { "ipAddressV4": "CoAAYw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "Ih814w==", "port": 30128 }, { "ipAddressV4": "CoAAYA==", "port": 30128 }] }] }
node3 6.391s 2025-12-04 17:28:21.386 36 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/3/ConsistencyTestLog.csv
node3 6.392s 2025-12-04 17:28:21.387 37 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node3 6.409s 2025-12-04 17:28:21.404 38 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 00741c10f454d86bfb98ef6aea6349a48960475b4d4dc2acf941bcb64d429c1e178b7e234f63b44c1b23ea6d7e67a391 (root) VirtualMap state / attend-link-intact-verify {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":2,"lastLeafPath":4},"Singletons":{"RosterService.ROSTER_STATE":{"path":2,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":3,"mnemonic":"normal-stage-book-frozen"}}}
node3 6.412s 2025-12-04 17:28:21.407 40 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node4 6.449s 2025-12-04 17:28:21.444 31 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26245583] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=237740, randomLong=-7191231305566691514, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=43020, randomLong=4388440834360402770, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1432380, data=35, exception=null] OS Health Check Report - Complete (took 1024 ms)
node4 6.482s 2025-12-04 17:28:21.477 32 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node4 6.493s 2025-12-04 17:28:21.488 33 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node4 6.496s 2025-12-04 17:28:21.491 34 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node4 6.595s 2025-12-04 17:28:21.590 35 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "aMZLKQ==", "port": 30124 }, { "ipAddressV4": "CoAAYQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "I8EOdg==", "port": 30125 }, { "ipAddressV4": "CoAAYg==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "IirSGg==", "port": 30126 }, { "ipAddressV4": "CoAAXw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "I+9VTg==", "port": 30127 }, { "ipAddressV4": "CoAAYw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "Ih814w==", "port": 30128 }, { "ipAddressV4": "CoAAYA==", "port": 30128 }] }] }
node4 6.622s 2025-12-04 17:28:21.617 36 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 6.623s 2025-12-04 17:28:21.618 37 INFO STARTUP <main> TransactionHandlingHistory: No log file found. Starting without any previous history
node4 6.640s 2025-12-04 17:28:21.635 38 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 00741c10f454d86bfb98ef6aea6349a48960475b4d4dc2acf941bcb64d429c1e178b7e234f63b44c1b23ea6d7e67a391 (root) VirtualMap state / attend-link-intact-verify {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":2,"lastLeafPath":4},"Singletons":{"RosterService.ROSTER_STATE":{"path":2,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":3,"mnemonic":"normal-stage-book-frozen"}}}
node3 6.642s 2025-12-04 17:28:21.637 41 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node3 6.648s 2025-12-04 17:28:21.643 42 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node4 6.649s 2025-12-04 17:28:21.644 40 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node3 6.653s 2025-12-04 17:28:21.648 43 INFO STARTUP <<start-node-3>> ConsistencyTestingToolMain: init called in Main for node 3.
node3 6.654s 2025-12-04 17:28:21.649 44 INFO STARTUP <<start-node-3>> SwirldsPlatform: Starting platform 3
node3 6.655s 2025-12-04 17:28:21.650 45 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node3 6.659s 2025-12-04 17:28:21.654 46 INFO STARTUP <<start-node-3>> CycleFinder: No cyclical back pressure detected in wiring model.
node3 6.660s 2025-12-04 17:28:21.655 47 INFO STARTUP <<start-node-3>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node3 6.660s 2025-12-04 17:28:21.655 48 INFO STARTUP <<start-node-3>> InputWireChecks: All input wires have been bound.
node3 6.662s 2025-12-04 17:28:21.657 49 WARN STARTUP <<start-node-3>> PcesFileTracker: No preconsensus event files available
node3 6.662s 2025-12-04 17:28:21.657 50 INFO STARTUP <<start-node-3>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node3 6.664s 2025-12-04 17:28:21.659 51 INFO STARTUP <<start-node-3>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node3 6.665s 2025-12-04 17:28:21.660 52 INFO STARTUP <<app: appMain 3>> ConsistencyTestingToolMain: run called in Main.
node3 6.667s 2025-12-04 17:28:21.662 53 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 195.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node3 6.673s 2025-12-04 17:28:21.668 54 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 5.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node4 6.884s 2025-12-04 17:28:21.879 41 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b
node4 6.890s 2025-12-04 17:28:21.885 42 INFO STARTUP <platformForkJoinThread-2> Shadowgraph: Shadowgraph starting from expiration threshold 1
node4 6.895s 2025-12-04 17:28:21.890 43 INFO STARTUP <<start-node-4>> ConsistencyTestingToolMain: init called in Main for node 4.
node4 6.895s 2025-12-04 17:28:21.890 44 INFO STARTUP <<start-node-4>> SwirldsPlatform: Starting platform 4
node4 6.897s 2025-12-04 17:28:21.892 45 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node4 6.900s 2025-12-04 17:28:21.895 46 INFO STARTUP <<start-node-4>> CycleFinder: No cyclical back pressure detected in wiring model.
node4 6.901s 2025-12-04 17:28:21.896 47 INFO STARTUP <<start-node-4>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node4 6.902s 2025-12-04 17:28:21.897 48 INFO STARTUP <<start-node-4>> InputWireChecks: All input wires have been bound.
node4 6.904s 2025-12-04 17:28:21.899 49 WARN STARTUP <<start-node-4>> PcesFileTracker: No preconsensus event files available
node4 6.904s 2025-12-04 17:28:21.899 50 INFO STARTUP <<start-node-4>> SwirldsPlatform: replaying preconsensus event stream starting at 0
node4 6.906s 2025-12-04 17:28:21.901 51 INFO STARTUP <<start-node-4>> PcesReplayer: Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds.
node4 6.907s 2025-12-04 17:28:21.902 52 INFO STARTUP <<app: appMain 4>> ConsistencyTestingToolMain: run called in Main.
node4 6.909s 2025-12-04 17:28:21.904 53 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 185.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node4 6.915s 2025-12-04 17:28:21.910 54 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node0 7.769s 2025-12-04 17:28:22.764 55 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting0.csv' ]
node0 7.773s 2025-12-04 17:28:22.768 56 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node1 7.869s 2025-12-04 17:28:22.864 55 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting1.csv' ]
node1 7.872s 2025-12-04 17:28:22.867 56 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node2 8.017s 2025-12-04 17:28:23.012 55 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting2.csv' ]
node2 8.019s 2025-12-04 17:28:23.014 56 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node3 9.675s 2025-12-04 17:28:24.670 55 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting3.csv' ]
node3 9.680s 2025-12-04 17:28:24.675 56 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node4 9.908s 2025-12-04 17:28:24.903 55 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ]
node4 9.911s 2025-12-04 17:28:24.906 56 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node0 14.861s 2025-12-04 17:28:29.856 57 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node1 14.964s 2025-12-04 17:28:29.959 57 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node2 15.115s 2025-12-04 17:28:30.110 57 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node1 16.104s 2025-12-04 17:28:31.099 58 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node0 16.113s 2025-12-04 17:28:31.108 58 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node2 16.176s 2025-12-04 17:28:31.171 58 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node4 16.255s 2025-12-04 17:28:31.250 57 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node3 16.271s 2025-12-04 17:28:31.266 57 INFO STARTUP <<scheduler TransactionHandler>> DefaultTransactionHandler: Ignoring empty consensus round 1
node0 16.541s 2025-12-04 17:28:31.536 59 INFO PLATFORM_STATUS <platformForkJoinThread-7> StatusStateMachine: Platform spent 1.7 s in CHECKING. Now in ACTIVE
node1 16.543s 2025-12-04 17:28:31.538 59 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 1.6 s in CHECKING. Now in ACTIVE
node0 16.544s 2025-12-04 17:28:31.539 61 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node1 16.545s 2025-12-04 17:28:31.540 61 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node2 16.613s 2025-12-04 17:28:31.608 59 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 1.5 s in CHECKING. Now in ACTIVE
node2 16.615s 2025-12-04 17:28:31.610 61 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node4 16.685s 2025-12-04 17:28:31.680 59 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node3 16.701s 2025-12-04 17:28:31.696 59 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 2 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS
node3 16.763s 2025-12-04 17:28:31.758 74 INFO PLATFORM_STATUS <platformForkJoinThread-6> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node4 16.864s 2025-12-04 17:28:31.859 74 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2
node4 16.865s 2025-12-04 17:28:31.860 75 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node0 16.901s 2025-12-04 17:28:31.896 76 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2
node0 16.902s 2025-12-04 17:28:31.897 77 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node1 16.904s 2025-12-04 17:28:31.899 76 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2
node1 16.906s 2025-12-04 17:28:31.901 77 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node2 16.972s 2025-12-04 17:28:31.967 76 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2
node2 16.973s 2025-12-04 17:28:31.968 77 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node3 16.973s 2025-12-04 17:28:31.968 75 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 2 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2
node3 16.975s 2025-12-04 17:28:31.970 76 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node4 17.003s 2025-12-04 17:28:31.998 92 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 10.1 s in OBSERVING. Now in CHECKING
node4 17.101s 2025-12-04 17:28:32.096 109 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node4 17.105s 2025-12-04 17:28:32.100 110 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2 Timestamp: 2025-12-04T17:28:30.595960Z Next consensus number: 10 Legacy running event hash: 2416646e71a18e12b67238cd73f0e0c307927a681d2a5b192bddb9c46705d395825edc50e07676e5ef22a0ebe17cde4b Legacy running event mnemonic: lizard-miss-crouch-differ Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -799502542 Root hash: 39277694a4616d43679dcc84ecfdd57226da0eb0c28ec2cf412905d49d4ea948d1a91a7f7754e66e22348cba4f354fcb (root) VirtualMap state / ceiling-spoon-initial-position {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"sign-sword-tunnel-urge"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"paddle-robust-token-stove"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"route-often-isolate-clap"}}}
node0 17.126s 2025-12-04 17:28:32.121 111 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node0 17.129s 2025-12-04 17:28:32.124 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2 Timestamp: 2025-12-04T17:28:30.595960Z Next consensus number: 10 Legacy running event hash: 2416646e71a18e12b67238cd73f0e0c307927a681d2a5b192bddb9c46705d395825edc50e07676e5ef22a0ebe17cde4b Legacy running event mnemonic: lizard-miss-crouch-differ Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -799502542 Root hash: 39277694a4616d43679dcc84ecfdd57226da0eb0c28ec2cf412905d49d4ea948d1a91a7f7754e66e22348cba4f354fcb (root) VirtualMap state / ceiling-spoon-initial-position {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"sign-sword-tunnel-urge"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"paddle-robust-token-stove"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"route-often-isolate-clap"}}}
node1 17.131s 2025-12-04 17:28:32.126 111 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node1 17.134s 2025-12-04 17:28:32.129 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2 Timestamp: 2025-12-04T17:28:30.595960Z Next consensus number: 10 Legacy running event hash: 2416646e71a18e12b67238cd73f0e0c307927a681d2a5b192bddb9c46705d395825edc50e07676e5ef22a0ebe17cde4b Legacy running event mnemonic: lizard-miss-crouch-differ Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -799502542 Root hash: 39277694a4616d43679dcc84ecfdd57226da0eb0c28ec2cf412905d49d4ea948d1a91a7f7754e66e22348cba4f354fcb (root) VirtualMap state / ceiling-spoon-initial-position {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"sign-sword-tunnel-urge"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"paddle-robust-token-stove"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"route-often-isolate-clap"}}}
node4 17.143s 2025-12-04 17:28:32.138 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+28+30.131990948Z_seq0_minr1_maxr501_orgn0.pces
node4 17.143s 2025-12-04 17:28:32.138 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+28+30.131990948Z_seq0_minr1_maxr501_orgn0.pces
node4 17.144s 2025-12-04 17:28:32.139 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 17.145s 2025-12-04 17:28:32.140 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 17.151s 2025-12-04 17:28:32.146 117 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 17.165s 2025-12-04 17:28:32.160 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+28+29.884959208Z_seq0_minr1_maxr501_orgn0.pces
node0 17.166s 2025-12-04 17:28:32.161 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+28+29.884959208Z_seq0_minr1_maxr501_orgn0.pces
node0 17.166s 2025-12-04 17:28:32.161 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 17.167s 2025-12-04 17:28:32.162 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 17.170s 2025-12-04 17:28:32.165 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+28+29.993437732Z_seq0_minr1_maxr501_orgn0.pces
node1 17.171s 2025-12-04 17:28:32.166 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+28+29.993437732Z_seq0_minr1_maxr501_orgn0.pces
node1 17.171s 2025-12-04 17:28:32.166 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 17.172s 2025-12-04 17:28:32.167 117 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 17.172s 2025-12-04 17:28:32.167 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 17.178s 2025-12-04 17:28:32.173 117 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 17.201s 2025-12-04 17:28:32.196 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node2 17.204s 2025-12-04 17:28:32.199 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2 Timestamp: 2025-12-04T17:28:30.595960Z Next consensus number: 10 Legacy running event hash: 2416646e71a18e12b67238cd73f0e0c307927a681d2a5b192bddb9c46705d395825edc50e07676e5ef22a0ebe17cde4b Legacy running event mnemonic: lizard-miss-crouch-differ Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -799502542 Root hash: 39277694a4616d43679dcc84ecfdd57226da0eb0c28ec2cf412905d49d4ea948d1a91a7f7754e66e22348cba4f354fcb (root) VirtualMap state / ceiling-spoon-initial-position {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"sign-sword-tunnel-urge"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"paddle-robust-token-stove"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"route-often-isolate-clap"}}}
node3 17.227s 2025-12-04 17:28:32.222 110 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 2
node3 17.231s 2025-12-04 17:28:32.226 111 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 2 Timestamp: 2025-12-04T17:28:30.595960Z Next consensus number: 10 Legacy running event hash: 2416646e71a18e12b67238cd73f0e0c307927a681d2a5b192bddb9c46705d395825edc50e07676e5ef22a0ebe17cde4b Legacy running event mnemonic: lizard-miss-crouch-differ Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -799502542 Root hash: 39277694a4616d43679dcc84ecfdd57226da0eb0c28ec2cf412905d49d4ea948d1a91a7f7754e66e22348cba4f354fcb (root) VirtualMap state / ceiling-spoon-initial-position {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"sign-sword-tunnel-urge"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"paddle-robust-token-stove"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"route-often-isolate-clap"}}}
node2 17.237s 2025-12-04 17:28:32.232 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+28+30.137087296Z_seq0_minr1_maxr501_orgn0.pces
node2 17.238s 2025-12-04 17:28:32.233 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+28+30.137087296Z_seq0_minr1_maxr501_orgn0.pces
node2 17.238s 2025-12-04 17:28:32.233 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 17.239s 2025-12-04 17:28:32.234 117 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 17.245s 2025-12-04 17:28:32.240 118 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 17.271s 2025-12-04 17:28:32.266 112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+28+30.242973615Z_seq0_minr1_maxr501_orgn0.pces
node3 17.272s 2025-12-04 17:28:32.267 113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 1 File: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+28+30.242973615Z_seq0_minr1_maxr501_orgn0.pces
node3 17.273s 2025-12-04 17:28:32.268 114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 17.274s 2025-12-04 17:28:32.269 115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 17.281s 2025-12-04 17:28:32.276 116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 2 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2 {"round":2,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 18.931s 2025-12-04 17:28:33.926 146 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 2.2 s in CHECKING. Now in ACTIVE
node4 19.043s 2025-12-04 17:28:34.038 147 INFO PLATFORM_STATUS <platformForkJoinThread-4> StatusStateMachine: Platform spent 2.0 s in CHECKING. Now in ACTIVE
node2 46.520s 2025-12-04 17:29:01.515 769 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 65 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 46.559s 2025-12-04 17:29:01.554 790 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 65 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 46.598s 2025-12-04 17:29:01.593 776 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 65 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 46.658s 2025-12-04 17:29:01.653 792 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 65 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 46.668s 2025-12-04 17:29:01.663 767 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 65 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 46.739s 2025-12-04 17:29:01.734 796 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 65 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/65
node0 46.740s 2025-12-04 17:29:01.735 798 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 65 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/65
node1 46.740s 2025-12-04 17:29:01.735 797 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 65
node0 46.741s 2025-12-04 17:29:01.736 799 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 65
node1 46.821s 2025-12-04 17:29:01.816 830 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 65
node1 46.823s 2025-12-04 17:29:01.818 831 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 65 Timestamp: 2025-12-04T17:29:00.166142Z Next consensus number: 2296 Legacy running event hash: 9888efcd7009b7c987022560c8ce05d064723896feb95f0275d9ee1332c6293022884f01a82195cdb5b2431102110a28 Legacy running event mnemonic: upper-all-abuse-alone Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1866528949 Root hash: 4d963c40b30bdf585c0bcd045ba6786bde34fcbba8e4b245398d36c181bb98f43a92815cd0a94b4bd11727fd7bf58b47 (root) VirtualMap state / ready-marine-pact-bench {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"smile-peanut-desert-venture"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"marble-verify-tent-burger"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"metal-balcony-day-cushion"}}}
node0 46.829s 2025-12-04 17:29:01.824 842 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 65
node0 46.831s 2025-12-04 17:29:01.826 843 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 65 Timestamp: 2025-12-04T17:29:00.166142Z Next consensus number: 2296 Legacy running event hash: 9888efcd7009b7c987022560c8ce05d064723896feb95f0275d9ee1332c6293022884f01a82195cdb5b2431102110a28 Legacy running event mnemonic: upper-all-abuse-alone Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1866528949 Root hash: 4d963c40b30bdf585c0bcd045ba6786bde34fcbba8e4b245398d36c181bb98f43a92815cd0a94b4bd11727fd7bf58b47 (root) VirtualMap state / ready-marine-pact-bench {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"smile-peanut-desert-venture"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"marble-verify-tent-burger"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"metal-balcony-day-cushion"}}}
node1 46.833s 2025-12-04 17:29:01.828 832 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+28+29.993437732Z_seq0_minr1_maxr501_orgn0.pces
node1 46.834s 2025-12-04 17:29:01.829 833 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 37 File: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+28+29.993437732Z_seq0_minr1_maxr501_orgn0.pces
node1 46.834s 2025-12-04 17:29:01.829 834 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 46.836s 2025-12-04 17:29:01.831 835 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 46.837s 2025-12-04 17:29:01.832 836 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 65 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/65 {"round":65,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/65/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 46.839s 2025-12-04 17:29:01.834 773 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 65 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/65
node4 46.840s 2025-12-04 17:29:01.835 774 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 65
node0 46.844s 2025-12-04 17:29:01.839 844 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+28+29.884959208Z_seq0_minr1_maxr501_orgn0.pces
node0 46.845s 2025-12-04 17:29:01.840 845 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 37 File: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+28+29.884959208Z_seq0_minr1_maxr501_orgn0.pces
node0 46.846s 2025-12-04 17:29:01.841 846 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 46.848s 2025-12-04 17:29:01.843 847 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 46.849s 2025-12-04 17:29:01.844 848 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 65 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/65 {"round":65,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/65/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 46.898s 2025-12-04 17:29:01.893 785 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 65 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/65
node2 46.899s 2025-12-04 17:29:01.894 786 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 65
node4 46.930s 2025-12-04 17:29:01.925 817 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 65
node4 46.933s 2025-12-04 17:29:01.928 818 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 65 Timestamp: 2025-12-04T17:29:00.166142Z Next consensus number: 2296 Legacy running event hash: 9888efcd7009b7c987022560c8ce05d064723896feb95f0275d9ee1332c6293022884f01a82195cdb5b2431102110a28 Legacy running event mnemonic: upper-all-abuse-alone Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1866528949 Root hash: 4d963c40b30bdf585c0bcd045ba6786bde34fcbba8e4b245398d36c181bb98f43a92815cd0a94b4bd11727fd7bf58b47 (root) VirtualMap state / ready-marine-pact-bench {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"smile-peanut-desert-venture"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"marble-verify-tent-burger"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"metal-balcony-day-cushion"}}}
node4 46.941s 2025-12-04 17:29:01.936 819 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+28+30.131990948Z_seq0_minr1_maxr501_orgn0.pces
node4 46.942s 2025-12-04 17:29:01.937 820 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 37 File: data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+28+30.131990948Z_seq0_minr1_maxr501_orgn0.pces
node4 46.942s 2025-12-04 17:29:01.937 821 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 46.944s 2025-12-04 17:29:01.939 822 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 46.944s 2025-12-04 17:29:01.939 823 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 65 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/65 {"round":65,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/65/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 46.957s 2025-12-04 17:29:01.952 792 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 65 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/65
node3 46.958s 2025-12-04 17:29:01.953 793 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 65
node2 46.977s 2025-12-04 17:29:01.972 819 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 65
node2 46.979s 2025-12-04 17:29:01.974 820 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 65 Timestamp: 2025-12-04T17:29:00.166142Z Next consensus number: 2296 Legacy running event hash: 9888efcd7009b7c987022560c8ce05d064723896feb95f0275d9ee1332c6293022884f01a82195cdb5b2431102110a28 Legacy running event mnemonic: upper-all-abuse-alone Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1866528949 Root hash: 4d963c40b30bdf585c0bcd045ba6786bde34fcbba8e4b245398d36c181bb98f43a92815cd0a94b4bd11727fd7bf58b47 (root) VirtualMap state / ready-marine-pact-bench {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"smile-peanut-desert-venture"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"marble-verify-tent-burger"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"metal-balcony-day-cushion"}}}
node2 46.987s 2025-12-04 17:29:01.982 821 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+28+30.137087296Z_seq0_minr1_maxr501_orgn0.pces
node2 46.988s 2025-12-04 17:29:01.983 822 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 37 File: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+28+30.137087296Z_seq0_minr1_maxr501_orgn0.pces
node2 46.988s 2025-12-04 17:29:01.983 823 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 46.990s 2025-12-04 17:29:01.985 824 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 46.991s 2025-12-04 17:29:01.986 825 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 65 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/65 {"round":65,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/65/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 47.043s 2025-12-04 17:29:02.038 826 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 65
node3 47.046s 2025-12-04 17:29:02.041 827 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 65 Timestamp: 2025-12-04T17:29:00.166142Z Next consensus number: 2296 Legacy running event hash: 9888efcd7009b7c987022560c8ce05d064723896feb95f0275d9ee1332c6293022884f01a82195cdb5b2431102110a28 Legacy running event mnemonic: upper-all-abuse-alone Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1866528949 Root hash: 4d963c40b30bdf585c0bcd045ba6786bde34fcbba8e4b245398d36c181bb98f43a92815cd0a94b4bd11727fd7bf58b47 (root) VirtualMap state / ready-marine-pact-bench {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"smile-peanut-desert-venture"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"marble-verify-tent-burger"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"metal-balcony-day-cushion"}}}
node3 47.055s 2025-12-04 17:29:02.050 828 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+28+30.242973615Z_seq0_minr1_maxr501_orgn0.pces
node3 47.056s 2025-12-04 17:29:02.051 829 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 37 File: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+28+30.242973615Z_seq0_minr1_maxr501_orgn0.pces
node3 47.056s 2025-12-04 17:29:02.051 830 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 47.058s 2025-12-04 17:29:02.053 831 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 47.059s 2025-12-04 17:29:02.054 832 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 65 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/65 {"round":65,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/65/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 1m 46.306s 2025-12-04 17:30:01.301 2247 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 194 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 1m 46.308s 2025-12-04 17:30:01.303 2251 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 194 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 1m 46.340s 2025-12-04 17:30:01.335 2222 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 194 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 1m 46.477s 2025-12-04 17:30:01.472 2235 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 194 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/194
node2 1m 46.477s 2025-12-04 17:30:01.472 2236 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 194
node3 1m 46.482s 2025-12-04 17:30:01.477 2243 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 194 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 1m 46.490s 2025-12-04 17:30:01.485 2246 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 194 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/194
node3 1m 46.491s 2025-12-04 17:30:01.486 2247 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 194
node0 1m 46.547s 2025-12-04 17:30:01.542 2250 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 194 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/194
node0 1m 46.548s 2025-12-04 17:30:01.543 2251 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 194
node2 1m 46.557s 2025-12-04 17:30:01.552 2267 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 194
node2 1m 46.559s 2025-12-04 17:30:01.554 2268 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 194 Timestamp: 2025-12-04T17:30:00.371435Z Next consensus number: 7090 Legacy running event hash: 10a34f1e4ebf674abc85462c9c4b8b6d241248c58a6876739b8d972ae00f587876ba55689b1ac9a9effeb1f7517557db Legacy running event mnemonic: behave-toilet-subject-true Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 697432183 Root hash: 6afb0f37c301924b17717c2f65047aa4fecd960d2485657c09f1f84ad6929b278e5469bfdc258f20ec5691483e6b8e44 (root) VirtualMap state / garage-tragic-paper-repeat {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"cheese-speed-angry-update"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"unveil-mask-host-area"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"useless-hotel-spell-measure"}}}
node2 1m 46.566s 2025-12-04 17:30:01.561 2269 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+28+30.137087296Z_seq0_minr1_maxr501_orgn0.pces
node2 1m 46.567s 2025-12-04 17:30:01.562 2270 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 167 File: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+28+30.137087296Z_seq0_minr1_maxr501_orgn0.pces
node2 1m 46.567s 2025-12-04 17:30:01.562 2271 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 1m 46.572s 2025-12-04 17:30:01.567 2272 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 1m 46.572s 2025-12-04 17:30:01.567 2273 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 194 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/194 {"round":194,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/194/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 1m 46.583s 2025-12-04 17:30:01.578 2212 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 194 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 1m 46.588s 2025-12-04 17:30:01.583 2278 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 194
node3 1m 46.590s 2025-12-04 17:30:01.585 2279 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 194 Timestamp: 2025-12-04T17:30:00.371435Z Next consensus number: 7090 Legacy running event hash: 10a34f1e4ebf674abc85462c9c4b8b6d241248c58a6876739b8d972ae00f587876ba55689b1ac9a9effeb1f7517557db Legacy running event mnemonic: behave-toilet-subject-true Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 697432183 Root hash: 6afb0f37c301924b17717c2f65047aa4fecd960d2485657c09f1f84ad6929b278e5469bfdc258f20ec5691483e6b8e44 (root) VirtualMap state / garage-tragic-paper-repeat {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"cheese-speed-angry-update"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"unveil-mask-host-area"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"useless-hotel-spell-measure"}}}
node3 1m 46.597s 2025-12-04 17:30:01.592 2280 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+28+30.242973615Z_seq0_minr1_maxr501_orgn0.pces
node3 1m 46.598s 2025-12-04 17:30:01.593 2281 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 167 File: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+28+30.242973615Z_seq0_minr1_maxr501_orgn0.pces
node3 1m 46.598s 2025-12-04 17:30:01.593 2282 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 1m 46.603s 2025-12-04 17:30:01.598 2283 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 1m 46.603s 2025-12-04 17:30:01.598 2284 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 194 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/194 {"round":194,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/194/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 1m 46.623s 2025-12-04 17:30:01.618 2215 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 194 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/194
node4 1m 46.624s 2025-12-04 17:30:01.619 2216 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 194
node0 1m 46.630s 2025-12-04 17:30:01.625 2290 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 194
node0 1m 46.632s 2025-12-04 17:30:01.627 2291 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 194 Timestamp: 2025-12-04T17:30:00.371435Z Next consensus number: 7090 Legacy running event hash: 10a34f1e4ebf674abc85462c9c4b8b6d241248c58a6876739b8d972ae00f587876ba55689b1ac9a9effeb1f7517557db Legacy running event mnemonic: behave-toilet-subject-true Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 697432183 Root hash: 6afb0f37c301924b17717c2f65047aa4fecd960d2485657c09f1f84ad6929b278e5469bfdc258f20ec5691483e6b8e44 (root) VirtualMap state / garage-tragic-paper-repeat {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"cheese-speed-angry-update"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"unveil-mask-host-area"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"useless-hotel-spell-measure"}}}
node0 1m 46.639s 2025-12-04 17:30:01.634 2292 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+28+29.884959208Z_seq0_minr1_maxr501_orgn0.pces
node0 1m 46.640s 2025-12-04 17:30:01.635 2293 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 167 File: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+28+29.884959208Z_seq0_minr1_maxr501_orgn0.pces
node0 1m 46.640s 2025-12-04 17:30:01.635 2294 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 1m 46.645s 2025-12-04 17:30:01.640 2295 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 1m 46.645s 2025-12-04 17:30:01.640 2296 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 194 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/194 {"round":194,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/194/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 1m 46.674s 2025-12-04 17:30:01.669 2254 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 194 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/194
node1 1m 46.675s 2025-12-04 17:30:01.670 2255 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 194
node4 1m 46.712s 2025-12-04 17:30:01.707 2255 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 194
node4 1m 46.715s 2025-12-04 17:30:01.710 2256 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 194
Timestamp: 2025-12-04T17:30:00.371435Z
Next consensus number: 7090
Legacy running event hash: 10a34f1e4ebf674abc85462c9c4b8b6d241248c58a6876739b8d972ae00f587876ba55689b1ac9a9effeb1f7517557db
Legacy running event mnemonic: behave-toilet-subject-true
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 697432183
Root hash: 6afb0f37c301924b17717c2f65047aa4fecd960d2485657c09f1f84ad6929b278e5469bfdc258f20ec5691483e6b8e44
(root) VirtualMap state / garage-tragic-paper-repeat {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"cheese-speed-angry-update"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"unveil-mask-host-area"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"useless-hotel-spell-measure"}}}
node4 1m 46.723s 2025-12-04 17:30:01.718 2257 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+28+30.131990948Z_seq0_minr1_maxr501_orgn0.pces
node4 1m 46.723s 2025-12-04 17:30:01.718 2258 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 167 File: data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+28+30.131990948Z_seq0_minr1_maxr501_orgn0.pces
node4 1m 46.723s 2025-12-04 17:30:01.718 2259 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 1m 46.728s 2025-12-04 17:30:01.723 2260 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 1m 46.729s 2025-12-04 17:30:01.724 2261 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 194 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/194 {"round":194,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/194/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 1m 46.752s 2025-12-04 17:30:01.747 2286 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 194
node1 1m 46.754s 2025-12-04 17:30:01.749 2287 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 194
Timestamp: 2025-12-04T17:30:00.371435Z
Next consensus number: 7090
Legacy running event hash: 10a34f1e4ebf674abc85462c9c4b8b6d241248c58a6876739b8d972ae00f587876ba55689b1ac9a9effeb1f7517557db
Legacy running event mnemonic: behave-toilet-subject-true
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 697432183
Root hash: 6afb0f37c301924b17717c2f65047aa4fecd960d2485657c09f1f84ad6929b278e5469bfdc258f20ec5691483e6b8e44
(root) VirtualMap state / garage-tragic-paper-repeat {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"cheese-speed-angry-update"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"unveil-mask-host-area"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"useless-hotel-spell-measure"}}}
node1 1m 46.762s 2025-12-04 17:30:01.757 2288 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+28+29.993437732Z_seq0_minr1_maxr501_orgn0.pces
node1 1m 46.762s 2025-12-04 17:30:01.757 2289 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 167 File: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+28+29.993437732Z_seq0_minr1_maxr501_orgn0.pces
node1 1m 46.762s 2025-12-04 17:30:01.757 2290 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 1m 46.767s 2025-12-04 17:30:01.762 2291 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 1m 46.767s 2025-12-04 17:30:01.762 2292 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 194 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/194 {"round":194,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/194/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 2m 46.178s 2025-12-04 17:31:01.173 3745 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 326 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 2m 46.187s 2025-12-04 17:31:01.182 3757 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 326 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 2m 46.229s 2025-12-04 17:31:01.224 3702 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 326 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 2m 46.257s 2025-12-04 17:31:01.252 3692 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 326 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 2m 46.276s 2025-12-04 17:31:01.271 3701 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 326 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 2m 46.375s 2025-12-04 17:31:01.370 3748 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 326 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/326
node1 2m 46.376s 2025-12-04 17:31:01.371 3749 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 326
node3 2m 46.430s 2025-12-04 17:31:01.425 3760 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 326 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/326
node3 2m 46.430s 2025-12-04 17:31:01.425 3761 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 326
node0 2m 46.436s 2025-12-04 17:31:01.431 3704 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 326 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/326
node0 2m 46.437s 2025-12-04 17:31:01.432 3705 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 326
node1 2m 46.454s 2025-12-04 17:31:01.449 3780 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 326
node1 2m 46.457s 2025-12-04 17:31:01.452 3781 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 326
Timestamp: 2025-12-04T17:31:00.254963972Z
Next consensus number: 11885
Legacy running event hash: f37479c28460ffe28fdad6a0259586c865ca3f2022340b65511502ebe0e54682a12698af2642bd5aaaabac0a04f14978
Legacy running event mnemonic: muscle-debate-inject-adult
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1759060994
Root hash: eefff53a526395f8b27432f70333e091ecb078c623ee01b485ee69a7edcba5728d5bb1ac1fc77b52d534e69ab378b87f
(root) VirtualMap state / cage-flee-erase-donate {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"excuse-song-surface-flame"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"nice-nation-else-permit"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"trigger-alpha-thrive-light"}}}
node1 2m 46.464s 2025-12-04 17:31:01.459 3783 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+28+29.993437732Z_seq0_minr1_maxr501_orgn0.pces
node1 2m 46.464s 2025-12-04 17:31:01.459 3791 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 299 File: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+28+29.993437732Z_seq0_minr1_maxr501_orgn0.pces
node1 2m 46.464s 2025-12-04 17:31:01.459 3792 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 2m 46.472s 2025-12-04 17:31:01.467 3793 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 2m 46.473s 2025-12-04 17:31:01.468 3794 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 326 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/326 {"round":326,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/326/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 2m 46.495s 2025-12-04 17:31:01.490 3695 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 326 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/326
node4 2m 46.495s 2025-12-04 17:31:01.490 3696 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 326
node3 2m 46.516s 2025-12-04 17:31:01.511 3800 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 326
node3 2m 46.518s 2025-12-04 17:31:01.513 3801 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 326
Timestamp: 2025-12-04T17:31:00.254963972Z
Next consensus number: 11885
Legacy running event hash: f37479c28460ffe28fdad6a0259586c865ca3f2022340b65511502ebe0e54682a12698af2642bd5aaaabac0a04f14978
Legacy running event mnemonic: muscle-debate-inject-adult
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1759060994
Root hash: eefff53a526395f8b27432f70333e091ecb078c623ee01b485ee69a7edcba5728d5bb1ac1fc77b52d534e69ab378b87f
(root) VirtualMap state / cage-flee-erase-donate {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"excuse-song-surface-flame"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"nice-nation-else-permit"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"trigger-alpha-thrive-light"}}}
node0 2m 46.521s 2025-12-04 17:31:01.516 3736 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 326
node0 2m 46.523s 2025-12-04 17:31:01.518 3737 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 326
Timestamp: 2025-12-04T17:31:00.254963972Z
Next consensus number: 11885
Legacy running event hash: f37479c28460ffe28fdad6a0259586c865ca3f2022340b65511502ebe0e54682a12698af2642bd5aaaabac0a04f14978
Legacy running event mnemonic: muscle-debate-inject-adult
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1759060994
Root hash: eefff53a526395f8b27432f70333e091ecb078c623ee01b485ee69a7edcba5728d5bb1ac1fc77b52d534e69ab378b87f
(root) VirtualMap state / cage-flee-erase-donate {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"excuse-song-surface-flame"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"nice-nation-else-permit"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"trigger-alpha-thrive-light"}}}
node3 2m 46.525s 2025-12-04 17:31:01.520 3802 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+28+30.242973615Z_seq0_minr1_maxr501_orgn0.pces
node3 2m 46.526s 2025-12-04 17:31:01.521 3803 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 299 File: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+28+30.242973615Z_seq0_minr1_maxr501_orgn0.pces
node3 2m 46.526s 2025-12-04 17:31:01.521 3804 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 2m 46.530s 2025-12-04 17:31:01.525 3738 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+28+29.884959208Z_seq0_minr1_maxr501_orgn0.pces
node0 2m 46.530s 2025-12-04 17:31:01.525 3739 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 299 File: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+28+29.884959208Z_seq0_minr1_maxr501_orgn0.pces
node0 2m 46.530s 2025-12-04 17:31:01.525 3740 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 2m 46.534s 2025-12-04 17:31:01.529 3805 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 2m 46.535s 2025-12-04 17:31:01.530 3806 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 326 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/326 {"round":326,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/326/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 2m 46.539s 2025-12-04 17:31:01.534 3741 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 2m 46.539s 2025-12-04 17:31:01.534 3742 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 326 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/326 {"round":326,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/326/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 2m 46.577s 2025-12-04 17:31:01.572 3735 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 326
node4 2m 46.579s 2025-12-04 17:31:01.574 3736 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 326
Timestamp: 2025-12-04T17:31:00.254963972Z
Next consensus number: 11885
Legacy running event hash: f37479c28460ffe28fdad6a0259586c865ca3f2022340b65511502ebe0e54682a12698af2642bd5aaaabac0a04f14978
Legacy running event mnemonic: muscle-debate-inject-adult
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1759060994
Root hash: eefff53a526395f8b27432f70333e091ecb078c623ee01b485ee69a7edcba5728d5bb1ac1fc77b52d534e69ab378b87f
(root) VirtualMap state / cage-flee-erase-donate {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"excuse-song-surface-flame"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"nice-nation-else-permit"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"trigger-alpha-thrive-light"}}}
node4 2m 46.587s 2025-12-04 17:31:01.582 3737 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+28+30.131990948Z_seq0_minr1_maxr501_orgn0.pces
node4 2m 46.587s 2025-12-04 17:31:01.582 3738 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 299 File: data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+28+30.131990948Z_seq0_minr1_maxr501_orgn0.pces
node4 2m 46.587s 2025-12-04 17:31:01.582 3739 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 2m 46.592s 2025-12-04 17:31:01.587 3705 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 326 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/326
node2 2m 46.593s 2025-12-04 17:31:01.588 3706 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 326
node4 2m 46.595s 2025-12-04 17:31:01.590 3740 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 2m 46.596s 2025-12-04 17:31:01.591 3741 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 326 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/326 {"round":326,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/326/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 2m 46.672s 2025-12-04 17:31:01.667 3745 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 326
node2 2m 46.674s 2025-12-04 17:31:01.669 3746 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 326
Timestamp: 2025-12-04T17:31:00.254963972Z
Next consensus number: 11885
Legacy running event hash: f37479c28460ffe28fdad6a0259586c865ca3f2022340b65511502ebe0e54682a12698af2642bd5aaaabac0a04f14978
Legacy running event mnemonic: muscle-debate-inject-adult
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1759060994
Root hash: eefff53a526395f8b27432f70333e091ecb078c623ee01b485ee69a7edcba5728d5bb1ac1fc77b52d534e69ab378b87f
(root) VirtualMap state / cage-flee-erase-donate {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"excuse-song-surface-flame"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"nice-nation-else-permit"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"trigger-alpha-thrive-light"}}}
node2 2m 46.683s 2025-12-04 17:31:01.678 3747 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+28+30.137087296Z_seq0_minr1_maxr501_orgn0.pces
node2 2m 46.683s 2025-12-04 17:31:01.678 3748 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 299 File: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+28+30.137087296Z_seq0_minr1_maxr501_orgn0.pces
node2 2m 46.683s 2025-12-04 17:31:01.678 3749 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 2m 46.691s 2025-12-04 17:31:01.686 3750 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 2m 46.692s 2025-12-04 17:31:01.687 3751 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 326 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/326 {"round":326,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/326/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 3m 14.374s 2025-12-04 17:31:29.369 4388 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 0 to 4>> NetworkUtils: Connection broken: 0 -> 4
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-04T17:31:29.367832511Z
	at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
	at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
	at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
	at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
	at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
	at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
	at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
	at java.base/java.lang.Thread.run(Thread.java:1583)
	Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
		at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
		at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
		at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
		... 8 more
	Caused by: java.net.SocketException: Connection or outbound has closed
		at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
		at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
		at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
		at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
		at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
		at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
		at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
		at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
		at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
		at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
		at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
		at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
		at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
		at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
		... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
	at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
	... 8 more
Caused by: java.net.SocketException: Connection reset
	at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
	at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
	at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
	at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
	at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
	at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
	at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
	at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
	at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
	at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
	at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
	at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
	at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
	at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
	at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
	at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
	at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
	at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
	at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
	at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	... 2 more
node2 3m 14.374s 2025-12-04 17:31:29.369 4405 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 2 to 4>> NetworkUtils: Connection broken: 2 -> 4
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-04T17:31:29.366472788Z
	at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
	at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
	at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
	at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
	at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
	at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
	at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
	at java.base/java.lang.Thread.run(Thread.java:1583)
	Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
		at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
		at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
		at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
		... 8 more
	Caused by: java.net.SocketException: Connection or outbound has closed
		at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
		at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
		at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
		at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
		at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
		at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
		at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
		at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
		at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
		at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
		at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
		at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
		at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
		at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
		... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
	at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
	... 8 more
Caused by: java.net.SocketException: Connection reset
	at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
	at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
	at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
	at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
	at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
	at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
	at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
	at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
	at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
	at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
	at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
	at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
	at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
	at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
	at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
	at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
	at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
	at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
	at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
	at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	... 2 more
node1 3m 14.375s 2025-12-04 17:31:29.370 4442 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 1 to 4>> NetworkUtils: Connection broken: 1 -> 4
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-04T17:31:29.366412191Z
	at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
	at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
	at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
	at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
	at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
	at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
	at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
	at java.base/java.lang.Thread.run(Thread.java:1583)
	Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
		at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
		at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
		at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
		... 8 more
	Caused by: java.net.SocketException: Connection or outbound has closed
		at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
		at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
		at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
		at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
		at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
		at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
		at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
		at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
		at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
		at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
		at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
		at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
		at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
		at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
		... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
	at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
	... 8 more
Caused by: java.net.SocketException: Connection reset
	at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
	at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
	at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
	at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
	at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
	at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
	at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
	at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
	at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
	at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
	at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
	at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
	at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
	at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
	at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
	at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
	at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
	at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
	at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
	at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	... 2 more
node3 3m 14.376s 2025-12-04 17:31:29.371 4448 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith4 3 to 4>> NetworkUtils: Connection broken: 3 -> 4
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-04T17:31:29.366647660Z
	at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
	at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
	at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
	at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
	at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
	at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
	at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
	at java.base/java.lang.Thread.run(Thread.java:1583)
	Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
		at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
		at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
		at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
		... 8 more
	Caused by: java.net.SocketException: Connection or outbound has closed
		at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
		at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
		at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
		at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
		at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
		at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
		at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
		at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
		at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
		at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
		at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
		at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
		at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
		at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
		... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
	at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
	... 8 more
Caused by: java.net.SocketException: Connection reset
	at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
	at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
	at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
	at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
	at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
	at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
	at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
	at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
	at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
	at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
	at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
	at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
	at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
	at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
	at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
	at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
	at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
	at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
	at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
	at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
	at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	... 2 more
node1 3m 46.117s 2025-12-04 17:32:01.112 5326 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 460 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 3m 46.138s 2025-12-04 17:32:01.133 5263 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 460 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 3m 46.146s 2025-12-04 17:32:01.141 5274 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 460 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 3m 46.186s 2025-12-04 17:32:01.181 5204 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 460 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 3m 46.289s 2025-12-04 17:32:01.284 5207 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 460 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/460
node0 3m 46.290s 2025-12-04 17:32:01.285 5208 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 460
node3 3m 46.314s 2025-12-04 17:32:01.309 5277 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 460 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/460
node3 3m 46.315s 2025-12-04 17:32:01.310 5278 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 460
node0 3m 46.369s 2025-12-04 17:32:01.364 5247 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 460
node0 3m 46.371s 2025-12-04 17:32:01.366 5248 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 460
Timestamp: 2025-12-04T17:32:00.266125Z
Next consensus number: 15916
Legacy running event hash: 4425edd5f8869cd0b7538d9a3109945b629242895fd9c5864387995e6791d6146d947160ee865d41bc869a5fb6a578be
Legacy running event mnemonic: lucky-earth-urban-gravity
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -692944818
Root hash: 8374b67d269b23cbef8d305156a18f374fc186931218fa441e4d3787f55107e878585547db6475119cb19a793a34681f
(root) VirtualMap state / muscle-corn-square-just {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"island-clog-agree-present"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"puppy-release-exercise-envelope"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"biology-room-mistake-walnut"}}}
node0 3m 46.377s 2025-12-04 17:32:01.372 5249 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+28+29.884959208Z_seq0_minr1_maxr501_orgn0.pces
node0 3m 46.378s 2025-12-04 17:32:01.373 5250 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 433
File: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+28+29.884959208Z_seq0_minr1_maxr501_orgn0.pces
node0 3m 46.378s 2025-12-04 17:32:01.373 5251 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 3m 46.389s 2025-12-04 17:32:01.384 5252 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 3m 46.389s 2025-12-04 17:32:01.384 5253 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 460 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/460 {"round":460,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/460/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 3m 46.397s 2025-12-04 17:32:01.392 5309 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 460
node3 3m 46.399s 2025-12-04 17:32:01.394 5310 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 460
Timestamp: 2025-12-04T17:32:00.266125Z
Next consensus number: 15916
Legacy running event hash: 4425edd5f8869cd0b7538d9a3109945b629242895fd9c5864387995e6791d6146d947160ee865d41bc869a5fb6a578be
Legacy running event mnemonic: lucky-earth-urban-gravity
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -692944818
Root hash: 8374b67d269b23cbef8d305156a18f374fc186931218fa441e4d3787f55107e878585547db6475119cb19a793a34681f
(root) VirtualMap state / muscle-corn-square-just {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"island-clog-agree-present"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"puppy-release-exercise-envelope"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"biology-room-mistake-walnut"}}}
node3 3m 46.406s 2025-12-04 17:32:01.401 5311 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+28+30.242973615Z_seq0_minr1_maxr501_orgn0.pces
node3 3m 46.406s 2025-12-04 17:32:01.401 5312 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 433
File: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+28+30.242973615Z_seq0_minr1_maxr501_orgn0.pces
node3 3m 46.406s 2025-12-04 17:32:01.401 5313 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 3m 46.418s 2025-12-04 17:32:01.413 5314 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 3m 46.418s 2025-12-04 17:32:01.413 5315 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 460 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/460 {"round":460,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/460/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 3m 46.480s 2025-12-04 17:32:01.475 5329 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 460 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/460
node1 3m 46.481s 2025-12-04 17:32:01.476 5330 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 460
node2 3m 46.506s 2025-12-04 17:32:01.501 5276 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 460 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/460
node2 3m 46.507s 2025-12-04 17:32:01.502 5277 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 460
node1 3m 46.563s 2025-12-04 17:32:01.558 5364 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 460
node1 3m 46.565s 2025-12-04 17:32:01.560 5365 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 460
Timestamp: 2025-12-04T17:32:00.266125Z
Next consensus number: 15916
Legacy running event hash: 4425edd5f8869cd0b7538d9a3109945b629242895fd9c5864387995e6791d6146d947160ee865d41bc869a5fb6a578be
Legacy running event mnemonic: lucky-earth-urban-gravity
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -692944818
Root hash: 8374b67d269b23cbef8d305156a18f374fc186931218fa441e4d3787f55107e878585547db6475119cb19a793a34681f
(root) VirtualMap state / muscle-corn-square-just {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"island-clog-agree-present"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"puppy-release-exercise-envelope"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"biology-room-mistake-walnut"}}}
node1 3m 46.572s 2025-12-04 17:32:01.567 5366 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+28+29.993437732Z_seq0_minr1_maxr501_orgn0.pces
node1 3m 46.573s 2025-12-04 17:32:01.568 5367 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 433
File: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+28+29.993437732Z_seq0_minr1_maxr501_orgn0.pces
node1 3m 46.573s 2025-12-04 17:32:01.568 5368 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 3m 46.584s 2025-12-04 17:32:01.579 5369 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 3m 46.584s 2025-12-04 17:32:01.579 5370 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 460 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/460 {"round":460,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/460/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 3m 46.584s 2025-12-04 17:32:01.579 5311 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 460
node2 3m 46.586s 2025-12-04 17:32:01.581 5312 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 460
Timestamp: 2025-12-04T17:32:00.266125Z
Next consensus number: 15916
Legacy running event hash: 4425edd5f8869cd0b7538d9a3109945b629242895fd9c5864387995e6791d6146d947160ee865d41bc869a5fb6a578be
Legacy running event mnemonic: lucky-earth-urban-gravity
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -692944818
Root hash: 8374b67d269b23cbef8d305156a18f374fc186931218fa441e4d3787f55107e878585547db6475119cb19a793a34681f
(root) VirtualMap state / muscle-corn-square-just {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"island-clog-agree-present"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"puppy-release-exercise-envelope"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"biology-room-mistake-walnut"}}}
node2 3m 46.593s 2025-12-04 17:32:01.588 5313 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+28+30.137087296Z_seq0_minr1_maxr501_orgn0.pces
node2 3m 46.593s 2025-12-04 17:32:01.588 5314 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 433
File: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+28+30.137087296Z_seq0_minr1_maxr501_orgn0.pces
node2 3m 46.593s 2025-12-04 17:32:01.588 5315 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 3m 46.604s 2025-12-04 17:32:01.599 5316 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 3m 46.604s 2025-12-04 17:32:01.599 5317 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 460 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/460 {"round":460,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/460/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 4m 46.336s 2025-12-04 17:33:01.331 6921 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 599 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 4m 46.369s 2025-12-04 17:33:01.364 6878 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 599 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 4m 46.386s 2025-12-04 17:33:01.381 6777 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 599 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 4m 46.418s 2025-12-04 17:33:01.413 6945 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 599 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 4m 46.513s 2025-12-04 17:33:01.508 6948 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 599 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/599
node1 4m 46.513s 2025-12-04 17:33:01.508 6949 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 599
node0 4m 46.518s 2025-12-04 17:33:01.513 6780 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 599 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/599
node0 4m 46.519s 2025-12-04 17:33:01.514 6781 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 599
node2 4m 46.583s 2025-12-04 17:33:01.578 6881 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 599 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/599
node2 4m 46.583s 2025-12-04 17:33:01.578 6882 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 599
node1 4m 46.592s 2025-12-04 17:33:01.587 6980 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 599
node1 4m 46.594s 2025-12-04 17:33:01.589 6981 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 599
Timestamp: 2025-12-04T17:33:00.423587Z
Next consensus number: 19216
Legacy running event hash: 9ef2201c1c6337035573d18e395c40ecb9ab5ff1a58761a3bde4dfcee2119c03f5552a0c20702d420cd2c4e306af0d06
Legacy running event mnemonic: canyon-adjust-midnight-there
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -994860905
Root hash: bdb89fdb471069f2f42e32ca4456a9c795d3a077f6617e7a9f29d6a3752f58f44ba8c3006e409de36f2f2f3f606748ba
(root) VirtualMap state / empower-again-tilt-dinner {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"blur-nuclear-stool-note"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"off-traffic-recipe-wool"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"infant-ancient-enforce-wink"}}}
node3 4m 46.598s 2025-12-04 17:33:01.593 6924 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 599 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/599
node3 4m 46.598s 2025-12-04 17:33:01.593 6925 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 599
node0 4m 46.599s 2025-12-04 17:33:01.594 6820 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 599
node0 4m 46.602s 2025-12-04 17:33:01.597 6821 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 599 Timestamp: 2025-12-04T17:33:00.423587Z Next consensus number: 19216 Legacy running event hash: 9ef2201c1c6337035573d18e395c40ecb9ab5ff1a58761a3bde4dfcee2119c03f5552a0c20702d420cd2c4e306af0d06 Legacy running event mnemonic: canyon-adjust-midnight-there Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -994860905 Root hash: bdb89fdb471069f2f42e32ca4456a9c795d3a077f6617e7a9f29d6a3752f58f44ba8c3006e409de36f2f2f3f606748ba (root) VirtualMap state / empower-again-tilt-dinner {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"blur-nuclear-stool-note"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"off-traffic-recipe-wool"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"infant-ancient-enforce-wink"}}}
node1 4m 46.603s 2025-12-04 17:33:01.598 6982 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+32+18.932327968Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+28+29.993437732Z_seq0_minr1_maxr501_orgn0.pces
node1 4m 46.603s 2025-12-04 17:33:01.598 6983 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 572 File: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+32+18.932327968Z_seq1_minr474_maxr5474_orgn0.pces
node1 4m 46.603s 2025-12-04 17:33:01.598 6984 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 4m 46.605s 2025-12-04 17:33:01.600 6985 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 4m 46.606s 2025-12-04 17:33:01.601 6986 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 599 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/599 {"round":599,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/599/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 4m 46.607s 2025-12-04 17:33:01.602 6987 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/2
node0 4m 46.610s 2025-12-04 17:33:01.605 6822 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+28+29.884959208Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+32+18.967805924Z_seq1_minr473_maxr5473_orgn0.pces
node0 4m 46.610s 2025-12-04 17:33:01.605 6823 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 572 File: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+32+18.967805924Z_seq1_minr473_maxr5473_orgn0.pces
node0 4m 46.610s 2025-12-04 17:33:01.605 6824 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 4m 46.612s 2025-12-04 17:33:01.607 6825 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 4m 46.612s 2025-12-04 17:33:01.607 6826 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 599 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/599 {"round":599,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/599/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 4m 46.614s 2025-12-04 17:33:01.609 6827 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/2
node2 4m 46.667s 2025-12-04 17:33:01.662 6921 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 599
node2 4m 46.669s 2025-12-04 17:33:01.664 6922 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 599 Timestamp: 2025-12-04T17:33:00.423587Z Next consensus number: 19216 Legacy running event hash: 9ef2201c1c6337035573d18e395c40ecb9ab5ff1a58761a3bde4dfcee2119c03f5552a0c20702d420cd2c4e306af0d06 Legacy running event mnemonic: canyon-adjust-midnight-there Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -994860905 Root hash: bdb89fdb471069f2f42e32ca4456a9c795d3a077f6617e7a9f29d6a3752f58f44ba8c3006e409de36f2f2f3f606748ba (root) VirtualMap state / empower-again-tilt-dinner {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"blur-nuclear-stool-note"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"off-traffic-recipe-wool"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"infant-ancient-enforce-wink"}}}
node2 4m 46.675s 2025-12-04 17:33:01.670 6923 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+28+30.137087296Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+32+18.973803562Z_seq1_minr474_maxr5474_orgn0.pces
node2 4m 46.676s 2025-12-04 17:33:01.671 6924 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 572 File: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+32+18.973803562Z_seq1_minr474_maxr5474_orgn0.pces
node2 4m 46.676s 2025-12-04 17:33:01.671 6925 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 4m 46.677s 2025-12-04 17:33:01.672 6964 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 599
node2 4m 46.678s 2025-12-04 17:33:01.673 6926 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 4m 46.678s 2025-12-04 17:33:01.673 6927 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 599 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/599 {"round":599,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/599/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 4m 46.679s 2025-12-04 17:33:01.674 6965 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 599 Timestamp: 2025-12-04T17:33:00.423587Z Next consensus number: 19216 Legacy running event hash: 9ef2201c1c6337035573d18e395c40ecb9ab5ff1a58761a3bde4dfcee2119c03f5552a0c20702d420cd2c4e306af0d06 Legacy running event mnemonic: canyon-adjust-midnight-there Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -994860905 Root hash: bdb89fdb471069f2f42e32ca4456a9c795d3a077f6617e7a9f29d6a3752f58f44ba8c3006e409de36f2f2f3f606748ba (root) VirtualMap state / empower-again-tilt-dinner {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"blur-nuclear-stool-note"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"off-traffic-recipe-wool"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"infant-ancient-enforce-wink"}}}
node2 4m 46.680s 2025-12-04 17:33:01.675 6928 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/2
node3 4m 46.686s 2025-12-04 17:33:01.681 6966 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+32+18.881235278Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+28+30.242973615Z_seq0_minr1_maxr501_orgn0.pces
node3 4m 46.686s 2025-12-04 17:33:01.681 6967 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 572 File: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+32+18.881235278Z_seq1_minr474_maxr5474_orgn0.pces
node3 4m 46.686s 2025-12-04 17:33:01.681 6968 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 4m 46.688s 2025-12-04 17:33:01.683 6969 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 4m 46.688s 2025-12-04 17:33:01.683 6970 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 599 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/599 {"round":599,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/599/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 4m 46.690s 2025-12-04 17:33:01.685 6971 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/2
node2 5m 45.859s 2025-12-04 17:34:00.854 8465 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 737 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 5m 45.936s 2025-12-04 17:34:00.931 8344 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 737 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 5m 45.958s 2025-12-04 17:34:00.953 8518 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 737 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 5m 45.986s 2025-12-04 17:34:00.981 8530 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 737 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 5m 46.124s 2025-12-04 17:34:01.119 8533 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 737 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/737
node3 5m 46.124s 2025-12-04 17:34:01.119 8534 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 737
node1 5m 46.138s 2025-12-04 17:34:01.133 8521 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 737 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/737
node1 5m 46.139s 2025-12-04 17:34:01.134 8522 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 737
node0 5m 46.168s 2025-12-04 17:34:01.163 8347 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 737 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/737
node0 5m 46.169s 2025-12-04 17:34:01.164 8348 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 737
node3 5m 46.200s 2025-12-04 17:34:01.195 8565 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 737
node3 5m 46.203s 2025-12-04 17:34:01.198 8566 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 737 Timestamp: 2025-12-04T17:34:00.016159952Z Next consensus number: 22529 Legacy running event hash: a883ecf2f5e6544b92a148446cdb2b3cfe5fa75b9523e03d73eeeb6564b25f2f44b7ab9817bdfbe2a6a9a30770584ec3 Legacy running event mnemonic: use-gift-drive-maximum Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1677216484 Root hash: 23e3b9c2993097a003fb79f96b9ac6869f0f3acf898e2245734cbbb08aeaf824b5c4f32a16ac649fee3705b2e89416c9 (root) VirtualMap state / ladder-sick-sure-army {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"churn-flame-denial-siren"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"quit-bubble-below-blast"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"color-hip-tennis-visit"}}}
node3 5m 46.208s 2025-12-04 17:34:01.203 8567 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+32+18.881235278Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+28+30.242973615Z_seq0_minr1_maxr501_orgn0.pces
node3 5m 46.208s 2025-12-04 17:34:01.203 8568 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 710 File: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+32+18.881235278Z_seq1_minr474_maxr5474_orgn0.pces
node3 5m 46.208s 2025-12-04 17:34:01.203 8569 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 5m 46.213s 2025-12-04 17:34:01.208 8570 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 5m 46.213s 2025-12-04 17:34:01.208 8571 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 737 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/737 {"round":737,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/737/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 5m 46.214s 2025-12-04 17:34:01.209 8572 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/65
node1 5m 46.215s 2025-12-04 17:34:01.210 8553 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 737
node1 5m 46.217s 2025-12-04 17:34:01.212 8554 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 737 Timestamp: 2025-12-04T17:34:00.016159952Z Next consensus number: 22529 Legacy running event hash: a883ecf2f5e6544b92a148446cdb2b3cfe5fa75b9523e03d73eeeb6564b25f2f44b7ab9817bdfbe2a6a9a30770584ec3 Legacy running event mnemonic: use-gift-drive-maximum Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1677216484 Root hash: 23e3b9c2993097a003fb79f96b9ac6869f0f3acf898e2245734cbbb08aeaf824b5c4f32a16ac649fee3705b2e89416c9 (root) VirtualMap state / ladder-sick-sure-army {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"churn-flame-denial-siren"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"quit-bubble-below-blast"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"color-hip-tennis-visit"}}}
node1 5m 46.224s 2025-12-04 17:34:01.219 8555 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+32+18.932327968Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+28+29.993437732Z_seq0_minr1_maxr501_orgn0.pces
node1 5m 46.225s 2025-12-04 17:34:01.220 8556 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 710 File: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+32+18.932327968Z_seq1_minr474_maxr5474_orgn0.pces
node1 5m 46.225s 2025-12-04 17:34:01.220 8557 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 5m 46.227s 2025-12-04 17:34:01.222 8468 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 737 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/737
node2 5m 46.228s 2025-12-04 17:34:01.223 8469 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 737
node1 5m 46.229s 2025-12-04 17:34:01.224 8558 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 5m 46.230s 2025-12-04 17:34:01.225 8559 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 737 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/737 {"round":737,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/737/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 5m 46.231s 2025-12-04 17:34:01.226 8560 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/65
node0 5m 46.248s 2025-12-04 17:34:01.243 8379 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 737
node0 5m 46.251s 2025-12-04 17:34:01.246 8380 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 737 Timestamp: 2025-12-04T17:34:00.016159952Z Next consensus number: 22529 Legacy running event hash: a883ecf2f5e6544b92a148446cdb2b3cfe5fa75b9523e03d73eeeb6564b25f2f44b7ab9817bdfbe2a6a9a30770584ec3 Legacy running event mnemonic: use-gift-drive-maximum Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1677216484 Root hash: 23e3b9c2993097a003fb79f96b9ac6869f0f3acf898e2245734cbbb08aeaf824b5c4f32a16ac649fee3705b2e89416c9 (root) VirtualMap state / ladder-sick-sure-army {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"churn-flame-denial-siren"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"quit-bubble-below-blast"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"color-hip-tennis-visit"}}}
node0 5m 46.258s 2025-12-04 17:34:01.253 8381 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+28+29.884959208Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+32+18.967805924Z_seq1_minr473_maxr5473_orgn0.pces
node0 5m 46.258s 2025-12-04 17:34:01.253 8382 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 710 File: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+32+18.967805924Z_seq1_minr473_maxr5473_orgn0.pces
node0 5m 46.258s 2025-12-04 17:34:01.253 8383 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 5m 46.263s 2025-12-04 17:34:01.258 8384 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 5m 46.263s 2025-12-04 17:34:01.258 8385 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 737 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/737 {"round":737,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/737/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 5m 46.264s 2025-12-04 17:34:01.259 8386 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/65
node2 5m 46.304s 2025-12-04 17:34:01.299 8503 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 737
node2 5m 46.306s 2025-12-04 17:34:01.301 8504 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 737 Timestamp: 2025-12-04T17:34:00.016159952Z Next consensus number: 22529 Legacy running event hash: a883ecf2f5e6544b92a148446cdb2b3cfe5fa75b9523e03d73eeeb6564b25f2f44b7ab9817bdfbe2a6a9a30770584ec3 Legacy running event mnemonic: use-gift-drive-maximum Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1677216484 Root hash: 23e3b9c2993097a003fb79f96b9ac6869f0f3acf898e2245734cbbb08aeaf824b5c4f32a16ac649fee3705b2e89416c9 (root) VirtualMap state / ladder-sick-sure-army {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"churn-flame-denial-siren"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"quit-bubble-below-blast"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"color-hip-tennis-visit"}}}
node2 5m 46.312s 2025-12-04 17:34:01.307 8505 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+28+30.137087296Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+32+18.973803562Z_seq1_minr474_maxr5474_orgn0.pces
node2 5m 46.313s 2025-12-04 17:34:01.308 8506 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 710 File: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+32+18.973803562Z_seq1_minr474_maxr5474_orgn0.pces
node2 5m 46.313s 2025-12-04 17:34:01.308 8507 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 5m 46.317s 2025-12-04 17:34:01.312 8508 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 5m 46.317s 2025-12-04 17:34:01.312 8509 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 737 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/737 {"round":737,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/737/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 5m 46.319s 2025-12-04 17:34:01.314 8510 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/65
node4 5m 55.467s 2025-12-04 17:34:10.462 1 INFO STARTUP <main> StaticPlatformBuilder:
////////////////////// // Node is Starting // //////////////////////
node4 5m 55.557s 2025-12-04 17:34:10.552 2 DEBUG STARTUP <main> StaticPlatformBuilder: main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload]
node4 5m 55.573s 2025-12-04 17:34:10.568 3 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 5m 55.685s 2025-12-04 17:34:10.680 4 INFO STARTUP <main> Browser: The following nodes [4] are set to run locally
node4 5m 55.712s 2025-12-04 17:34:10.707 5 DEBUG STARTUP <main> BootstrapUtils: Scanning the classpath for RuntimeConstructable classes
node4 5m 56.950s 2025-12-04 17:34:11.945 6 DEBUG STARTUP <main> BootstrapUtils: Done with registerConstructables, time taken 1236ms
node4 5m 56.960s 2025-12-04 17:34:11.955 7 INFO STARTUP <main> ConsistencyTestingToolMain: constructor called in Main.
node4 5m 56.964s 2025-12-04 17:34:11.959 8 WARN STARTUP <main> PlatformConfigUtils: Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name.
node4 5m 57.004s 2025-12-04 17:34:11.999 9 INFO STARTUP <main> PrometheusEndpoint: PrometheusEndpoint: Starting server listing on port: 9999
node4 5m 57.069s 2025-12-04 17:34:12.064 10 WARN STARTUP <main> CryptoStatic: There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB.
node4 5m 57.070s 2025-12-04 17:34:12.065 11 DEBUG STARTUP <main> CryptoStatic: Started generating keys
node4 5m 57.894s 2025-12-04 17:34:12.889 12 DEBUG STARTUP <main> CryptoStatic: Done generating keys
node4 5m 57.999s 2025-12-04 17:34:12.994 15 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 58.008s 2025-12-04 17:34:13.003 16 INFO STARTUP <main> StartupStateUtils: The following saved states were found on disk:
- /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/326 - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/194 - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/65 - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2
node4 5m 58.009s 2025-12-04 17:34:13.004 17 INFO STARTUP <main> StartupStateUtils: Loading latest state from disk.
node4 5m 58.009s 2025-12-04 17:34:13.004 18 INFO STARTUP <main> StartupStateUtils: Loading signed state from disk: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/326
node4 5m 58.019s 2025-12-04 17:34:13.014 19 INFO STATE_TO_DISK <main> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp
node4 5m 58.143s 2025-12-04 17:34:13.138 29 INFO STARTUP <main> ConsistencyTestingToolState: New State Constructed.
node4 5m 58.940s 2025-12-04 17:34:13.935 31 INFO STARTUP <main> StartupStateUtils: Loaded state's hash is the same as when it was saved.
node4 5m 58.946s 2025-12-04 17:34:13.941 32 INFO STARTUP <main> StartupStateUtils: Platform has loaded a saved state {"round":326,"consensusTimestamp":"2025-12-04T17:31:00.254963972Z"} [com.swirlds.logging.legacy.payload.SavedStateLoadedPayload]
node4 5m 58.950s 2025-12-04 17:34:13.945 35 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 58.951s 2025-12-04 17:34:13.946 37 INFO STARTUP <main> BootstrapUtils: Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=].
node4 5m 58.955s 2025-12-04 17:34:13.950 39 INFO STARTUP <main> AddressBookInitializer: Using the loaded state's address book and weight values.
node4 5m 58.963s 2025-12-04 17:34:13.958 40 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 5m 58.966s 2025-12-04 17:34:13.961 41 INFO STARTUP <main> ConsistencyTestingToolMain: returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
node4 6.001m 2025-12-04 17:34:15.043 42 INFO STARTUP <main> OSHealthChecker:
PASSED - Clock Source Speed Check Report[callsPerSec=26342178]
PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=270700, randomLong=-5414432435837375306, exception=null]
PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=11620, randomLong=5339266949226880737, exception=null]
PASSED - File System Check Report[code=SUCCESS, readNanos=1432930, data=35, exception=null]
OS Health Check Report - Complete (took 1023 ms)
node4 6.001m 2025-12-04 17:34:15.075 43 DEBUG STARTUP <main> BootstrapUtils: jvmPauseDetectorThread started
node4 6.003m 2025-12-04 17:34:15.204 44 INFO STARTUP <main> PcesUtilities: Span compaction completed for data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+28+30.131990948Z_seq0_minr1_maxr501_orgn0.pces, new upper bound is 386
node4 6.004m 2025-12-04 17:34:15.206 45 INFO STARTUP <main> StandardScratchpad: Scratchpad platform.iss contents:
LAST_ISS_ROUND null
node4 6.004m 2025-12-04 17:34:15.208 46 INFO STARTUP <main> PlatformBuilder: Default platform pool parallelism: 8
node4 6.005m 2025-12-04 17:34:15.294 47 INFO STARTUP <main> SwirldsPlatform: Starting with roster history:
RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "aMZLKQ==", "port": 30124 }, { "ipAddressV4": "CoAAYQ==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "I8EOdg==", "port": 30125 }, { "ipAddressV4": "CoAAYg==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "IirSGg==", "port": 30126 }, { "ipAddressV4": "CoAAXw==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "I+9VTg==", "port": 30127 }, { "ipAddressV4": "CoAAYw==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "Ih814w==", "port": 30128 }, { "ipAddressV4": "CoAAYA==", "port": 30128 }] }] }
node4 6.005m 2025-12-04 17:34:15.318 48 INFO STARTUP <main> ConsistencyTestingToolState: State initialized with state long -6842016838799218383.
node4 6.005m 2025-12-04 17:34:15.319 49 INFO STARTUP <main> ConsistencyTestingToolState: State initialized with 325 rounds handled.
node4 6.005m 2025-12-04 17:34:15.319 50 INFO STARTUP <main> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 6.005m 2025-12-04 17:34:15.319 51 INFO STARTUP <main> TransactionHandlingHistory: Log file found. Parsing previous history
node4 6.006m 2025-12-04 17:34:15.365 52 INFO STARTUP <main> StateInitializer: The platform is using the following initial state:
Round: 326
Timestamp: 2025-12-04T17:31:00.254963972Z
Next consensus number: 11885
Legacy running event hash: f37479c28460ffe28fdad6a0259586c865ca3f2022340b65511502ebe0e54682a12698af2642bd5aaaabac0a04f14978
Legacy running event mnemonic: muscle-debate-inject-adult
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1759060994
Root hash: eefff53a526395f8b27432f70333e091ecb078c623ee01b485ee69a7edcba5728d5bb1ac1fc77b52d534e69ab378b87f
(root) VirtualMap state / cage-flee-erase-donate {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"excuse-song-surface-flame"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"nice-nation-else-permit"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"trigger-alpha-thrive-light"}}}
node4 6.006m 2025-12-04 17:34:15.371 54 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Starting the ReconnectController
node4 6.009m 2025-12-04 17:34:15.558 55 INFO EVENT_STREAM <main> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: f37479c28460ffe28fdad6a0259586c865ca3f2022340b65511502ebe0e54682a12698af2642bd5aaaabac0a04f14978
node4 6.010m 2025-12-04 17:34:15.567 56 INFO STARTUP <platformForkJoinThread-4> Shadowgraph: Shadowgraph starting from expiration threshold 299
node4 6.010m 2025-12-04 17:34:15.571 58 INFO STARTUP <<start-node-4>> ConsistencyTestingToolMain: init called in Main for node 4.
node4 6.010m 2025-12-04 17:34:15.572 59 INFO STARTUP <<start-node-4>> SwirldsPlatform: Starting platform 4
node4 6.010m 2025-12-04 17:34:15.573 60 INFO STARTUP <<platform: recycle-bin-cleanup>> RecycleBinImpl: Deleted 0 files from the recycle bin.
node4 6.010m 2025-12-04 17:34:15.577 61 INFO STARTUP <<start-node-4>> CycleFinder: No cyclical back pressure detected in wiring model.
node4 6.010m 2025-12-04 17:34:15.578 62 INFO STARTUP <<start-node-4>> DirectSchedulerChecks: No illegal direct scheduler use detected in the wiring model.
node4 6.010m 2025-12-04 17:34:15.579 63 INFO STARTUP <<start-node-4>> InputWireChecks: All input wires have been bound.
node4 6.010m 2025-12-04 17:34:15.580 64 INFO STARTUP <<start-node-4>> SwirldsPlatform: replaying preconsensus event stream starting at 299
node4 6.010m 2025-12-04 17:34:15.586 65 INFO PLATFORM_STATUS <platformForkJoinThread-5> StatusStateMachine: Platform spent 159.0 ms in STARTING_UP. Now in REPLAYING_EVENTS
node4 6.014m 2025-12-04 17:34:15.857 66 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:4 H:676b098d1861 BR:324), num remaining: 4
node4 6.014m 2025-12-04 17:34:15.860 67 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:3 H:1caf46b032d5 BR:324), num remaining: 3
node4 6.014m 2025-12-04 17:34:15.860 68 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:2 H:d7062a32db34 BR:325), num remaining: 2
node4 6.014m 2025-12-04 17:34:15.861 69 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:0 H:3f5798bbeff5 BR:325), num remaining: 1
node4 6.014m 2025-12-04 17:34:15.861 70 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:1 H:e0c1b5f4a7bf BR:324), num remaining: 0
node4 6m 1.321s 2025-12-04 17:34:16.316 502 INFO STARTUP <<start-node-4>> PcesReplayer: Replayed 3,189 preconsensus events with max birth round 386. These events contained 4,446 transactions. 59 rounds reached consensus spanning 27.4 seconds of consensus time. The latest round to reach consensus is round 385. Replay took 735.0 milliseconds.
node4 6m 1.325s 2025-12-04 17:34:16.320 503 INFO STARTUP <<app: appMain 4>> ConsistencyTestingToolMain: run called in Main.
node4 6m 1.326s 2025-12-04 17:34:16.321 504 INFO PLATFORM_STATUS <platformForkJoinThread-1> StatusStateMachine: Platform spent 731.0 ms in REPLAYING_EVENTS. Now in OBSERVING
node4 6m 2.206s 2025-12-04 17:34:17.201 611 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Preparing for reconnect, stopping gossip
node4 6m 2.206s 2025-12-04 17:34:17.201 613 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=385,newEventBirthRound=386,ancientThreshold=358,expiredThreshold=299] remote ev=EventWindow[latestConsensusRound=774,newEventBirthRound=775,ancientThreshold=747,expiredThreshold=673]
node4 6m 2.206s 2025-12-04 17:34:17.201 614 INFO RECONNECT <<platform-core: SyncProtocolWith1 4 to 1>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=385,newEventBirthRound=386,ancientThreshold=358,expiredThreshold=299] remote ev=EventWindow[latestConsensusRound=774,newEventBirthRound=775,ancientThreshold=747,expiredThreshold=673]
node4 6m 2.206s 2025-12-04 17:34:17.201 615 INFO RECONNECT <<platform-core: SyncProtocolWith0 4 to 0>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=385,newEventBirthRound=386,ancientThreshold=358,expiredThreshold=299] remote ev=EventWindow[latestConsensusRound=774,newEventBirthRound=775,ancientThreshold=747,expiredThreshold=673]
node4 6m 2.206s 2025-12-04 17:34:17.201 612 INFO RECONNECT <<platform-core: SyncProtocolWith3 4 to 3>> RpcPeerHandler: SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=385,newEventBirthRound=386,ancientThreshold=358,expiredThreshold=299] remote ev=EventWindow[latestConsensusRound=774,newEventBirthRound=775,ancientThreshold=747,expiredThreshold=673]
node4 6m 2.207s 2025-12-04 17:34:17.202 616 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 879.0 ms in OBSERVING. Now in BEHIND
node1 6m 2.276s 2025-12-04 17:34:17.271 8976 INFO RECONNECT <<platform-core: SyncProtocolWith4 1 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=774,newEventBirthRound=775,ancientThreshold=747,expiredThreshold=673] remote ev=EventWindow[latestConsensusRound=385,newEventBirthRound=386,ancientThreshold=358,expiredThreshold=299]
node2 6m 2.276s 2025-12-04 17:34:17.271 8961 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=774,newEventBirthRound=775,ancientThreshold=747,expiredThreshold=673] remote ev=EventWindow[latestConsensusRound=385,newEventBirthRound=386,ancientThreshold=358,expiredThreshold=299]
node3 6m 2.276s 2025-12-04 17:34:17.271 8988 INFO RECONNECT <<platform-core: SyncProtocolWith4 3 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=774,newEventBirthRound=775,ancientThreshold=747,expiredThreshold=673] remote ev=EventWindow[latestConsensusRound=385,newEventBirthRound=386,ancientThreshold=358,expiredThreshold=299]
node0 6m 2.277s 2025-12-04 17:34:17.272 8800 INFO RECONNECT <<platform-core: SyncProtocolWith4 0 to 4>> RpcPeerHandler: OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=774,newEventBirthRound=775,ancientThreshold=747,expiredThreshold=673] remote ev=EventWindow[latestConsensusRound=385,newEventBirthRound=386,ancientThreshold=358,expiredThreshold=299]
node4 6m 2.358s 2025-12-04 17:34:17.353 617 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Preparing for reconnect, start clearing queues
node4 6m 2.360s 2025-12-04 17:34:17.355 618 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Queues have been cleared
node4 6m 2.360s 2025-12-04 17:34:17.355 619 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Waiting for a state to be obtained from a peer
node2 6m 2.453s 2025-12-04 17:34:17.448 8975 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectStateTeacher: Starting reconnect in the role of the sender {"receiving":false,"nodeId":2,"otherNodeId":4,"round":774} [com.swirlds.logging.legacy.payload.ReconnectStartPayload]
node2 6m 2.454s 2025-12-04 17:34:17.449 8976 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectStateTeacher: The following state will be sent to the learner:
Round: 774
Timestamp: 2025-12-04T17:34:16.082974Z
Next consensus number: 23422
Legacy running event hash: 85e3931081d07289a2089c3fb0776f83b2e3b1eb24f3f5332ef01323d4024ef1dbbb3e4efc8cf0feff3b3c4f59191a1c
Legacy running event mnemonic: dry-own-ship-eye
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1898516850
Root hash: 8dc5a7a18c1d30df2c74a605e3009ffd66276eded005a550428cee7269539755a97420bc6289a87502056b7f000ce1e0
(root) VirtualMap state / electric-armor-ladder-brief {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"large-brave-wealth-combine"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"reform-ethics-grief-attend"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"jar-current-absorb-wrong"}}}
node2 6m 2.454s 2025-12-04 17:34:17.449 8977 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectStateTeacher: Sending signatures from nodes 0, 2, 3 (signing weight = 37500000000/50000000000) for state hash 8dc5a7a18c1d30df2c74a605e3009ffd66276eded005a550428cee7269539755a97420bc6289a87502056b7f000ce1e0
node2 6m 2.455s 2025-12-04 17:34:17.450 8978 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectStateTeacher: Starting synchronization in the role of the sender.
node4 6m 2.524s 2025-12-04 17:34:17.519 620 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> ReconnectStatePeerProtocol: Starting reconnect in role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":2,"round":385} [com.swirlds.logging.legacy.payload.ReconnectStartPayload]
node4 6m 2.525s 2025-12-04 17:34:17.520 621 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> ReconnectStateLearner: Receiving signed state signatures
node4 6m 2.525s 2025-12-04 17:34:17.520 622 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> ReconnectStateLearner: Received signatures from nodes 0, 2, 3
node2 6m 2.576s 2025-12-04 17:34:17.571 8994 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> TeachingSynchronizer: sending tree rooted at com.swirlds.virtualmap.VirtualMap with route []
node2 6m 2.584s 2025-12-04 17:34:17.579 8995 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@3a05eafb start run()
node4 6m 2.735s 2025-12-04 17:34:17.730 649 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> LearningSynchronizer: learner calls receiveTree()
node4 6m 2.736s 2025-12-04 17:34:17.731 650 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> LearningSynchronizer: synchronizing tree
node4 6m 2.736s 2025-12-04 17:34:17.731 651 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> LearningSynchronizer: receiving tree rooted at com.swirlds.virtualmap.VirtualMap with route []
node4 6m 2.743s 2025-12-04 17:34:17.738 652 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@58dc45e7 start run()
node4 6m 2.801s 2025-12-04 17:34:17.796 653 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> ReconnectNodeRemover: setPathInformation(): firstLeafPath: 4 -> 4, lastLeafPath: 8 -> 8
node4 6m 2.802s 2025-12-04 17:34:17.797 654 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> ReconnectNodeRemover: setPathInformation(): done
node4 6m 2.955s 2025-12-04 17:34:17.950 655 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread finished the learning loop for the current subtree
node4 6m 2.956s 2025-12-04 17:34:17.951 656 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: call nodeRemover.allNodesReceived()
node4 6m 2.956s 2025-12-04 17:34:17.951 657 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> ReconnectNodeRemover: allNodesReceived()
node4 6m 2.956s 2025-12-04 17:34:17.951 658 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> ReconnectNodeRemover: allNodesReceived(): done
node4 6m 2.956s 2025-12-04 17:34:17.951 659 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: call root.endLearnerReconnect()
node4 6m 2.957s 2025-12-04 17:34:17.952 660 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call reconnectIterator.close()
node4 6m 2.957s 2025-12-04 17:34:17.952 661 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call setHashPrivate()
node4 6m 2.979s 2025-12-04 17:34:17.974 671 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: call postInit()
node4 6m 2.979s 2025-12-04 17:34:17.974 673 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> VirtualMap: endLearnerReconnect() complete
node4 6m 2.980s 2025-12-04 17:34:17.975 674 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushVirtualTreeView: close() complete
node4 6m 2.980s 2025-12-04 17:34:17.975 675 INFO RECONNECT <<work group learning-synchronizer: learner-task #2>> LearnerPushTask: learner thread closed input, output, and view for the current subtree
node4 6m 2.981s 2025-12-04 17:34:17.976 676 INFO RECONNECT <<work group learning-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@58dc45e7 finish run()
node4 6m 2.982s 2025-12-04 17:34:17.977 677 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> LearningSynchronizer: received tree rooted at com.swirlds.virtualmap.VirtualMap with route []
node4 6m 2.982s 2025-12-04 17:34:17.977 678 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> LearningSynchronizer: synchronization complete
node4 6m 2.983s 2025-12-04 17:34:17.978 679 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> LearningSynchronizer: learner calls initialize()
node4 6m 2.983s 2025-12-04 17:34:17.978 680 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> LearningSynchronizer: initializing tree
node4 6m 2.983s 2025-12-04 17:34:17.978 681 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> LearningSynchronizer: initialization complete
node4 6m 2.983s 2025-12-04 17:34:17.978 682 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> LearningSynchronizer: learner calls hash()
node4 6m 2.984s 2025-12-04 17:34:17.979 683 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> LearningSynchronizer: hashing tree
node4 6m 2.984s 2025-12-04 17:34:17.979 684 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> LearningSynchronizer: hashing complete
node4 6m 2.984s 2025-12-04 17:34:17.979 685 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> LearningSynchronizer: learner calls logStatistics()
node4 6m 2.987s 2025-12-04 17:34:17.982 686 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> LearningSynchronizer: Finished synchronization {"timeInSeconds":0.246,"hashTimeInSeconds":0.0,"initializationTimeInSeconds":0.0,"totalNodes":9,"leafNodes":5,"redundantLeafNodes":2,"internalNodes":4,"redundantInternalNodes":0} [com.swirlds.logging.legacy.payload.SynchronizationCompletePayload]
node4 6m 2.987s 2025-12-04 17:34:17.982 687 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> LearningSynchronizer: ReconnectMapMetrics: transfersFromTeacher=9; transfersFromLearner=8; internalHashes=3; internalCleanHashes=0; internalData=0; internalCleanData=0; leafHashes=5; leafCleanHashes=2; leafData=5; leafCleanData=2
node4 6m 2.987s 2025-12-04 17:34:17.982 688 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> LearningSynchronizer: learner is done synchronizing
node4 6m 2.988s 2025-12-04 17:34:17.983 689 INFO STARTUP <<platform-core: SyncProtocolWith2 4 to 2>> ConsistencyTestingToolState: New State Constructed.
node4 6m 2.993s 2025-12-04 17:34:17.988 690 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> ReconnectStateLearner: Reconnect data usage report {"dataMegabytes":0.005863189697265625} [com.swirlds.logging.legacy.payload.ReconnectDataUsagePayload]
node2 6m 3.016s 2025-12-04 17:34:18.011 9009 INFO RECONNECT <<work group teaching-synchronizer: async-input-stream #0>> AsyncInputStream: com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@3a05eafb finish run()
node2 6m 3.017s 2025-12-04 17:34:18.012 9010 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> TeachingSynchronizer: finished sending tree
node2 6m 3.019s 2025-12-04 17:34:18.014 9013 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectStateTeacher: Finished synchronization in the role of the sender.
node2 6m 3.065s 2025-12-04 17:34:18.060 9014 INFO RECONNECT <<platform-core: SyncProtocolWith4 2 to 4>> ReconnectStateTeacher: Finished reconnect in the role of the sender. {"receiving":false,"nodeId":2,"otherNodeId":4,"round":774} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload]
node4 6m 3.093s 2025-12-04 17:34:18.088 691 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> ReconnectStatePeerProtocol: Finished reconnect in the role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":2,"round":774} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload]
node4 6m 3.094s 2025-12-04 17:34:18.089 692 INFO RECONNECT <<platform-core: SyncProtocolWith2 4 to 2>> ReconnectStatePeerProtocol: Information for state received during reconnect:
Round: 774
Timestamp: 2025-12-04T17:34:16.082974Z
Next consensus number: 23422
Legacy running event hash: 85e3931081d07289a2089c3fb0776f83b2e3b1eb24f3f5332ef01323d4024ef1dbbb3e4efc8cf0feff3b3c4f59191a1c
Legacy running event mnemonic: dry-own-ship-eye
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1898516850
Root hash: 8dc5a7a18c1d30df2c74a605e3009ffd66276eded005a550428cee7269539755a97420bc6289a87502056b7f000ce1e0
(root) VirtualMap state / electric-armor-ladder-brief {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"jar-current-absorb-wrong"}}}
node4 6m 3.095s 2025-12-04 17:34:18.090 693 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: A state was obtained from a peer
node4 6m 3.096s 2025-12-04 17:34:18.091 694 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: The state obtained from a peer was validated
node4 6m 3.097s 2025-12-04 17:34:18.092 696 DEBUG RECONNECT <<platform-core: reconnectController>> ReconnectController: `loadState` : reloading state
node4 6m 3.097s 2025-12-04 17:34:18.092 697 INFO STARTUP <<platform-core: reconnectController>> ConsistencyTestingToolState: State initialized with state long 8344434620688507588.
node4 6m 3.098s 2025-12-04 17:34:18.093 698 INFO STARTUP <<platform-core: reconnectController>> ConsistencyTestingToolState: State initialized with 773 rounds handled.
node4 6m 3.098s 2025-12-04 17:34:18.093 699 INFO STARTUP <<platform-core: reconnectController>> TransactionHandlingHistory: Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv
node4 6m 3.098s 2025-12-04 17:34:18.093 700 INFO STARTUP <<platform-core: reconnectController>> TransactionHandlingHistory: Log file found. Parsing previous history
node4 6m 3.123s 2025-12-04 17:34:18.118 705 INFO STATE_TO_DISK <<platform-core: reconnectController>> DefaultSavedStateController: Signed state from round 774 created, will eventually be written to disk, for reason: RECONNECT
node4 6m 3.123s 2025-12-04 17:34:18.118 706 INFO PLATFORM_STATUS <platformForkJoinThread-5> StatusStateMachine: Platform spent 915.0 ms in BEHIND. Now in RECONNECT_COMPLETE
node4 6m 3.124s 2025-12-04 17:34:18.119 708 INFO STARTUP <platformForkJoinThread-1> Shadowgraph: Shadowgraph starting from expiration threshold 747
node4 6m 3.127s 2025-12-04 17:34:18.122 710 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 774 state to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/774
node4 6m 3.129s 2025-12-04 17:34:18.124 711 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for 774
node4 6m 3.133s 2025-12-04 17:34:18.128 713 INFO EVENT_STREAM <<platform-core: reconnectController>> DefaultConsensusEventStream: EventStreamManager::updateRunningHash: 85e3931081d07289a2089c3fb0776f83b2e3b1eb24f3f5332ef01323d4024ef1dbbb3e4efc8cf0feff3b3c4f59191a1c
node4 6m 3.134s 2025-12-04 17:34:18.129 717 INFO STARTUP <platformForkJoinThread-6> PcesFileManager: Due to recent operations on this node, the local preconsensus event stream will have a discontinuity. The last file with the old origin round is 2025-12-04T17+28+30.131990948Z_seq0_minr1_maxr386_orgn0.pces. All future files will have an origin round of 774.
node4 6m 3.135s 2025-12-04 17:34:18.130 719 INFO RECONNECT <<platform-core: reconnectController>> ReconnectController: Reconnect almost done resuming gossip
node4 6m 3.271s 2025-12-04 17:34:18.266 749 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for 774
node4 6m 3.274s 2025-12-04 17:34:18.269 750 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 774 Timestamp: 2025-12-04T17:34:16.082974Z Next consensus number: 23422 Legacy running event hash: 85e3931081d07289a2089c3fb0776f83b2e3b1eb24f3f5332ef01323d4024ef1dbbb3e4efc8cf0feff3b3c4f59191a1c Legacy running event mnemonic: dry-own-ship-eye Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1898516850 Root hash: 8dc5a7a18c1d30df2c74a605e3009ffd66276eded005a550428cee7269539755a97420bc6289a87502056b7f000ce1e0 (root) VirtualMap state / electric-armor-ladder-brief {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"large-brave-wealth-combine"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"reform-ethics-grief-attend"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"jar-current-absorb-wrong"}}}
node4 6m 3.306s 2025-12-04 17:34:18.301 751 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus file on disk.
File: data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+28+30.131990948Z_seq0_minr1_maxr386_orgn0.pces
node4 6m 3.306s 2025-12-04 17:34:18.301 752 WARN STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: No preconsensus event files meeting specified criteria found to copy. Lower bound: 747
node4 6m 3.311s 2025-12-04 17:34:18.306 753 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 774 to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/774 {"round":774,"freezeState":false,"reason":"RECONNECT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/774/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 6m 3.314s 2025-12-04 17:34:18.309 754 INFO PLATFORM_STATUS <platformForkJoinThread-2> StatusStateMachine: Platform spent 189.0 ms in RECONNECT_COMPLETE. Now in CHECKING
node4 6m 3.590s 2025-12-04 17:34:18.585 755 INFO STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ]
node4 6m 3.594s 2025-12-04 17:34:18.589 756 DEBUG STARTUP <<platform-core: MetricsThread #0>> LegacyCsvWriter: CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ]
node4 6m 4.279s 2025-12-04 17:34:19.274 757 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:0 H:df1d016f218d BR:772), num remaining: 3
node4 6m 4.280s 2025-12-04 17:34:19.275 758 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:1 H:8ce184490509 BR:772), num remaining: 2
node4 6m 4.281s 2025-12-04 17:34:19.276 759 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:2 H:66c14f51c093 BR:772), num remaining: 1
node4 6m 4.281s 2025-12-04 17:34:19.276 760 INFO STARTUP <<scheduler ConsensusEngine>> ConsensusImpl: Found init judge (CR:3 H:5a1e51a1b8ea BR:772), num remaining: 0
node4 6m 8.160s 2025-12-04 17:34:23.155 897 INFO PLATFORM_STATUS <platformForkJoinThread-3> StatusStateMachine: Platform spent 4.8 s in CHECKING. Now in ACTIVE
node2 6m 46.019s 2025-12-04 17:35:01.014 10069 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 872 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 6m 46.056s 2025-12-04 17:35:01.051 9881 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 872 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 6m 46.097s 2025-12-04 17:35:01.092 10045 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 872 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node1 6m 46.107s 2025-12-04 17:35:01.102 10037 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 872 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 6m 46.160s 2025-12-04 17:35:01.155 1785 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 872 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 6m 46.270s 2025-12-04 17:35:01.265 9884 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 872 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/872
node0 6m 46.271s 2025-12-04 17:35:01.266 9885 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 872
node3 6m 46.275s 2025-12-04 17:35:01.270 10048 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 872 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/872
node3 6m 46.276s 2025-12-04 17:35:01.271 10049 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 872
node2 6m 46.340s 2025-12-04 17:35:01.335 10072 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 872 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/872
node2 6m 46.341s 2025-12-04 17:35:01.336 10073 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for 872
node0 6m 46.353s 2025-12-04 17:35:01.348 9916 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 872
node0 6m 46.355s 2025-12-04 17:35:01.350 9917 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 872 Timestamp: 2025-12-04T17:35:00.166156Z Next consensus number: 26827 Legacy running event hash: 966b07c749b1351074e831aa9c859d529639f9399a5016b54ddc357a05d0c07f0430648c3ebd8a82516f45d832bab240 Legacy running event mnemonic: crawl-wedding-curtain-pear Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1249228951 Root hash: b412d402343c37e10918cf8169bff2196aa38ca680ef099b279286dc1c20b62f970b20a2c81b1781bc94971121178f49 (root) VirtualMap state / lava-lemon-maple-wire {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"violin-spend-soul-antenna"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"trap-bring-memory-blossom"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"tone-choice-draw-tragic"}}}
node3 6m 46.358s 2025-12-04 17:35:01.353 10080 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 872
node3 6m 46.360s 2025-12-04 17:35:01.355 10081 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 872 Timestamp: 2025-12-04T17:35:00.166156Z Next consensus number: 26827 Legacy running event hash: 966b07c749b1351074e831aa9c859d529639f9399a5016b54ddc357a05d0c07f0430648c3ebd8a82516f45d832bab240 Legacy running event mnemonic: crawl-wedding-curtain-pear Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1249228951 Root hash: b412d402343c37e10918cf8169bff2196aa38ca680ef099b279286dc1c20b62f970b20a2c81b1781bc94971121178f49 (root) VirtualMap state / lava-lemon-maple-wire {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"violin-spend-soul-antenna"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"trap-bring-memory-blossom"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"tone-choice-draw-tragic"}}}
node0 6m 46.363s 2025-12-04 17:35:01.358 9926 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+28+29.884959208Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+32+18.967805924Z_seq1_minr473_maxr5473_orgn0.pces
node0 6m 46.365s 2025-12-04 17:35:01.360 9927 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 845 File: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+32+18.967805924Z_seq1_minr473_maxr5473_orgn0.pces
node0 6m 46.366s 2025-12-04 17:35:01.361 9928 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 6m 46.367s 2025-12-04 17:35:01.362 10082 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+32+18.881235278Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+28+30.242973615Z_seq0_minr1_maxr501_orgn0.pces
node3 6m 46.369s 2025-12-04 17:35:01.364 10083 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 845 File: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+32+18.881235278Z_seq1_minr474_maxr5474_orgn0.pces
node3 6m 46.370s 2025-12-04 17:35:01.365 10084 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 6m 46.373s 2025-12-04 17:35:01.368 9929 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 6m 46.373s 2025-12-04 17:35:01.368 9930 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 872 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/872 {"round":872,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/872/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 6m 46.373s 2025-12-04 17:35:01.368 1788 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 872 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/872
node4 6m 46.374s 2025-12-04 17:35:01.369 1789 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for 872
node0 6m 46.375s 2025-12-04 17:35:01.370 9931 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/194
node3 6m 46.377s 2025-12-04 17:35:01.372 10085 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 6m 46.377s 2025-12-04 17:35:01.372 10086 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 872 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/872 {"round":872,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/872/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 6m 46.379s 2025-12-04 17:35:01.374 10087 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/194
node1 6m 46.410s 2025-12-04 17:35:01.405 10040 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 872 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/872
node1 6m 46.411s 2025-12-04 17:35:01.406 10041 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 872
node2 6m 46.418s 2025-12-04 17:35:01.413 10112 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for 872
node2 6m 46.420s 2025-12-04 17:35:01.415 10113 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 872 Timestamp: 2025-12-04T17:35:00.166156Z Next consensus number: 26827 Legacy running event hash: 966b07c749b1351074e831aa9c859d529639f9399a5016b54ddc357a05d0c07f0430648c3ebd8a82516f45d832bab240 Legacy running event mnemonic: crawl-wedding-curtain-pear Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1249228951 Root hash: b412d402343c37e10918cf8169bff2196aa38ca680ef099b279286dc1c20b62f970b20a2c81b1781bc94971121178f49 (root) VirtualMap state / lava-lemon-maple-wire {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"violin-spend-soul-antenna"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"trap-bring-memory-blossom"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"tone-choice-draw-tragic"}}}
node2 6m 46.428s 2025-12-04 17:35:01.423 10114 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+28+30.137087296Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+32+18.973803562Z_seq1_minr474_maxr5474_orgn0.pces
node2 6m 46.428s 2025-12-04 17:35:01.423 10115 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 845 File: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+32+18.973803562Z_seq1_minr474_maxr5474_orgn0.pces
node2 6m 46.429s 2025-12-04 17:35:01.424 10116 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node2 6m 46.436s 2025-12-04 17:35:01.431 10117 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 6m 46.436s 2025-12-04 17:35:01.431 10118 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 872 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/872 {"round":872,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/872/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 6m 46.438s 2025-12-04 17:35:01.433 10119 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/194
node1 6m 46.486s 2025-12-04 17:35:01.481 10080 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 872
node4 6m 46.486s 2025-12-04 17:35:01.481 1834 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for 872
node1 6m 46.488s 2025-12-04 17:35:01.483 10081 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 872 Timestamp: 2025-12-04T17:35:00.166156Z Next consensus number: 26827 Legacy running event hash: 966b07c749b1351074e831aa9c859d529639f9399a5016b54ddc357a05d0c07f0430648c3ebd8a82516f45d832bab240 Legacy running event mnemonic: crawl-wedding-curtain-pear Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1249228951 Root hash: b412d402343c37e10918cf8169bff2196aa38ca680ef099b279286dc1c20b62f970b20a2c81b1781bc94971121178f49 (root) VirtualMap state / lava-lemon-maple-wire {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"violin-spend-soul-antenna"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"trap-bring-memory-blossom"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"tone-choice-draw-tragic"}}}
node4 6m 46.488s 2025-12-04 17:35:01.483 1835 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 872 Timestamp: 2025-12-04T17:35:00.166156Z Next consensus number: 26827 Legacy running event hash: 966b07c749b1351074e831aa9c859d529639f9399a5016b54ddc357a05d0c07f0430648c3ebd8a82516f45d832bab240 Legacy running event mnemonic: crawl-wedding-curtain-pear Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1249228951 Root hash: b412d402343c37e10918cf8169bff2196aa38ca680ef099b279286dc1c20b62f970b20a2c81b1781bc94971121178f49 (root) VirtualMap state / lava-lemon-maple-wire {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"violin-spend-soul-antenna"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"trap-bring-memory-blossom"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"tone-choice-draw-tragic"}}}
node1 6m 46.495s 2025-12-04 17:35:01.490 10082 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+32+18.932327968Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+28+29.993437732Z_seq0_minr1_maxr501_orgn0.pces
node1 6m 46.497s 2025-12-04 17:35:01.492 10083 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 845 File: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+32+18.932327968Z_seq1_minr474_maxr5474_orgn0.pces
node4 6m 46.497s 2025-12-04 17:35:01.492 1836 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+28+30.131990948Z_seq0_minr1_maxr386_orgn0.pces Last file: data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+34+18.708785868Z_seq1_minr747_maxr1247_orgn774.pces
node1 6m 46.498s 2025-12-04 17:35:01.493 10084 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 6m 46.498s 2025-12-04 17:35:01.493 1837 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 845 File: data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+34+18.708785868Z_seq1_minr747_maxr1247_orgn774.pces
node4 6m 46.498s 2025-12-04 17:35:01.493 1838 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 6m 46.502s 2025-12-04 17:35:01.497 1839 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 6m 46.503s 2025-12-04 17:35:01.498 1840 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 872 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/872 {"round":872,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/872/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 6m 46.505s 2025-12-04 17:35:01.500 10085 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 6m 46.505s 2025-12-04 17:35:01.500 10086 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 872 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/872 {"round":872,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/872/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 6m 46.505s 2025-12-04 17:35:01.500 1841 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/2
node1 6m 46.506s 2025-12-04 17:35:01.501 10087 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/194
node1 7m 46.328s 2025-12-04 17:36:01.323 11538 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1004 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node0 7m 46.336s 2025-12-04 17:36:01.331 11382 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1004 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node2 7m 46.399s 2025-12-04 17:36:01.394 11554 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1004 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node4 7m 46.423s 2025-12-04 17:36:01.418 3274 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1004 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 7m 46.439s 2025-12-04 17:36:01.434 11532 INFO STATE_TO_DISK <<scheduler TransactionHandler>> DefaultSavedStateController: Signed state from round 1004 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT
node3 7m 46.600s 2025-12-04 17:36:01.595 11535 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1004 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1004
node3 7m 46.601s 2025-12-04 17:36:01.596 11536 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for 1004
node2 7m 46.632s 2025-12-04 17:36:01.627 11557 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1004 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1004
node2 7m 46.633s 2025-12-04 17:36:01.628 11558 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/59 for 1004
node0 7m 46.656s 2025-12-04 17:36:01.651 11395 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1004 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1004
node0 7m 46.657s 2025-12-04 17:36:01.652 11396 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for 1004
node3 7m 46.682s 2025-12-04 17:36:01.677 11575 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for 1004
node3 7m 46.684s 2025-12-04 17:36:01.679 11576 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1004 Timestamp: 2025-12-04T17:36:00.416133Z Next consensus number: 31713 Legacy running event hash: 38fc153a7707fd980199ddf761a5733d7bd82d333645f6a707e6812784a8910b9fc905b2f9392593cdc398d5d9cf9cc4 Legacy running event mnemonic: buzz-point-thought-escape Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1923062302 Root hash: 6769caa2049ee59d261f821640a79c3bda2110a186b5f0f241f2ba5919a4b1fe1369acf475decf95c851a951f969d5d2 (root) VirtualMap state / unfold-profit-need-erase {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"cement-garden-later-palm"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"entire-tuition-grant-crisp"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"enable-panther-body-element"}}}
node3 7m 46.691s 2025-12-04 17:36:01.686 11577 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+32+18.881235278Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+28+30.242973615Z_seq0_minr1_maxr501_orgn0.pces
node3 7m 46.691s 2025-12-04 17:36:01.686 11578 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 977 File: data/saved/preconsensus-events/3/2025/12/04/2025-12-04T17+32+18.881235278Z_seq1_minr474_maxr5474_orgn0.pces
node3 7m 46.692s 2025-12-04 17:36:01.687 11579 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node3 7m 46.702s 2025-12-04 17:36:01.697 11580 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node3 7m 46.703s 2025-12-04 17:36:01.698 11581 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1004 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1004 {"round":1004,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1004/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node3 7m 46.704s 2025-12-04 17:36:01.699 11582 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/326
node1 7m 46.707s 2025-12-04 17:36:01.702 11541 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1004 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1004
node1 7m 46.708s 2025-12-04 17:36:01.703 11542 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for 1004
node4 7m 46.709s 2025-12-04 17:36:01.704 3277 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Started writing round 1004 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1004
node4 7m 46.710s 2025-12-04 17:36:01.705 3278 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/17 for 1004
node2 7m 46.716s 2025-12-04 17:36:01.711 11589 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/59 for 1004
node2 7m 46.718s 2025-12-04 17:36:01.713 11590 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1004 Timestamp: 2025-12-04T17:36:00.416133Z Next consensus number: 31713 Legacy running event hash: 38fc153a7707fd980199ddf761a5733d7bd82d333645f6a707e6812784a8910b9fc905b2f9392593cdc398d5d9cf9cc4 Legacy running event mnemonic: buzz-point-thought-escape Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -1923062302 Root hash: 6769caa2049ee59d261f821640a79c3bda2110a186b5f0f241f2ba5919a4b1fe1369acf475decf95c851a951f969d5d2 (root) VirtualMap state / unfold-profit-need-erase {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"cement-garden-later-palm"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"entire-tuition-grant-crisp"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"enable-panther-body-element"}}}
node2 7m 46.727s 2025-12-04 17:36:01.722 11591 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+28+30.137087296Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+32+18.973803562Z_seq1_minr474_maxr5474_orgn0.pces
node2 7m 46.727s 2025-12-04 17:36:01.722 11592 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 977
File: data/saved/preconsensus-events/2/2025/12/04/2025-12-04T17+32+18.973803562Z_seq1_minr474_maxr5474_orgn0.pces
node2 7m 46.727s 2025-12-04 17:36:01.722 11593 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 7m 46.736s 2025-12-04 17:36:01.731 11427 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for 1004
node0 7m 46.738s 2025-12-04 17:36:01.733 11428 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1004
Timestamp: 2025-12-04T17:36:00.416133Z
Next consensus number: 31713
Legacy running event hash: 38fc153a7707fd980199ddf761a5733d7bd82d333645f6a707e6812784a8910b9fc905b2f9392593cdc398d5d9cf9cc4
Legacy running event mnemonic: buzz-point-thought-escape
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1923062302
Root hash: 6769caa2049ee59d261f821640a79c3bda2110a186b5f0f241f2ba5919a4b1fe1369acf475decf95c851a951f969d5d2
(root) VirtualMap state / unfold-profit-need-erase {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"cement-garden-later-palm"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"entire-tuition-grant-crisp"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"enable-panther-body-element"}}}
node2 7m 46.739s 2025-12-04 17:36:01.734 11594 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node2 7m 46.739s 2025-12-04 17:36:01.734 11595 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1004 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1004 {"round":1004,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1004/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node2 7m 46.742s 2025-12-04 17:36:01.737 11596 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/326
node0 7m 46.745s 2025-12-04 17:36:01.740 11429 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+28+29.884959208Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+32+18.967805924Z_seq1_minr473_maxr5473_orgn0.pces
node0 7m 46.746s 2025-12-04 17:36:01.741 11430 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 977
File: data/saved/preconsensus-events/0/2025/12/04/2025-12-04T17+32+18.967805924Z_seq1_minr473_maxr5473_orgn0.pces
node0 7m 46.746s 2025-12-04 17:36:01.741 11431 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node0 7m 46.757s 2025-12-04 17:36:01.752 11432 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node0 7m 46.757s 2025-12-04 17:36:01.752 11433 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1004 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1004 {"round":1004,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1004/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node0 7m 46.759s 2025-12-04 17:36:01.754 11434 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/326
node1 7m 46.784s 2025-12-04 17:36:01.779 11573 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/57 for 1004
node1 7m 46.786s 2025-12-04 17:36:01.781 11574 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1004
Timestamp: 2025-12-04T17:36:00.416133Z
Next consensus number: 31713
Legacy running event hash: 38fc153a7707fd980199ddf761a5733d7bd82d333645f6a707e6812784a8910b9fc905b2f9392593cdc398d5d9cf9cc4
Legacy running event mnemonic: buzz-point-thought-escape
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1923062302
Root hash: 6769caa2049ee59d261f821640a79c3bda2110a186b5f0f241f2ba5919a4b1fe1369acf475decf95c851a951f969d5d2
(root) VirtualMap state / unfold-profit-need-erase {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"cement-garden-later-palm"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"entire-tuition-grant-crisp"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"enable-panther-body-element"}}}
node1 7m 46.792s 2025-12-04 17:36:01.787 11575 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+28+29.993437732Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+32+18.932327968Z_seq1_minr474_maxr5474_orgn0.pces
node1 7m 46.792s 2025-12-04 17:36:01.787 11576 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 977
File: data/saved/preconsensus-events/1/2025/12/04/2025-12-04T17+32+18.932327968Z_seq1_minr474_maxr5474_orgn0.pces
node1 7m 46.792s 2025-12-04 17:36:01.787 11577 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node1 7m 46.803s 2025-12-04 17:36:01.798 11578 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node1 7m 46.804s 2025-12-04 17:36:01.799 11579 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1004 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1004 {"round":1004,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1004/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node1 7m 46.805s 2025-12-04 17:36:01.800 11580 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/326
node4 7m 46.835s 2025-12-04 17:36:01.830 3323 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/17 for 1004
node4 7m 46.837s 2025-12-04 17:36:01.832 3324 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Information for state written to disk:
Round: 1004
Timestamp: 2025-12-04T17:36:00.416133Z
Next consensus number: 31713
Legacy running event hash: 38fc153a7707fd980199ddf761a5733d7bd82d333645f6a707e6812784a8910b9fc905b2f9392593cdc398d5d9cf9cc4
Legacy running event mnemonic: buzz-point-thought-escape
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: -1923062302
Root hash: 6769caa2049ee59d261f821640a79c3bda2110a186b5f0f241f2ba5919a4b1fe1369acf475decf95c851a951f969d5d2
(root) VirtualMap state / unfold-profit-need-erase {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"cement-garden-later-palm"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"entire-tuition-grant-crisp"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"leisure-limit-doctor-adjust"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"enable-panther-body-element"}}}
node4 7m 46.844s 2025-12-04 17:36:01.839 3325 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 2 preconsensus files on disk.
First file: data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+28+30.131990948Z_seq0_minr1_maxr386_orgn0.pces
Last file: data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+34+18.708785868Z_seq1_minr747_maxr1247_orgn774.pces
node4 7m 46.844s 2025-12-04 17:36:01.839 3326 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Found 1 preconsensus event file meeting specified criteria to copy.
Lower bound: 977
File: data/saved/preconsensus-events/4/2025/12/04/2025-12-04T17+34+18.708785868Z_seq1_minr747_maxr1247_orgn774.pces
node4 7m 46.845s 2025-12-04 17:36:01.840 3327 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Copying 1 preconsensus event file(s)
node4 7m 46.851s 2025-12-04 17:36:01.846 3328 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> BestEffortPcesFileCopy: Finished copying 1 preconsensus event file(s)
node4 7m 46.852s 2025-12-04 17:36:01.847 3329 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> SignedStateFileWriter: Finished writing state for round 1004 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1004 {"round":1004,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1004/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload]
node4 7m 46.853s 2025-12-04 17:36:01.848 3330 INFO STATE_TO_DISK <<scheduler StateSnapshotManager>> FileUtils: deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/65
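The snapshot entries above all follow a write-then-publish lifecycle: the state is first materialized in a scratch directory (`swirlds-tmp/…`), then moved into the per-round saved-state directory, and finally the oldest retained round is deleted ("deleting directory .../326"). The sketch below is a minimal, hypothetical Java illustration of that lifecycle; the class `SnapshotSketch`, the methods `writeSnapshot`/`pruneOldRounds`, and the file name `SignedState.swh` are illustrative inventions, not the platform's actual API.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Stream;

public class SnapshotSketch {

    // Write the state payload into a scratch directory first, then rename it
    // into the saved-state tree under its round number, so readers never
    // observe a half-written snapshot.
    static Path writeSnapshot(Path savedDir, Path tmpRoot, long round, String state) throws IOException {
        Path tmp = Files.createTempDirectory(tmpRoot, "snap-");   // like .../swirlds-tmp/57
        Files.writeString(tmp.resolve("SignedState.swh"), state); // hypothetical payload file
        Path target = savedDir.resolve(Long.toString(round));     // like .../123/1004
        Files.move(tmp, target); // a same-filesystem rename publishes the directory in one step
        return target;
    }

    // Retain only the newest `keep` round directories and delete older ones --
    // the counterpart of the "deleting directory .../326" log lines.
    static void pruneOldRounds(Path savedDir, int keep) throws IOException {
        List<Path> rounds;
        try (Stream<Path> s = Files.list(savedDir)) {
            rounds = s.sorted(Comparator.comparingLong(
                    (Path p) -> Long.parseLong(p.getFileName().toString()))).toList();
        }
        for (int i = 0; i < rounds.size() - keep; i++) {
            deleteRecursively(rounds.get(i));
        }
    }

    // Delete a directory tree bottom-up (children before parents).
    static void deleteRecursively(Path dir) throws IOException {
        try (Stream<Path> s = Files.walk(dir)) {
            s.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }
}
```

The temporary directory must live on the same filesystem as the saved-state tree for the rename-style move to succeed, which is consistent with `swirlds-tmp` sitting under the same `data/saved` root in the paths above.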
node3 7m 58.257s 2025-12-04 17:36:13.252 11845 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith1 3 to 1>> NetworkUtils: Connection broken: 3 <- 1
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-04T17:36:13.250284269Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
node4 7m 58.258s 2025-12-04 17:36:13.253 3599 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith1 4 to 1>> NetworkUtils: Connection broken: 4 <- 1
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-04T17:36:13.252099075Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
node3 7m 58.373s 2025-12-04 17:36:13.368 11846 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith0 3 to 0>> NetworkUtils: Connection broken: 3 <- 0
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-04T17:36:13.364098766Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
node4 7m 58.373s 2025-12-04 17:36:13.368 3603 WARN SOCKET_EXCEPTIONS <<platform-core: SyncProtocolWith0 4 to 0>> NetworkUtils: Connection broken: 4 <- 0
com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-04T17:36:13.364170263Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more
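All four connection-broken traces share one shape: the sync protocol runs its socket reader and writer in parallel, the reader's `Connection reset` surfaces as the primary `Caused by` chain, and the writer's `Connection or outbound has closed` is attached as a `Suppressed` exception. The following is a small, self-contained sketch of how such a trace is produced; `ParallelFailureSketch` and `runBoth` are hypothetical names, not the platform's `CachedPoolParallelExecutor`.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFailureSketch {

    // Runs two tasks in parallel. If both fail, the first task's failure
    // becomes the primary exception and the second task's failure is
    // recorded via addSuppressed -- the structure visible in the traces above.
    static Exception runBoth(Callable<Void> reader, Callable<Void> writer) throws InterruptedException {
        ExecutorService pool = Executors.newCachedThreadPool();
        try {
            Future<Void> r = pool.submit(reader);
            Future<Void> w = pool.submit(writer);
            Exception primary = null;
            try {
                r.get(); // ExecutionException here wraps the reader's SocketException
            } catch (ExecutionException e) {
                primary = e;
            }
            try {
                w.get();
            } catch (ExecutionException e) {
                if (primary == null) {
                    primary = e;
                } else {
                    primary.addSuppressed(e); // writer's failure prints as "Suppressed:"
                }
            }
            return primary; // null when both tasks succeeded
        } finally {
            pool.shutdown();
        }
    }
}
```

Printing the returned exception with `printStackTrace()` produces the same `Suppressed:` / `Caused by:` nesting seen in the log entries above.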